Towards Accessibility-Aware Human-Centered Image Segmentation for People with Disabilities


Sponsoring Agency
College of IST


This proposal aims to design accessibility-aware segmentation algorithms that unlock their potential for people with disabilities. Segmentation algorithms partition an image into meaningful regions, assign a label to each region, and are widely used in downstream computer vision tasks such as autonomous navigation. Unfortunately, these algorithms learn from traditional datasets annotated by humans who are unaware of how "seemingly insignificant" objects affect accessibility for people with disabilities. For example, fire hydrants, snow over sidewalks (as opposed to snow over rooftops), and the area near curb cuts (as opposed to the area between the street and sidewalk) all pose navigational challenges for people with vision impairments. Simply put, images in current datasets neither capture the implicit accessibility impact of physical objects nor carry sufficient annotations to learn from. We will address this problem by combining human-centered approaches with computational techniques. Specifically, we will investigate (1) how to encode the real-world experience of people with disabilities into a base dataset; (2) how to develop a scalable algorithm that propagates accessibility-related information from that base dataset to existing datasets used by the computer vision community; and (3) how to design techniques that transfer accessibility-impact annotations from the base dataset to any dataset.
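Purely as an illustration of aim (2), one minimal baseline for propagating accessibility labels is nearest-neighbor matching in an image-embedding space: each unlabeled image in an existing dataset inherits the label of its most similar base-dataset image, provided the similarity clears a threshold. The function name, toy embeddings, labels, and threshold below are all hypothetical assumptions for this sketch, not part of the proposal.

```python
import numpy as np

def propagate_labels(base_embeddings, base_labels, target_embeddings, threshold=0.8):
    """Assign each target item the accessibility label of its most similar
    base item (cosine similarity), or None if no base item is similar enough."""
    # Normalize rows so a dot product equals cosine similarity.
    base = base_embeddings / np.linalg.norm(base_embeddings, axis=1, keepdims=True)
    target = target_embeddings / np.linalg.norm(target_embeddings, axis=1, keepdims=True)
    sims = target @ base.T            # (n_target, n_base) similarity matrix
    nearest = sims.argmax(axis=1)     # index of the most similar base item
    best = sims.max(axis=1)           # its similarity score
    return [base_labels[i] if s >= threshold else None
            for i, s in zip(nearest, best)]

# Toy 3-dim embeddings standing in for real image features.
base_emb = np.array([[1.0, 0.0, 0.0],    # e.g. "fire hydrant blocking path"
                     [0.0, 1.0, 0.0]])   # e.g. "snow over sidewalk"
base_lbl = ["obstacle", "hazard"]
target_emb = np.array([[0.9, 0.1, 0.0],   # close to the first base item
                       [0.0, 0.0, 1.0]])  # unlike anything in the base set
print(propagate_labels(base_emb, base_lbl, target_emb))
```

A real system would use learned image embeddings and a threshold tuned with people with disabilities in the loop; the sketch only shows the propagation step itself.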