Toward Fairness in AI for People with Disabilities: A Research Roadmap
- Anhong Guo
- Ece Kamar
- Jennifer Wortman Vaughan
- Hanna Wallach
- Meredith Ringel Morris
ASSETS 2019 Workshop on AI Fairness for People with Disabilities | Organized by ACM
AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD). Indeed, improving the lives of PWD is a motivator for many state-of-the-art AI systems, such as automated speech recognition tools that can caption videos for people who are deaf or hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities. However, widely deployed AI systems may not work properly for PWD, or worse, may actively discriminate against them. These considerations regarding fairness in AI for PWD have thus far received little attention. In this position paper, we identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing. We intend for this risk assessment of how various classes of AI might interact with various classes of disability to provide a roadmap for future research that is needed to gather data, test these hypotheses, and build more inclusive algorithms.
Our Responsibility: Disability, Bias, and AI
Presented by Natasha Crampton and Meredith Ringel Morris at Microsoft's 2020 Ability Summit. AI offers tremendous potential for empowering people with disabilities and is already delivering on that promise. Yet AI also raises new challenges related to fairness and inclusion, which need to be identified and mitigated in a principled and intentional way. Learn about Microsoft's approach to responsible AI, as well as some key research directions for AI and accessibility.