AI systems are being rapidly integrated into core social domains, informing decisions about who gets resources and opportunity, and who doesn’t. These systems, often marketed as smarter, better, and more objective, have been shown repeatedly to produce biased and erroneous outputs. And while much AI bias research and reporting has focused on race and gender, there has been much less attention paid to AI bias and disability.
This is a significant omission. Disabled people have been subject to marginalization both historically and in the present, much of which has excluded them from access to power, resources, and opportunity. These patterns of marginalization are imprinted in the data that shapes AI systems and are embedded in the logics of AI, meaning that those who have been subject to discrimination in the past are most at risk of harm from AI in the present.
Earlier this year, AI Now, in partnership with the NYU Center for Disability Studies and Microsoft, convened a workshop to examine issues of AI bias and disability.
Our latest report, Disability, Bias, and AI, draws from that convening and from a wealth of research by disability advocates and scholars. In it, we examine what disability studies and activism can tell us about the risks and possibilities of AI, and how attention to disability complicates our current approaches to “debiasing” AI. We also identify key questions that point to productive avenues for research and advocacy, including:
- How can we draw from disability activism and scholarship to ensure that we protect people who fall outside of the “norms” reflected and constructed by AI systems?
- What standards of “normal” and “ability” are produced and enforced by specific AI systems, and what are the costs of being understood as an “outlier”?
- How can we better highlight the (often profound) consequences of being diagnosed and pathologized, whether accurately or not, and provide opportunities to “opt out” of automated AI diagnoses before such determinations are made?
- What tactics can those working toward AI accountability learn from the disability rights movement, such as identifying architectural and structural bias across physical and digital spaces?
Centering disability in discussions of AI can help refocus and refine our work to remediate AI’s harms, moving away from a narrow focus on technology and its potential to “solve” and “assist,” and toward approaches that account for social context, history, and the power structures within which AI is produced and deployed.
Read more or download our report below:
Cite as: Whittaker, Meredith, Meryl Alper, Cynthia L. Bennett, Sara Hendren, Elizabeth Kaziunas, Mara Mills, Meredith Ringel Morris, Joy Lisi Rankin, Emily Rogers, Marcel Salas, and Sarah Myers West. “Disability, Bias, and AI.” AI Now Institute, November 20, 2019.