Disability Ethics and Education in the Age of Artificial Intelligence: Identifying Ability Bias in ChatGPT and Gemini.
- Author
- Urbina JT, Vu PD, and Nguyen MV
- Subjects
- Humans, Bias, Artificial Intelligence ethics, Disabled Persons rehabilitation
- Abstract
Objective: To identify and quantify ability bias in generative artificial intelligence large language model chatbots, specifically OpenAI's ChatGPT and Google's Gemini.
Design: Observational study of language usage in generative artificial intelligence models.
Setting: Investigation-only browser profile restricted to ChatGPT and Gemini.
Participants: Each chatbot generated 60 descriptions of people prompted without specified functional status, 30 descriptions of people with a disability, 30 descriptions of patients with a disability, and 30 descriptions of athletes with a disability (N=300).
Interventions: Not applicable.
Main Outcome Measures: Descriptions generated by the models were parsed into words, which were linguistically classified as favorable qualities or limiting qualities.
Results: Both large language models significantly underrepresented disability in a population of people, and linguistic analysis showed that descriptions of people, patients, and athletes with a disability contained significantly fewer favorable qualities and significantly more limitations than descriptions of people without a disability in both ChatGPT and Gemini.
Conclusions: Generative artificial intelligence chatbots demonstrate quantifiable ability bias and often exclude people with disabilities in their responses. Ethical use of these generative large language model chatbots in medical systems should recognize this limitation, and further consideration should be given to developing equitable artificial intelligence technologies.
(Copyright © 2024 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.)
- Published
- 2025