1. Likelihood of Questioning AI-Based Recommendations Due to Perceived Racial/Gender Bias
- Author
- Carlos M. Parra, Denis Dennehy, and Manjul Gupta
- Subjects
Multitude, Sample (statistics), Context (language use), General Medicine, Procurement, Service (economics), Health care, Human resources, Business, Psychology, Social psychology
- Abstract
Advances in Artificial Intelligence (AI) are giving rise to a multitude of AI-embedded technologies that increasingly affect all aspects of modern society. Yet there is a paucity of rigorous research on when, and which types of, individuals are more likely to question AI-based recommendations due to perceived racial and gender bias. This study, which is part of a larger research stream, contributes to knowledge through a scenario-based survey administered to a sample of 387 U.S. participants. The findings suggest that, with respect to perceived racial and gender bias, the Human Resource (HR) recruitment and financial product/service procurement scenarios exhibit the highest likelihood of questioning, whereas the healthcare scenario exhibits the lowest. Furthermore, in the context of this study, U.S. participants are more likely to question AI-based recommendations because of perceived racial bias than because of perceived gender bias.
- Published
- 2022