Gendered competencies and gender composition: A human versus algorithm evaluator comparison.
- Author
- Merritt, Stephanie M., Ryan, Ann Marie, Gardner, Cari, Liff, Joshua, and Mondragon, Nathan
- Subjects
- GENDER; ARTIFICIAL intelligence; ALGORITHMS; HUMAN beings
- Abstract
The rise of AI-based assessments in hiring contexts has led to significant media speculation regarding their role in exacerbating or mitigating employment inequities. In this study, we examined 46,214 ratings from 4,947 interviews to ascertain whether gender differences in ratings were related to interactions among content (stereotype-relevant competencies), context (occupational gender composition), and rater type (human vs. algorithm). Contrary to the hypothesis that algorithmic scoring would show smaller gender differences than human raters, we found that both human and algorithmic ratings of men on agentic competencies were higher than those given to women. Also unexpectedly, algorithmic scoring evidenced greater gender differences than humans in communal ratings (with women rated higher than men) and similar differences in non-stereotypic competency ratings that ran in the opposite direction (humans rated men higher than women, while algorithms rated women higher than men). In more female-dominated occupations, humans tended to rate applicants as generally less competent overall relative to the algorithms, but algorithms rated men more highly in these occupations. Implications for auditing for group differences in selection contexts are discussed.
- Practitioner points
- Patterns of gender differences in ratings made by humans and algorithms varied across competency types.
- Gender difference patterns also varied by the jobs' gender composition.
- Investigations of rating differences between humans and algorithms should consider interview content and context.
- Published
- 2024