Undesirable Biases in NLP: Addressing Challenges of Measurement.

Authors :
van der Wal, Oskar
Bachmann, Dominik
Leidinger, Alina
van Maanen, Leendert
Zuidema, Willem
Schulz, Katrin
Source :
Journal of Artificial Intelligence Research; 2024, Vol. 79, p1-40, 40p
Publication Year :
2024

Abstract

As Large Language Models and Natural Language Processing (NLP) technology rapidly develop and spread into daily life, it becomes crucial to anticipate how their use could harm people. One problem that has received much attention in recent years is that this technology has displayed harmful biases, from generating derogatory stereotypes to producing disparate outcomes for different social groups. Although considerable effort has been invested in assessing and mitigating these biases, our methods of measuring the biases of NLP models have serious problems, and it is often unclear what they actually measure. In this paper, we provide an interdisciplinary approach to discussing the issue of NLP model bias by adopting the lens of psychometrics, a field specialized in the measurement of concepts like bias that are not directly observable. In particular, we explore two central notions from psychometrics, the construct validity and the reliability of measurement tools, and discuss how they can be applied in the context of measuring model bias. Our goal is to provide NLP practitioners with methodological tools for designing better bias measures, and to inspire them more generally to explore psychometric tools when working on bias measurement.
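
To make the reliability notion from the abstract concrete, below is a minimal sketch (not taken from the paper) of how one might estimate the internal-consistency reliability of a template-based bias measure using Cronbach's alpha. The data is synthetic and every name is illustrative; each "subject" is a language model and each "item" is one prompt template used to elicit a bias score.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        # scores: (n_models x n_templates) matrix of bias scores.
        # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
        n_items = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (n_items / (n_items - 1)) * (1.0 - item_variances / total_variance)

    # Synthetic example: bias scores for 5 models measured with 8 prompt templates.
    rng = np.random.default_rng(0)
    true_bias = rng.normal(0.0, 1.0, size=(5, 1))   # latent per-model bias
    noise = rng.normal(0.0, 0.5, size=(5, 8))       # template-specific noise
    scores = true_bias + noise

    print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

A low alpha here would suggest that the prompt templates do not measure the same underlying construct consistently, which is one of the measurement problems the paper discusses.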

Details

Language :
English
ISSN :
1076-9757
Volume :
79
Database :
Complementary Index
Journal :
Journal of Artificial Intelligence Research
Publication Type :
Academic Journal
Accession number :
177916311
Full Text :
https://doi.org/10.1613/jair.1.15195