1. Automatic annotation of protected attributes to support fairness optimization.
- Authors
- Consuegra-Ayala, Juan Pablo; Gutiérrez, Yoan; Almeida-Cruz, Yudivian; Palomar, Manuel
- Subjects
- Machine learning; Film reviewing; Fairness; Natural language processing; Annotations
- Abstract
Recent research has shown that the unaware automation of high-risk decision-making tasks can result in unfair decisions. The most common approaches to this problem adopt definitions of fairness based on protected attributes. Precise annotation of protected attributes enables the application of bias-mitigation techniques to commonly unlabeled kinds of data (e.g., images, text). This paper proposes a framework to automatically annotate protected attributes in data collections. The framework provides a single interface for annotating protected attributes of different types (e.g., gender, race) and from different kinds of data. Internally, the framework coordinates multiple sensors to produce the final annotation. Several sensors for textual data are proposed, and an optimization search technique is designed to tune the framework to specific domains. Additionally, a small dataset of movie reviews, annotated with gender and sentiment, was created. Evaluation on datasets of texts from diverse domains shows the quality of the annotations and their effectiveness as a proxy for estimating fairness in datasets and machine learning models. The source code is available online for the research community.
• A framework to automatically annotate protected attributes in datasets.
• Techniques to annotate gender in textual collections with fairness considerations.
• Optimization search approach for tuning the framework to custom domains.
• Small dataset of movie reviews annotated with gender and sentiment.
• Effective use of annotations as a proxy to estimate fairness in datasets.
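The abstract describes a framework that coordinates multiple sensors to produce a final protected-attribute annotation, with sensor weights tunable per domain. A minimal sketch of that general idea follows; the sensor functions, toy lexicons, and weights below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: independent "sensors" each guess a protected
# attribute (here, gender from text) and a weighted vote over their
# outputs yields the final annotation. Lexicons and weights are toy
# placeholders, not the paper's real sensors.
from collections import defaultdict

def pronoun_sensor(text):
    """Guess gender from pronoun counts (toy lexicon)."""
    words = text.lower().split()
    he = sum(w in {"he", "him", "his"} for w in words)
    she = sum(w in {"she", "her", "hers"} for w in words)
    if he == she:
        return None  # abstain when there is no signal
    return "male" if he > she else "female"

def name_sensor(text):
    """Guess gender from a tiny first-name list (toy lexicon)."""
    male, female = {"john", "james"}, {"mary", "emma"}
    for w in text.lower().split():
        if w in male:
            return "male"
        if w in female:
            return "female"
    return None  # abstain

def annotate(text, sensors, weights):
    """Weighted vote over sensor outputs; abstaining sensors are ignored."""
    scores = defaultdict(float)
    for sensor, weight in zip(sensors, weights):
        label = sensor(text)
        if label is not None:
            scores[label] += weight
    return max(scores, key=scores.get) if scores else "unknown"

sensors = [pronoun_sensor, name_sensor]
weights = [0.6, 0.4]  # per-domain weights could be tuned by an optimization search
print(annotate("Emma said she loved the film.", sensors, weights))  # female
```

In this sketch the optimization-search step mentioned in the abstract would correspond to tuning the `weights` vector (and any sensor thresholds) against a small labeled sample from the target domain.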
- Published
- 2024