
Optimising Equal Opportunity Fairness in Model Training

Authors:
Shen, Aili
Han, Xudong
Cohn, Trevor
Baldwin, Timothy
Frermann, Lea
Publication Year:
2022

Abstract

Real-world datasets often encode stereotypes and societal biases. Such biases can be implicitly captured by trained models, leading to biased predictions and exacerbating existing societal preconceptions. Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias. However, a disconnect between fairness criteria and training objectives makes it difficult to reason theoretically about the effectiveness of different techniques. In this work, we propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.

Comment: Accepted to NAACL 2022 main conference
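The abstract stops at the high-level idea. As a hedged illustration of what "directly optimising for equal opportunity" can look like in practice, the sketch below adds a differentiable equal-opportunity gap penalty to a standard cross-entropy loss. This is not the paper's actual objective: the function eo_regularised_loss, the binary groups indicator, and the weight lam are hypothetical names introduced here for exposition.

```python
# Illustrative sketch only, not the paper's objectives.
# Equal opportunity asks that true positive rates match across protected
# groups; here the mean gold-class probability per group stands in as a
# soft, differentiable proxy for the TPR.
import torch
import torch.nn.functional as F

def eo_regularised_loss(logits, labels, groups, lam=1.0):
    """Cross-entropy plus an equal-opportunity gap penalty.

    logits: (batch, num_classes) model outputs
    labels: (batch,) gold class indices
    groups: (batch,) binary protected-attribute indicator (0 or 1)
    lam:    weight of the fairness penalty (hypothetical hyperparameter)
    """
    ce = F.cross_entropy(logits, labels)

    # Probability assigned to the gold class for each instance.
    probs = F.softmax(logits, dim=-1)
    p_gold = probs.gather(1, labels.unsqueeze(1)).squeeze(1)

    # Absolute gap in mean gold-class probability between the two groups.
    gap = torch.tensor(0.0, device=logits.device)
    mask0, mask1 = groups == 0, groups == 1
    if mask0.any() and mask1.any():
        gap = (p_gold[mask0].mean() - p_gold[mask1].mean()).abs()

    return ce + lam * gap
```

Using soft probabilities rather than hard predictions keeps the gap term differentiable, so it can be minimised jointly with the task loss by gradient descent; the paper's actual objectives may differ.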

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2205.02393
Document Type:
Working Paper