
MABR: A Multilayer Adversarial Bias Removal Approach Without Prior Bias Knowledge

Authors:
Yin, Maxwell J.
Wang, Boyu
Ling, Charles
Publication Year:
2024

Abstract

Models trained on real-world data often mirror and exacerbate existing social biases. Traditional methods for mitigating these biases typically require prior knowledge of the specific biases to be addressed, such as gender or racial bias, as well as the social group associated with each instance. In this paper, we introduce a novel adversarial training strategy that operates without prior knowledge of bias types or protected-attribute labels. Our approach proactively identifies biases during training by means of auxiliary models, which are trained concurrently to predict the performance of the main model without relying on task labels. We attach these auxiliary models at multiple levels of the main model's feature maps, enabling the detection of a broader and more nuanced range of bias features. In experiments on racial and gender bias in sentiment and occupation classification tasks, our method effectively reduces social bias without the need for demographic annotations. Moreover, it matches and often surpasses the efficacy of methods that do require detailed demographic information, marking a significant advancement in bias-mitigation techniques.
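The core idea in the abstract — an auxiliary model that predicts the main model's per-example performance from shared features, with its gradient reversed into the encoder — can be sketched in a toy form. This is an illustrative assumption-laden sketch, not the paper's implementation: it uses a single hidden layer rather than multiple feature-map levels, linear heads, and a reversal weight `lam` chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 features, binary task labels.
n, d, k = 200, 5, 4
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)

# Parameters: shared encoder W1, main task head w2, and an
# auxiliary "adversary" head wa that tries to predict the
# main model's per-example loss from the hidden features.
W1 = rng.normal(scale=0.1, size=(d, k))
w2 = rng.normal(scale=0.1, size=k)
wa = rng.normal(scale=0.1, size=k)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 0.1, 0.5  # lam scales the reversed adversary gradient (assumed value)
for step in range(100):
    h = X @ W1                      # shared hidden features
    p = sigmoid(h @ w2)             # main-task predictions
    # Per-example cross-entropy: the auxiliary model's regression target.
    per_ex = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    s = h @ wa                      # adversary's predicted per-example loss

    # Gradients of the main-task loss (mean BCE).
    g_p = (p - y) / n
    g_w2 = h.T @ g_p
    g_h_main = np.outer(g_p, w2)

    # Gradients of the adversary loss (mean squared error;
    # the targets per_ex are treated as constants, i.e. "detached").
    g_s = 2.0 * (s - per_ex) / n
    g_wa = h.T @ g_s
    g_h_adv = np.outer(g_s, wa)

    # Updates: the adversary minimizes its own loss, while the encoder
    # receives the adversary's gradient with REVERSED sign, pushing it
    # toward features from which performance cannot be predicted.
    wa -= lr * g_wa
    w2 -= lr * g_w2
    W1 -= lr * X.T @ (g_h_main - lam * g_h_adv)
```

The key property is that no protected-attribute labels appear anywhere: the auxiliary head is supervised only by the main model's own per-example loss, matching the abstract's claim of bias detection without demographic annotations.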

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.05497
Document Type:
Working Paper