
fairBERTs: Erasing Sensitive Information Through Semantic and Fairness-aware Perturbations

Authors:
Li, Jinfeng
Chen, Yuefeng
Liu, Xiangyu
Huang, Longtao
Zhang, Rong
Xue, Hui
Publication Year:
2024

Abstract

Pre-trained language models (PLMs) have revolutionized both natural language processing research and its applications. However, stereotypical biases (e.g., gender and racial discrimination) encoded in PLMs raise serious ethical concerns and critically limit their broader adoption. To address these unfairness issues, we present fairBERTs, a general framework for learning fair fine-tuned BERT-series models by erasing protected sensitive information through semantic- and fairness-aware perturbations generated by a generative adversarial network. Through extensive qualitative and quantitative experiments on two real-world tasks, we demonstrate that fairBERTs substantially mitigates unfairness while maintaining model utility. We also verify that the adversarial components in fairBERTs can be transferred to other conventionally trained BERT-like models to yield fairness improvements. Our findings may shed light on further research into building fairer fine-tuned PLMs.
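The abstract only sketches the mechanism, but the core idea, adding an adversarially generated perturbation to a BERT-style sentence representation so that a discriminator can no longer recover the protected attribute while the task classifier stays accurate, can be illustrated in code. The following PyTorch sketch is an assumption-based illustration: the module names, hidden size, loss weighting (`lam`), and two-step GAN-style update are hypothetical choices for exposition, not the authors' implementation.

```python
# Minimal sketch (assumptions throughout): a generator adds a
# perturbation to a BERT-style sentence embedding h so that a
# discriminator cannot predict the protected attribute from the
# perturbed representation, while a task head keeps its accuracy.
import torch
import torch.nn as nn

HIDDEN = 768  # typical BERT-base hidden size (assumption)

class PerturbationGenerator(nn.Module):
    """Maps a sentence embedding to a small, semantics-preserving perturbation."""
    def __init__(self, hidden: int = HIDDEN):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, hidden)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

class ProtectedAttrDiscriminator(nn.Module):
    """Tries to predict the protected attribute (e.g., gender) from h."""
    def __init__(self, hidden: int = HIDDEN, n_groups: int = 2):
        super().__init__()
        self.net = nn.Linear(hidden, n_groups)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

def training_step(h, y_task, y_protected, gen, disc, task_head,
                  opt_gen, opt_disc, lam: float = 1.0):
    """One GAN-style alternating update. `opt_gen` is assumed to cover
    both the generator and the task head parameters."""
    ce = nn.CrossEntropyLoss()

    # 1) Discriminator step: learn to recover the protected attribute
    #    from the perturbed representation (generator frozen via detach).
    h_fair = h + gen(h).detach()
    opt_disc.zero_grad()
    ce(disc(h_fair), y_protected).backward()
    opt_disc.step()

    # 2) Generator step: preserve task accuracy while pushing the
    #    discriminator toward chance on the protected label.
    h_fair = h + gen(h)
    task_loss = ce(task_head(h_fair), y_task)
    adv_loss = -ce(disc(h_fair), y_protected)  # fool the discriminator
    opt_gen.zero_grad()
    (task_loss + lam * adv_loss).backward()
    opt_gen.step()
```

In this reading, `h` would come from a fine-tuned BERT encoder (e.g., the [CLS] vector), and the perturbed representation `h + gen(h)` is what downstream heads consume; transferring the trained generator to another BERT-like model corresponds to the transferability experiment the abstract mentions. The actual architectures and objectives are specified only in the paper itself.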

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.08189
Document Type:
Working Paper