
Marked Attribute Bias in Natural Language Inference

Authors :
Dawkins, Hillary
Source :
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 4214-4226 (2021)
Publication Year :
2021

Abstract

Reporting and providing test sets for harmful bias in NLP applications is essential for building a robust understanding of the current problem. We present a new observation of gender bias in a downstream NLP application: marked attribute bias in natural language inference. Bias in downstream applications can stem from training data or word embeddings, or be amplified by the model in use; however, focusing on biased word embeddings is potentially the most impactful first step due to their universal nature. Here we seek to understand how the intrinsic properties of word embeddings contribute to this observed marked attribute effect, and whether current post-processing methods address the bias successfully. An investigation of the current debiasing landscape reveals two open problems: none of the current debiased embeddings mitigate the marked attribute error, and none of the intrinsic bias measures are predictive of the marked attribute effect. Noticing that a new type of intrinsic bias measure correlates meaningfully with the marked attribute effect, we propose a new post-processing debiasing scheme for static word embeddings. Applied to existing embeddings, the proposed method achieves new best results on the marked attribute bias test set. See https://github.com/hillary-dawkins/MAB.

Comment: 9+4 pages. Published in Findings of the ACL (ACL-IJCNLP 2021)
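For context on what "post-processing debiasing of static word embeddings" generally looks like, below is a minimal, hypothetical sketch of the classic projection-based approach (estimate a bias direction from definitional word pairs, then remove each vector's component along it, in the style of Bolukbasi et al. 2016). This is not the paper's proposed scheme, and the function names and word pairs are illustrative assumptions; the actual method is available at the GitHub link above.

```python
# Hypothetical sketch of projection-based post-processing for static word
# embeddings (hard-debiasing style). NOT the paper's proposed method.
import numpy as np

def estimate_bias_direction(emb, pairs=(("he", "she"), ("man", "woman"), ("him", "her"))):
    """Approximate a bias direction as the mean of difference vectors over
    definitional pairs (a simplification of the PCA-based original)."""
    diffs = [emb[a] - emb[b] for a, b in pairs if a in emb and b in emb]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def neutralize(vec, direction):
    """Remove the component of `vec` along the bias direction, then renormalize."""
    v = vec - np.dot(vec, direction) * direction
    return v / np.linalg.norm(v)

# Usage (assuming `emb` maps words to vectors, e.g. loaded GloVe embeddings):
# d = estimate_bias_direction(emb)
# emb = {w: neutralize(v, d) for w, v in emb.items()}
```

One limitation of this projection-style baseline, which motivates work like the paper above, is that intrinsic measures of the removed bias do not necessarily predict bias observed downstream, e.g. in natural language inference.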

Details

Database :
arXiv
Journal :
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 4214-4226 (2021)
Publication Type :
Report
Accession number :
edsarx.2109.14039
Document Type :
Working Paper
Full Text :
https://doi.org/10.18653/v1/2021.findings-acl.369