How Biased is Your Feature?: Computing Fairness Influence Functions with Global Sensitivity Analysis
- Authors
- Ghosh, Bishwamittra (School of Computing, National University of Singapore (NUS)); Basu, Debabrota (Scool, Inria Lille - Nord Europe; CRIStAL UMR 9189, Centrale Lille, Université de Lille, CNRS); Meel, Kuldeep S. (School of Computing, National University of Singapore (NUS))
- Subjects
[INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG], [STAT.AP] Statistics [stat]/Applications [stat.AP], FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Computer Science - Artificial Intelligence (cs.AI), Explainable Artificial Intelligence, Fairness Verification, Fairness AI, Algorithm auditing, Global sensitivity analysis, Variance decomposition
- Abstract
Fairness in machine learning has attracted significant attention due to the widespread application of machine learning in high-stakes decision-making tasks. Unregulated machine learning classifiers can exhibit bias towards certain demographic groups in the data; hence, quantifying and mitigating classifier bias is a central concern in fair machine learning. In this paper, we aim to quantify the influence of different features in a dataset on the bias of a classifier. To this end, we introduce the Fairness Influence Function (FIF), which decomposes bias into contributions from individual features and from intersections of multiple features. The key idea is to represent existing group fairness metrics as the difference of scaled conditional variances of the classifier's prediction and to apply a variance decomposition from global sensitivity analysis. To estimate FIFs, we instantiate an algorithm, FairXplainer, that applies a variance decomposition of the classifier's prediction using local regression. Experiments demonstrate that FairXplainer captures FIFs of both individual and intersectional features, provides a better approximation of bias based on FIFs, shows a higher correlation between FIFs and fairness interventions, and detects changes in bias due to affirmative/punitive fairness actions in the classifier. The code is available at https://github.com/ReAILe/bias-explainer. Proceedings of FAccT, 2023.
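The key step described in the abstract (writing a group fairness metric as a difference of scaled conditional variances and then decomposing that variance) can be sketched as follows. The scaling constants and the exact estimator used in the paper may differ; the notation below is an illustrative assumption rather than the authors' precise formulation. For a classifier \(\hat{Y}\) and a protected attribute \(A\) with groups \(a\) and \(b\), statistical parity is
\[
\mathrm{SP} \;=\; \bigl|\, \Pr[\hat{Y}=1 \mid A=a] \;-\; \Pr[\hat{Y}=1 \mid A=b] \,\bigr| .
\]
A functional ANOVA (Sobol) decomposition splits the group-conditional variance over the non-protected features \(X_1,\dots,X_k\) into components indexed by feature subsets,
\[
\mathrm{Var}[\hat{Y} \mid A=a] \;=\; \sum_{\emptyset \neq S \subseteq \{1,\dots,k\}} V_S^{(a)} ,
\]
where singleton \(S\) gives individual-feature components and larger \(S\) gives intersectional ones. Schematically, the FIF of a feature subset \(S\) is then a scaled difference of the group-wise components,
\[
\mathrm{FIF}_S \;=\; w_a\, V_S^{(a)} \;-\; w_b\, V_S^{(b)}, \qquad \sum_{S} \mathrm{FIF}_S \;=\; \text{bias},
\]
so that summing the FIFs over all subsets recovers the fairness metric being decomposed.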
- Published
- 2022