FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling
- Authors
Ko, Wei-Yin; D'souza, Daniel; Nguyen, Karina; Balestriero, Randall; Hooker, Sara
- Subjects
FOS: Computer and information sciences; Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Machine Learning (stat.ML)
- Abstract
Ensembling independent deep neural networks (DNNs) is a simple and effective way to improve top-line metrics and to outperform larger single models. In this work, we go beyond top-line metrics and instead explore the impact of ensembling on subgroup performance. Surprisingly, even with a simple homogeneous ensemble -- all the individual models share the same training set, architecture, and design choices -- we find compelling and powerful gains in worst-k and minority-group performance, i.e., fairness naturally emerges from ensembling. We show that as more models are added, the performance gains from ensembling continue for far longer for the minority group than for the majority group. Our work establishes that simple DNN ensembles can be a powerful tool for alleviating disparate impact from DNN classifiers, thus curbing algorithmic harm. We also explore why this is the case. We find that even in homogeneous ensembles, varying the sources of stochasticity through parameter initialization, mini-batch sampling, and data-augmentation realizations results in different fairness outcomes.
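The homogeneous ensembling described in the abstract can be sketched in a few lines: K models that differ only in their random seed (a stand-in for initialization, mini-batch, and augmentation stochasticity) produce class probabilities, which are averaged before taking the argmax, and accuracy is then reported per subgroup. The sketch below uses synthetic data and noisy one-hot "predictions" as hypothetical stand-ins for trained DNN members; it illustrates the mechanism only, not the paper's experimental setup.

```python
import numpy as np

# Hypothetical setup: n examples, 3 classes, a binary subgroup label.
rng = np.random.default_rng(0)
n, k_models, n_classes = 200, 10, 3
true_labels = rng.integers(0, n_classes, size=n)
group = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority subgroup

# Stand-in for each ensemble member's predicted probabilities: the true
# one-hot target plus seed-dependent noise, mimicking members that differ
# only through their sources of training stochasticity.
onehot = np.eye(n_classes)[true_labels]
member_probs = []
for seed in range(k_models):
    noise = np.random.default_rng(seed).normal(scale=1.0, size=(n, n_classes))
    logits = 2.0 * onehot + noise
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    member_probs.append(probs)
member_probs = np.stack(member_probs)  # shape (K, n, n_classes)

# Homogeneous deep ensemble: average member probabilities, then argmax.
ensemble_pred = member_probs.mean(axis=0).argmax(axis=1)
single_pred = member_probs[0].argmax(axis=1)  # one member, for comparison

def subgroup_acc(pred: np.ndarray, g: int) -> float:
    """Accuracy restricted to examples whose subgroup label equals g."""
    mask = group == g
    return float((pred[mask] == true_labels[mask]).mean())

for g, name in [(0, "majority"), (1, "minority")]:
    print(f"{name}: single={subgroup_acc(single_pred, g):.3f} "
          f"ensemble={subgroup_acc(ensemble_pred, g):.3f}")
```

Averaging probabilities (rather than hard votes) is one common choice for deep ensembles; because each member's noise is independent, the averaged prediction is less noisy than any single member's.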
- Published
- 2023