A Comprehensive Study on Evaluating and Mitigating Algorithmic Unfairness with the MADD Metric
- Authors
Melina Verger, Chunyang Fan, Sébastien Lallé, François Bouchet, and Vanda Luengo
- Abstract
Predictive student models are increasingly used in learning environments due to their ability to enhance educational outcomes and support stakeholders in making informed decisions. However, predictive models can be biased and produce unfair outcomes, leading to potential discrimination against certain individuals and harmful long-term implications. This has prompted research on fairness metrics meant to capture and quantify such biases. Nonetheless, current metrics focus primarily on comparing predictive performance between groups, without considering the behavior of the models or the severity of the biases in the outcomes. To address this gap, in a previous work (Verger et al., 2023) we proposed a novel metric named "Model Absolute Density Distance" (MADD), which measures algorithmic unfairness as the distance between the probability distributions of the model's outcomes for different groups. In this paper, we extend that work with two major additions. First, we provide theoretical and practical considerations on a hyperparameter of MADD, named "bandwidth," which is key to measuring fairness optimally with this metric. Second, we demonstrate how MADD can be used not only to measure unfairness but also to mitigate it, through postprocessing of the model's outcomes that preserves its accuracy. We experimented with our approach on the same task as in our previous work, predicting student success in online courses, and obtained successful results. To facilitate replication and future uses of MADD in different contexts, we developed an open-source Python package called maddlib (https://pypi.org/project/maddlib/). Altogether, our work contributes to advancing research on fair student models in education.
- Published
2024
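As a rough illustration of the metric described in the abstract, here is a minimal Python sketch, assuming MADD is computed as the L1 distance between two groups' binned densities of predicted probabilities, with the "bandwidth" hyperparameter taken as the bin width. This is not the maddlib API; the function name `madd` and all variable names are ours, and the paper should be consulted for the exact definition.

```python
import numpy as np

def madd(proba_group0, proba_group1, bandwidth=0.05):
    """Sketch of the Model Absolute Density Distance (MADD).

    Discretizes [0, 1] into bins of width `bandwidth`, estimates each
    group's density of predicted probabilities as a normalized histogram,
    and sums the absolute differences between the two densities. The
    result lies in [0, 2]: 0 for identical distributions, 2 for fully
    disjoint ones.
    """
    # Bin edges covering the probability range [0, 1].
    n_bins = int(np.ceil(1.0 / bandwidth))
    edges = np.linspace(0.0, 1.0, n_bins + 1)

    # Normalized histograms: each sums to 1 (a discrete density).
    d0, _ = np.histogram(proba_group0, bins=edges)
    d1, _ = np.histogram(proba_group1, bins=edges)
    d0 = d0 / d0.sum()
    d1 = d1 / d1.sum()

    # MADD as the L1 distance between the two discrete densities.
    return np.abs(d0 - d1).sum()

# Example: two groups whose predicted-success probabilities differ,
# drawn from skewed Beta distributions for illustration only.
rng = np.random.default_rng(0)
p0 = rng.beta(2, 5, size=500)  # hypothetical group 0
p1 = rng.beta(5, 2, size=500)  # hypothetical group 1
print(madd(p0, p1, bandwidth=0.05))
```

In this sketch, a smaller `bandwidth` yields finer bins and a more sensitive but noisier comparison, which is why the choice of bandwidth warrants the theoretical and practical treatment the paper describes.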