Towards Fair Affective Robotics: Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition
- Publication Year :
- 2021
Abstract
- As affective robots become integral in human life, these agents must be able to fairly evaluate human affective expressions without discriminating against specific demographic groups. With bias in Machine Learning (ML) systems identified as a critical problem, different approaches have been proposed to mitigate such biases at both the data and algorithmic levels. In this work, we propose Continual Learning (CL) as an effective strategy to enhance fairness in Facial Expression Recognition (FER) systems, guarding against biases arising from imbalances in data distributions. We compare different state-of-the-art bias mitigation approaches with CL-based strategies for fairness on expression recognition and Action Unit (AU) detection tasks using popular benchmarks for each: RAF-DB and BP4D. Our experiments show that CL-based methods, on average, outperform popular bias mitigation techniques, strengthening the need for further investigation into CL for the development of fairer FER algorithms.
- Comment: Accepted at the Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI) at the 16th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2021. arXiv admin note: substantial text overlap with arXiv:2103.08637
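- The abstract does not specify which CL method is used; as a rough illustration of the general idea (regularizing a classifier so that training on data from a new demographic subgroup does not overwrite what was learned on earlier ones), below is a minimal PyTorch sketch of Elastic Weight Consolidation. It is an assumption-laden example of one common CL technique, not the paper's implementation; the model, loaders, and weighting constant are placeholders.

```python
# Hypothetical sketch: EWC-style regularization for a facial-expression
# classifier trained sequentially on demographic subgroups.
# Not the paper's method; purely illustrative.
import torch
import torch.nn.functional as F

def fisher_diagonal(model, loader, device="cpu"):
    """Estimate diagonal Fisher information on data from a previous subgroup."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty keeping parameters close to values learned on
    earlier subgroups, weighted by their estimated importance."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam * penalty

# During training on a new subgroup, the total loss would combine the
# task loss with the consolidation penalty, e.g.:
#   loss = F.cross_entropy(model(x), y) + ewc_penalty(model, fisher, old_params)
```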
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2103.09233
- Document Type :
- Working Paper