
Fairness-Sensitive Policy-Gradient Reinforcement Learning for Reducing Bias in Robotic Assistance

Authors:
Zhu, Jie
Hu, Mengsha
Liang, Xueyao
Zhang, Amy
Jin, Ruoming
Liu, Rui
Publication Year:
2023
Publisher:
arXiv, 2023.

Abstract

Robots assist humans in a wide range of activities, from daily living and public service (e.g., airports and restaurants) to collaborative manufacturing. However, it is risky to assume that the knowledge and strategies robots learn from one group of people will transfer to other groups. Discriminatory robot performance undermines service quality for some people, ignores their service requests, and may even offend them. It is therefore critically important to mitigate bias in robot decision-making to provide fairer services. In this paper, we designed a self-reflective mechanism -- Fairness-Sensitive Policy Gradient Reinforcement Learning (FSPGRL) -- to help robots self-identify biased behaviors during interactions with humans. FSPGRL identifies bias by examining abnormal updates along particular gradients and updates the policy network to support fair robot decision-making. To validate FSPGRL's effectiveness, a human-centered service scenario, "A robot is serving people in a restaurant," was designed. A user study was conducted in which 24 human subjects participated in generating 1,000 service demonstrations. Four commonly seen issues -- the "Willingness Issue," "Priority Issue," "Quality Issue," and "Risk Issue" -- were observed in robot behaviors. By using FSPGRL to improve robot decisions, robots were shown to have a self-bias detection capability that supports fairer service. Bias was suppressed and service quality improved during robot learning, yielding a comparatively fair model.
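The abstract describes FSPGRL only at a high level: bias is detected by examining abnormal updates along particular gradients, and the policy network update is then adjusted. As a rough illustration of that idea only, the Python sketch below shows one way such a check could look in a tabular REINFORCE setting. The toy environment, the z-score detection rule, and every constant are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch inspired by the FSPGRL idea in the abstract:
# policy-gradient updates are screened for "abnormal" components before
# being applied. All names, thresholds, and the toy environment are
# illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 3                 # toy service scenario
theta = np.zeros((N_STATES, N_ACTIONS))    # tabular softmax policy parameters

def policy(state):
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def grad_log_pi(state, action):
    # Gradient of log pi(action | state) w.r.t. theta for a tabular softmax.
    g = np.zeros_like(theta)
    g[state] = -policy(state)
    g[state, action] += 1.0
    return g

# Running statistics of gradient components, used to flag updates that
# deviate strongly from the recent trend (assumed detection rule).
running_mean = np.zeros_like(theta)
running_var = np.ones_like(theta)
BETA, K_SIGMA, LR = 0.99, 3.0, 0.1

def fairness_sensitive_update(state, action, reward):
    global running_mean, running_var
    g = reward * grad_log_pi(state, action)
    z = (g - running_mean) / np.sqrt(running_var + 1e-8)
    abnormal = np.abs(z) > K_SIGMA
    # Suppress abnormal components instead of applying them blindly.
    g_fair = np.where(abnormal, running_mean, g)
    running_mean = BETA * running_mean + (1 - BETA) * g
    running_var = BETA * running_var + (1 - BETA) * (g - running_mean) ** 2
    return LR * g_fair, abnormal.any()

# Toy training loop over simulated service demonstrations.
flags = 0
for step in range(500):
    s = rng.integers(N_STATES)
    a = rng.choice(N_ACTIONS, p=policy(s))
    r = rng.normal(1.0 if a == s % N_ACTIONS else 0.0, 0.1)  # made-up reward
    update, flagged = fairness_sensitive_update(s, a, r)
    flags += int(flagged)
    theta += update
print(f"abnormal-gradient events suppressed: {flags}")

The sketch simply replaces flagged gradient components with their running mean; the actual FSPGRL update rule and its notion of "particular gradients" may differ substantially from this simplification.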

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....d9919c68616d79d53b349cfd24b84f61
Full Text:
https://doi.org/10.48550/arxiv.2306.04167