Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming

Authors :
Bhat, Shreyas
Lyons, Joseph B.
Shi, Cong
Yang, X. Jessie
Publication Year :
2023

Abstract

We examine the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of both the human and the robot is driven by a reward function each tries to optimize. We use a new human trust-behavior model that enables the robot to learn and adapt to the human's preferences in real time during the interaction using Bayesian Inverse Reinforcement Learning. We present three strategies for the robot to interact with the human: a non-learner strategy, in which the robot assumes that the human's reward function is the same as its own; a non-adaptive-learner strategy, which learns the human's reward function for performance estimation but still optimizes the robot's own reward function; and an adaptive-learner strategy, which learns the human's reward function for performance estimation and also optimizes this learned reward function. Results show that adapting to the human's reward function yields the highest trust in the robot.

Comment: 6 pages, 6 figures, AAAI Fall Symposium on Agent Teaming in Mixed-Motive Situations
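
For illustration, the sketch below shows one way the three strategies could be realized with Bayesian Inverse Reinforcement Learning over a discrete set of reward hypotheses. It is not the authors' implementation: the linear reward features, the Boltzmann-rational human choice model, and all names (ACTION_FEATURES, candidate_ws, robot_w, beta) are assumptions introduced here for the example.

```python
import numpy as np

# Hypothetical task: each action has a feature vector, and a reward function
# is a weight vector w with reward(a) = w @ features(a).
ACTION_FEATURES = np.array([
    [1.0, 0.2],   # action 0
    [0.3, 0.9],   # action 1
    [0.6, 0.6],   # action 2
])

robot_w = np.array([0.8, 0.2])            # robot's own reward weights
candidate_ws = [np.array([0.8, 0.2]),     # discretized hypotheses about
                np.array([0.2, 0.8]),     # the human's reward weights
                np.array([0.5, 0.5])]
posterior = np.ones(len(candidate_ws)) / len(candidate_ws)

def boltzmann_likelihood(w, action, beta=5.0):
    """P(human picks `action` | weights w), Boltzmann-rational choice model."""
    q = ACTION_FEATURES @ w
    p = np.exp(beta * (q - q.max()))      # shift for numerical stability
    return p[action] / p.sum()

def update_posterior(action):
    """Bayesian IRL step: condition the belief on the human's observed choice."""
    global posterior
    lik = np.array([boltzmann_likelihood(w, action) for w in candidate_ws])
    posterior = posterior * lik
    posterior /= posterior.sum()

def recommend(strategy):
    """Pick the action maximizing the reward function the strategy optimizes."""
    if strategy == "adaptive":
        # Adaptive learner: optimize the learned (posterior-mean) human reward.
        w = sum(p * w_ for p, w_ in zip(posterior, candidate_ws))
    else:
        # Non-learner and non-adaptive learner both optimize the robot's own
        # reward; the non-adaptive learner still maintains the posterior, but
        # only for performance estimation.
        w = robot_w
    return int(np.argmax(ACTION_FEATURES @ w))

# Example interaction loop: observe human choices, update belief, recommend.
for human_action in [1, 1, 2]:
    update_posterior(human_action)
print("adaptive recommendation:   ", recommend("adaptive"))
print("non-adaptive recommendation:", recommend("non-adaptive"))
```

Under this sketch, the non-learner and non-adaptive-learner strategies recommend identical actions; they differ only in that the latter maintains the posterior over the human's reward weights, which the adaptive learner additionally uses to choose its recommendations.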

Subjects :
Computer Science - Robotics

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.05179
Document Type :
Working Paper