Enhancing Safety in Learning from Demonstration Algorithms via Control Barrier Function Shielding.

Authors:
Yang, Yue
Chen, Letian
Zaidi, Zulfiqar
van Waveren, Sanne
Krishna, Arjun
Gombolay, Matthew
Source:
ACM/IEEE International Conference on Human-Robot Interaction; Mar 2024, p. 820-829, 10 p.
Publication Year:
2024

Abstract

Learning from Demonstration (LfD) is a powerful method for non-roboticist end-users to teach robots new tasks, enabling them to customize robot behavior. However, modern LfD techniques do not explicitly synthesize safe robot behavior, which limits their deployability in the real world. To enforce safety in LfD without relying on experts, we propose a new framework, ShiElding with Control barrier fUnctions in inverse REinforcement learning (SECURE), which learns a customized Control Barrier Function (CBF) from end-users that prevents robots from taking unsafe actions while imposing little interference with task completion. We evaluate SECURE in three sets of experiments. First, we empirically validate that SECURE learns a high-quality CBF from demonstrations and outperforms conventional LfD methods on simulated robotic and autonomous-driving tasks, improving safety by up to 100%. Second, we demonstrate that roboticists can leverage SECURE to outperform conventional LfD approaches on a real-world knife-cutting, meal-preparation task, improving task completion by 12.5% while driving the number of safety violations to zero. Finally, we demonstrate in a user study that non-roboticists can use SECURE to effectively teach the robot safe policies that avoid collisions with the person and prevent coffee from spilling. [ABSTRACT FROM AUTHOR]
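To make the shielding mechanism concrete, below is a minimal, self-contained Python sketch of the general CBF safety-filter idea the abstract describes: a nominal action from a learned policy is projected onto the nearest action that satisfies a discrete-time CBF condition. This is not the authors' implementation; the dynamics, the barrier function h, the gain ALPHA, the time step DT, and the function names are all illustrative assumptions.

# Minimal sketch of control-barrier-function (CBF) shielding, the general
# mechanism the SECURE abstract describes. NOT the authors' code; the
# dynamics, barrier h, and constants below are hypothetical.
import numpy as np
from scipy.optimize import minimize

ALPHA = 0.5   # class-K gain: how fast the barrier may decay (assumed value)
DT = 0.1      # discrete time step (assumed value)

def dynamics(x, u):
    # Toy single-integrator dynamics x_{t+1} = x_t + u * DT (assumption).
    return x + u * DT

def h(x):
    # Barrier function: h(x) >= 0 on the safe set. Illustrative choice:
    # stay outside a unit-radius obstacle centered at the origin.
    return np.dot(x, x) - 1.0

def shield(x, u_nominal):
    # Project the nominal (learned-policy) action onto the set of actions
    # satisfying the discrete-time CBF condition
    #   h(x_{t+1}) >= (1 - ALPHA * DT) * h(x_t),
    # changing the action as little as possible.
    cbf_constraint = {
        "type": "ineq",
        "fun": lambda u: h(dynamics(x, u)) - (1.0 - ALPHA * DT) * h(x),
    }
    res = minimize(lambda u: np.sum((u - u_nominal) ** 2),
                   x0=u_nominal, constraints=[cbf_constraint])
    return res.x

# Example: the nominal action would drive the robot into the obstacle;
# the shield returns the closest action that keeps h nonnegative.
x = np.array([1.5, 0.0])
u_nominal = np.array([-3.0, 0.0])  # heads straight for the origin
print("nominal:", u_nominal, "shielded:", shield(x, u_nominal))

In SECURE the barrier is learned from end-user demonstrations rather than hand-specified as in this sketch; the minimal-deviation projection is what keeps interference with task completion small, consistent with the abstract's claims.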

Details

Language:
English
Database:
Complementary Index
Journal:
ACM/IEEE International Conference on Human-Robot Interaction
Publication Type:
Conference
Accession Number:
179537384
Full Text:
https://doi.org/10.1145/3610977.3635002