
Enhancing Robot Explanation Capabilities through Vision-Language Models: a Preliminary Study by Interpreting Visual Inputs for Improved Human-Robot Interaction

Authors :
Sobrín-Hidalgo, David
González-Santamarta, Miguel Ángel
Guerrero-Higueras, Ángel Manuel
Rodríguez-Lera, Francisco Javier
Matellán-Olivera, Vicente
Publication Year :
2024

Abstract

This paper presents an improved system based on our prior work, designed to create explanations for autonomous robot actions during Human-Robot Interaction (HRI). Previously, we developed a system that used Large Language Models (LLMs) to interpret logs and produce natural language explanations. In this study, we expand our approach by incorporating Vision-Language Models (VLMs), enabling the system to analyze textual logs with the added context of visual input. This method allows for generating explanations that combine data from the robot's logs and the images it captures. We tested this enhanced system on a basic navigation task where the robot needs to avoid a human obstacle. The findings from this preliminary study indicate that adding visual interpretation improves our system's explanations by precisely identifying obstacles and increasing the accuracy of the explanations provided.

Comment: 5 pages, 4 figures. This paper is a preprint of an article submitted to the Robot Trust for Symbiotic Societies (RTSS) workshop (ICRA 2024).
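As an illustration only (not taken from the paper), the following Python sketch shows how a robot log excerpt and a camera frame could be sent together to a vision-capable chat model to obtain such an explanation. The model name, prompt wording, helper function, and use of an OpenAI-style API are assumptions; the abstract does not state which VLM the authors employ.

    # Hypothetical sketch: combine a robot log excerpt with a camera frame and ask a
    # vision-capable chat model to explain the robot's behaviour in natural language.
    # Assumes an OpenAI-style API; the paper does not specify the VLM actually used.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def explain_robot_action(log_excerpt: str, image_path: str) -> str:
        """Return a natural-language explanation grounded in the log and the image."""
        # Encode the camera frame so it can be passed inline as a data URL.
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder vision-language model
            messages=[
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": (
                                "You are an explainability module for an autonomous robot. "
                                "Using the navigation log below and the attached camera frame, "
                                "explain in plain language why the robot changed its path.\n\n"
                                f"Log:\n{log_excerpt}"
                            ),
                        },
                        {
                            "type": "image_url",
                            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                        },
                    ],
                }
            ],
        )
        return response.choices[0].message.content

    # Example call with a hypothetical log line and frame:
    # print(explain_robot_action("[nav] replanning: obstacle detected at 1.2 m", "frame_042.jpg"))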

Subjects :
Computer Science - Robotics

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.09705
Document Type :
Working Paper