
Multimodal Affect Recognition for Assistive Human-Robot Interactions

Authors:
Beno Benhabib
Alexander Hong
Goldie Nejat
Yuma Tsuboi
Source:
2017 Design of Medical Devices Conference.
Publication Year:
2017
Publisher:
American Society of Mechanical Engineers, 2017.

Abstract

Socially assistive robots can provide cognitive assistance with activities of daily living and promote social interaction for people living with cognitive impairments and/or social disorders. They can serve as aids for a number of different populations, including individuals with dementia or autism spectrum disorder, and stroke patients during post-stroke rehabilitation [1]. Our research focuses on developing socially assistive intelligent robots capable of engaging in natural human-robot interactions (HRI). In particular, we have been working on the emotional aspects of these interactions to provide engaging settings, which in turn lead to better acceptance by the intended users. Herein, we present a novel multimodal affect recognition system for the robot Luke, Fig. 1(a), to engage in emotional assistive interactions. Current multimodal affect recognition systems focus mainly on inputs from facial expressions and vocal intonation [2], [3]. Body language has also been used to determine human affect during social interactions, but it has yet to be explored in the development of multimodal recognition systems. Body language is strongly correlated with vocal intonation [4], and the combined modalities provide richer emotional information due to the temporal development underlying the neural interaction in audiovisual perception [5]. In this paper, we present a novel multimodal recognition system that uniquely combines inputs from body language and vocal intonation to autonomously determine user affect during assistive HRI.
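
The abstract does not specify how the body-language and vocal-intonation inputs are combined; the sketch below is only an illustrative example of one common approach (weighted late fusion of per-modality predictions). The label set, the function name fuse_affect, and the body_weight parameter are all hypothetical and are not taken from the paper.

```python
# Illustrative late-fusion sketch (not the authors' method). Each modality
# classifier is assumed to output a probability distribution over a shared,
# hypothetical set of affect categories.
import numpy as np

AFFECT_LABELS = ["positive", "neutral", "negative"]  # assumed label set


def fuse_affect(body_probs, voice_probs, body_weight=0.5):
    """Weighted late fusion of body-language and vocal-intonation predictions.

    body_probs, voice_probs: per-label probabilities from each modality.
    body_weight: assumed relative trust in the body-language modality.
    """
    body = np.asarray(body_probs, dtype=float)
    voice = np.asarray(voice_probs, dtype=float)
    fused = body_weight * body + (1.0 - body_weight) * voice
    fused /= fused.sum()  # renormalize to a valid distribution
    return AFFECT_LABELS[int(np.argmax(fused))], fused


if __name__ == "__main__":
    # Example: body language leans negative, vocal intonation is ambiguous.
    label, probs = fuse_affect([0.1, 0.2, 0.7], [0.3, 0.4, 0.3])
    print(label, probs)
```

In practice, the fusion weights and label set would be chosen and validated against the modalities and affect model actually used by the system; this example only shows the mechanics of combining two modality-level predictions.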

Details

Database:
OpenAIRE
Journal:
2017 Design of Medical Devices Conference
Accession number:
edsair.doi...........f447456ce80f9aa5676caa72445171b7