1. A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time
- Author
- Beatrice Biancardi, Catherine Pelachaud, Angelo Cafaro, Maurizio Mancini, Chen Wang, Guillaume Chanel (Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université (SU), Centre National de la Recherche Scientifique (CNRS); Università degli Studi di Genova; Swiss Center for Affective Sciences, University of Geneva, Switzerland; Perception, Interaction, Robotique sociales (PIROS))
- Subjects
Computer science, Human-Computer Interaction, Embodied Conversational Agents, Impression Management, First Impressions, Warmth, Competence, Nonverbal communication, Facial Expressions Detection, Facial expression, Perception, Machine learning, Dialog system, Context model, Embodied cognition
- Abstract
This paper presents a computational model for managing the first impressions of warmth and competence that an Embodied Conversational Agent makes on the user. These impressions are important to manage because they can affect users' perception of the agent and their willingness to continue the interaction. The model aims to detect the user's impression of the agent and to produce appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. The user's impressions are recognized with a machine learning approach from facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time using a reinforcement learning algorithm that takes the user's impressions as reward to select the most appropriate combination of verbal and nonverbal behaviour to perform. A user study testing the model in a contextualized interaction with users is also presented. Our hypothesis is that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when the agent does not adapt its behaviour to the user's reactions (i.e., when it selects its behaviours randomly). The study shows a general tendency for the agent to perform better with our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori attitudes toward virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.
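To make the adaptation loop described above more concrete, the following is a minimal sketch, not the authors' implementation: it assumes a placeholder classifier that maps facial action units to an impression score, and a simple epsilon-greedy bandit over combinations of verbal and nonverbal behaviours that uses that score as reward. All names (ImpressionDetector, BehaviourSelector, the behaviour labels and AU values) are illustrative assumptions.

```python
# Sketch of a real-time adaptation loop: detect the user's impression from
# facial action units, then use it as reward to pick the agent's next
# verbal/nonverbal behaviour combination. Hypothetical names throughout.

import random
from itertools import product

# Hypothetical behaviour repertoire: each arm is a (verbal, nonverbal) pair.
VERBAL = ["warm_phrasing", "competent_phrasing"]
NONVERBAL = ["smile", "lean_forward", "neutral_posture"]
BEHAVIOURS = list(product(VERBAL, NONVERBAL))


class ImpressionDetector:
    """Placeholder for a machine-learning model mapping facial action
    units (AUs) to an impression score in [0, 1]."""

    def predict(self, action_units: dict) -> float:
        # A trained classifier/regressor would be called here; this stub
        # simply averages AU intensities as a stand-in.
        return sum(action_units.values()) / max(len(action_units), 1)


class BehaviourSelector:
    """Epsilon-greedy bandit: one value estimate per behaviour combination,
    updated online with the detected impression as reward."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.values = {b: 0.0 for b in BEHAVIOURS}
        self.counts = {b: 0 for b in BEHAVIOURS}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(BEHAVIOURS)          # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, behaviour, reward: float):
        self.counts[behaviour] += 1
        n = self.counts[behaviour]
        # Incremental mean of observed rewards for this behaviour.
        self.values[behaviour] += (reward - self.values[behaviour]) / n


# One interaction step: act, observe the user's face, update.
detector = ImpressionDetector()
selector = BehaviourSelector()
behaviour = selector.select()
observed_aus = {"AU06": 0.7, "AU12": 0.9}  # illustrative AU intensities
reward = detector.predict(observed_aus)
selector.update(behaviour, reward)
```

In this sketch the random condition of the study corresponds to always choosing behaviours with random.choice, while the adaptive condition corresponds to the epsilon-greedy selection driven by the detected impression.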
- Published
- 2019