
The Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents

Authors:
Sumit Srivastava
Mariët Theune
Alejandro Catala
Affiliation:
Human Media Interaction
Source:
IUI '23: 28th International Conference on Intelligent User Interfaces
Publication Year:
2023
Publisher:
ACM, 2023.

Abstract

Explainable Artificial Intelligence (XAI) focuses on research and technology that can explain an AI system’s functioning and its underlying methods, and also on making these explanations better through personalization. Our study investigates lexical alignment, a natural language personalization method, in the context of understanding an explanation provided by a conversational agent. The study was conducted online and guided participants through an interaction with a conversational agent. Participants interacted either with an agent designed to align its lexical choices with theirs, with a misaligned agent, or with a control condition that involved no dialogue. The dialogue delivered an explanation based on a pre-defined set of causes and effects. Recall and understanding of the explanations were evaluated using a combination of Yes-No questions, a Cloze test (fill-in-the-blanks), and What-style questions. Analysis of the test scores revealed a significant advantage in information recall for participants who interacted with the aligning agent over those who either interacted with the non-aligning agent or went through no dialogue. The Yes-No questions, which included probes of higher-order inferences (understanding), also showed an advantage for participants in the aligned-dialogue condition over both the non-aligned and no-dialogue conditions. Overall, the results suggest a positive effect of lexical alignment on the understanding of explanations.
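The record does not describe how the aligning agent was implemented, but the core idea of lexical alignment, reusing the interlocutor's word choices, can be illustrated with a minimal Python sketch. The `SYNONYMS` table and the functions `build_lexicon` and `align_response` below are hypothetical and not taken from the paper; a real system would use a broader lexicon and proper tokenization.

```python
# Hypothetical sketch of lexical alignment: track which term a participant
# uses for each concept, then reuse that term in the agent's own responses.
# The synonym table is illustrative only; case handling is simplified.

SYNONYMS = {
    "car": {"car", "automobile", "vehicle"},
    "doctor": {"doctor", "physician"},
}

def build_lexicon(user_utterance: str) -> dict[str, str]:
    """Record the participant's preferred term for each known concept."""
    preferred = {}
    for token in user_utterance.lower().split():
        token = token.strip(".,!?\"'")  # drop adjacent punctuation
        for concept, variants in SYNONYMS.items():
            if token in variants:
                preferred[concept] = token
    return preferred

def align_response(agent_response: str, preferred: dict[str, str]) -> str:
    """Rewrite the agent's default wording to match the participant's terms."""
    aligned = []
    for token in agent_response.split():
        bare = token.lower().strip(".,!?\"'")
        replacement = token
        for concept, variants in SYNONYMS.items():
            if bare in variants and concept in preferred:
                # Swap in the participant's term, keeping trailing punctuation.
                replacement = token.lower().replace(bare, preferred[concept])
        aligned.append(replacement)
    return " ".join(aligned)

if __name__ == "__main__":
    prefs = build_lexicon("My automobile broke down, so I called a physician.")
    print(align_response("The car was fixed and the doctor left.", prefs))
    # -> "The automobile was fixed and the physician left."
```

A misaligned agent, as in the study's second condition, could be sketched by inverting this logic: deliberately choosing a synonym the participant has not used.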

Details

Database:
OpenAIRE
Journal:
Proceedings of the 28th International Conference on Intelligent User Interfaces
Accession number:
edsair.doi.dedup.....946e74ca107c31e30e47cc8a14851f67