Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies
- Authors
Kim, Sunnie S. Y., Vaughan, Jennifer Wortman, Liao, Q. Vera, Lombrozo, Tania, and Russakovsky, Olga
- Subjects
Computer Science - Human-Computer Interaction, Computer Science - Artificial Intelligence
- Abstract
Large language models (LLMs) can produce erroneous responses that sound fluent and convincing, raising the risk that users will rely on these responses as if they were correct. Mitigating such overreliance is a key challenge. Through a think-aloud study in which participants use an LLM-infused application to answer objective questions, we identify several features of LLM responses that shape users' reliance: explanations (supporting details for answers), inconsistencies in explanations, and sources. Through a large-scale, pre-registered, controlled experiment (N=308), we isolate and study the effects of these features on users' reliance, accuracy, and other measures. We find that the presence of explanations increases reliance on both correct and incorrect responses. However, we observe less reliance on incorrect responses when sources are provided or when explanations exhibit inconsistencies. We discuss the implications of these findings for fostering appropriate reliance on LLMs.
- Comment
CHI 2025. This version includes the appendix.
- Published
2025