1. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach
- Author
Ferrario, Andrea, Termine, Alberto, and Facchini, Alessandro
- Subjects
Computer Science - Artificial Intelligence
- Abstract
Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors, cases of epistemic injustice, and unwarranted trust. To address these issues, we propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users. This addition aims to bridge the gap between LLM capabilities and user perceptions, promoting the ethically responsible development and use of LLM-based technology.
- Comment
Extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)
- Published
2024