
A qualitative field study on explainable AI for lay users subjected to AI cyberattacks

Authors:
McAreavey, Kevin
Liu, Weiru
Bauters, Kim
Ivory, Dennis
Loukas, George
Panaousis, Manos
Chen, Hsueh-Ju
Gill, Rea
Payler, Rachael
Vasalou, Asimina
Publication Year:
2024

Abstract

In this paper we present results from a qualitative field study on explainable AI (XAI) for lay users (n = 18) who were subjected to AI cyberattacks. The study was based on a custom-built smart heating application called Squid and was conducted over seven weeks in early 2023. Squid combined a smart radiator valve installed in participants' homes with a web application that implemented an AI feature known as setpoint learning, which is commonly available in consumer smart thermostats. Development of Squid followed the XAI principle of interpretability-by-design, where the AI feature was implemented using a simple glass-box machine learning model with the model subsequently exposed to users via the web interface (e.g. as interactive visualisations). AI attacks on users were simulated by injecting malicious training data and by manipulating data used for model predictions. Research data consisted of semi-structured interviews, researcher field notes, participant diaries, and application logs. In our analysis we reflect on the impact of XAI on user satisfaction and user comprehension, as well as its use as a tool for diagnosing AI attacks. Our results show only limited engagement with XAI features and suggest that, for Squid users, common assumptions found in the XAI literature were not aligned with reality. On the positive side, users appear to have developed better mental models of the AI feature compared to previous work, and there is evidence that users did make some use of XAI as a diagnostic tool.

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2406.07369
Document Type:
Working Paper