
Eye tracking insights into physician behaviour with safe and unsafe explainable AI recommendations.

Authors :
Nagendran, Myura
Festor, Paul
Komorowski, Matthieu
Gordon, Anthony C.
Faisal, Aldo A.
Source :
NPJ Digital Medicine; 8/2/2024, Vol. 7 Issue 1, p1-10, 10p
Publication Year :
2024

Abstract

We studied clinical AI-supported decision-making as an example of a high-stakes setting in which explainable AI (XAI) has been proposed as useful (in theory, by providing physicians with context for the AI suggestion and thereby helping them to reject unsafe AI recommendations). Here, we used an objective neurobehavioural measure (eye-tracking) to examine how physicians respond to XAI, with N = 19 ICU physicians in a hospital's clinical simulation suite. Prescription decisions were made both pre- and post-reveal of either a safe or unsafe AI recommendation alongside four different types of simultaneously presented XAI. We used overt visual attention as a marker for where physician mental attention was directed during the simulations. Unsafe AI recommendations attracted significantly greater attention than safe ones. However, attention was not appreciably higher on any of the four types of explanation during unsafe AI scenarios (i.e. XAI did not appear to 'rescue' decision-makers). Furthermore, physicians' self-reported usefulness of the explanations did not correlate with the attention they devoted to them, reinforcing the notion that evaluating XAI tools through self-reports alone misses key aspects of the interaction between human and machine.

Details

Language :
English
ISSN :
23986352
Volume :
7
Issue :
1
Database :
Complementary Index
Journal :
NPJ Digital Medicine
Publication Type :
Academic Journal
Accession Number :
178807094
Full Text :
https://doi.org/10.1038/s41746-024-01200-x