
Effects of Explainable Artificial Intelligence Methods on Human Trust and Behavior: A Comparison of Nearest Neighbor, Grad-CAM, and Network Dissection

Authors:
Humer, Christina
Hinterreiter, Andreas
Leichtmann, Benedikt
Mara, Martina
Streit, Marc
Publication Year:
2022
Publisher:
Open Science Framework, 2022.

Abstract

When users interact with autonomous, artificial intelligence (AI)-based systems for decision-making, it is often unclear how the system arrives at its results. Researchers as well as policymakers are therefore increasingly demanding that such systems be explainable, so that human users can understand how they reach their conclusions. Such explainable AI (XAI) is expected to improve team performance and to prevent users’ overtrust or distrust in AI-based systems. One group of XAI methods aimed at achieving this appropriate level of understanding is visual explanations. But which of the wide variety of visual XAI methods works better than others, and in which situations? In combination with another study (see related OSF project), participants will be introduced to an online game featuring a virtual mushroom hunt and will be tasked with 1) identifying pictures of mushrooms as edible or inedible/poisonous and 2) picking only edible mushrooms while leaving inedible/poisonous ones. For this task, participants will additionally receive recommendations from an AI-based app showing classification results for mushroom images. To understand the effects of different visual explanations, the app’s interface will be manipulated in terms of explainability, with participants receiving 1) no explanations, 2) example-based nearest-neighbor explanations, 3) attribution-based Grad-CAM explanations, or 4) concept-based semantic explanations (network dissection). As dependent variables, we will assess 1) participants’ identification performance, 2) the correctness of their picking intentions for each mushroom item (i.e., mushroom-picking performance), 3) users’ trust in the system after playing the game, 4) users’ evaluation of the system, and 5) their intention to use the system. Furthermore, we investigate how performance and trust relate to AI knowledge, domain-specific knowledge (mushroom knowledge), and propensity to trust.
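For illustration, the attribution-based Grad-CAM condition produces a heatmap over the input image showing which regions most influenced the classifier's prediction. Below is a minimal sketch of how such a heatmap can be computed in PyTorch; the ResNet-18 backbone, the choice of `layer4` as the target layer, and the input shape are assumptions made for this example and are not taken from the study's actual app.

```python
# Minimal Grad-CAM sketch (assumed ResNet-18 backbone; not the study's model).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block (assumed target layer for ResNet-18).
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return an (H, W) heatmap in [0, 1] for a normalized (1, 3, H, W) image,
    highlighting regions that drive the prediction for class_idx (default: top-1)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()            # gradients flow to the hooked layer
    acts = activations["value"]                # (1, C, h, w) feature maps
    grads = gradients["value"]                 # (1, C, h, w) their gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)  # global-average-pool gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]
```

In an interface like the one described, such a heatmap would typically be overlaid on the mushroom photo so that users can see which image regions contributed most to the app's edible/inedible recommendation.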

Details

Database:
OpenAIRE
Accession number:
edsair.doi...........5be30f3396904adc606d2de3b1b52464
Full Text:
https://doi.org/10.17605/osf.io/sd953