Explainable AI Evaluation: A Top-Down Approach for Selecting Optimal Explanations for Black Box Models.

Authors :
Mirzaei, SeyedehRoksana
Mao, Hua
Al-Nima, Raid Rafi Omar
Woo, Wai Lok
Source :
Information (2078-2489). Jan 2024, Vol. 15, Issue 1, p4. 41p.
Publication Year :
2024

Abstract

Explainable Artificial Intelligence (XAI) evaluation has grown significantly due to its extensive adoption and the catastrophic consequences of misinterpreting sensitive data, especially in the medical field. However, the multidisciplinary nature of XAI research means that scholars from diverse fields face significant challenges in designing proper evaluation methods. This paper proposes a novel framework, a three-layered top-down approach for arriving at an optimal explainer, underscoring the persistent need for consensus in XAI evaluation. It also conducts a critical comparative evaluation of explanations from both model-agnostic and model-specific explainers, including LIME, SHAP, Anchors, and TabNet, aiming to enhance the adaptability of XAI in the tabular domain. The results demonstrate that TabNet achieved the highest classification recall, followed by TabPFN and XGBoost. Additionally, this paper develops an optimal approach by introducing a novel measure of relative performance loss, with emphasis on the faithfulness and fidelity of global explanations, which quantifies the extent to which a model's performance diminishes when its topmost features are eliminated. This addresses a conspicuous gap: the lack of consensus among researchers regarding how global feature importance impacts classification loss, which undermines the trust in and correctness of such applications. Finally, a practical use case on medical tabular data is provided to concretely illustrate the findings. [ABSTRACT FROM AUTHOR]
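For illustration only, the sketch below shows one way such a relative performance loss could be computed: remove the top-k features according to a global importance ranking, re-evaluate the model, and compare recall against the full-feature baseline. This is a minimal sketch, not the authors' implementation; the dataset (scikit-learn's breast cancer data), the gradient boosting classifier, recall as the metric, and the use of the model's own feature importances in place of a global explainer such as SHAP are all assumptions made for the example.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Stand-in dataset; the paper uses medical tabular data.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def recall_with_features(feature_idx):
    """Train and score a classifier using only the given feature columns."""
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(X_tr[:, feature_idx], y_tr)
    return recall_score(y_te, clf.predict(X_te[:, feature_idx]))

# Baseline performance with all features.
all_idx = np.arange(X.shape[1])
baseline = recall_with_features(all_idx)

# Global importance ranking; here the model's own importances serve as a
# stand-in for a global explainer (an assumption, not the paper's protocol).
ranker = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(ranker.feature_importances_)[::-1]

for k in (1, 3, 5):
    kept = np.setdiff1d(all_idx, ranking[:k])   # drop the top-k ranked features
    ablated = recall_with_features(kept)
    rel_loss = (baseline - ablated) / baseline  # relative performance loss
    print(f"top-{k} removed: recall {ablated:.3f}, relative loss {rel_loss:.3f}")

Under this reading, a large relative loss after removing the highest-ranked features indicates that the global explanation is faithful to what the model actually relies on, while a loss near zero suggests the ranking does not reflect the model's behavior.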

Details

Language :
English
ISSN :
2078-2489
Volume :
15
Issue :
1
Database :
Academic Search Index
Journal :
Information (2078-2489)
Publication Type :
Academic Journal
Accession number :
175078433
Full Text :
https://doi.org/10.3390/info15010004