
Fixing confirmation bias in feature attribution methods via semantic match

Authors:
Cinà, Giovanni
Fernandez-Llaneza, Daniel
Deponte, Ludovico
Mishra, Nishant
Röber, Tabea E.
Pezzelle, Sandro
Calixto, Iacer
Goedhart, Rob
Birbil, Ş. İlker
Publication Year: 2023

Abstract

Feature attribution methods have become a staple for disentangling the complex behavior of black-box models. Despite their success, some scholars have argued that such methods suffer from a serious flaw: they do not allow a reliable interpretation in terms of human concepts. Simply put, visualizing an array of feature contributions is not enough for humans to conclude something about a model's internal representations, and confirmation bias can trick users into false beliefs about model behavior. We argue that a structured approach is required to test whether our hypotheses about the model are confirmed by the feature attributions. This is what we call the "semantic match" between human concepts and (sub-symbolic) explanations. Building on the conceptual framework put forward in Cinà et al. [2023], we propose a structured approach to evaluate semantic match in practice. We showcase the procedure in a suite of experiments spanning tabular and image data, and show how the assessment of semantic match can give insight into both desirable (e.g., focusing on an object relevant for prediction) and undesirable model behaviors (e.g., focusing on a spurious correlation). We couple our experimental results with an analysis of the metrics used to measure semantic match, and argue that this approach constitutes the first step towards resolving the issue of confirmation bias in XAI.
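To make the idea concrete, below is a minimal, self-contained sketch of what testing a semantic-match hypothesis could look like in code. Everything here is an illustrative assumption rather than the paper's actual procedure or metrics: it attributes a toy linear model with input-times-gradient and scores how much attribution mass lands on the features a human hypothesis marks as relevant.

    import numpy as np

    # Toy setup: a linear model whose attributions we can compute exactly.
    # All names and the scoring function are illustrative stand-ins, not
    # the metrics proposed in the paper.
    rng = np.random.default_rng(0)
    n_features = 8
    weights = rng.normal(size=n_features)

    def attribute(x):
        """Input-times-gradient attribution for a linear model: phi_i = w_i * x_i."""
        return weights * x

    def semantic_match_score(attribution, concept_mask):
        """Stand-in metric: the share of total attribution magnitude that
        falls on features the human hypothesis marks as relevant."""
        magnitude = np.abs(attribution)
        total = magnitude.sum()
        if total == 0:
            return 0.0
        return magnitude[concept_mask.astype(bool)].sum() / total

    # Human hypothesis: "the model relies on the first three features".
    concept_mask = np.zeros(n_features)
    concept_mask[:3] = 1

    x = rng.normal(size=n_features)
    phi = attribute(x)
    print(f"semantic match score: {semantic_match_score(phi, concept_mask):.2f}")

A score near 1 would count as evidence for the hypothesis, a score near 0 against it; in the paper's setting, such assessments are carried out across a suite of tabular and image experiments.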

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2307.00897
Document Type: Working Paper