
Understanding Post-hoc Explainers: The Case of Anchors

Authors:
Lopardo, Gianluigi
Precioso, Frederic
Garreau, Damien

Publication Year:
2023

Abstract

In many scenarios, the interpretability of machine learning models is highly desirable but difficult to achieve. To explain the individual predictions of such models, local model-agnostic approaches have been proposed. However, the process generating the explanations can be, for a user, as mysterious as the prediction to be explained. Moreover, interpretability methods frequently lack theoretical guarantees, and their behavior even on simple models is often unknown. While it is difficult, if not impossible, to ensure that an explainer behaves as expected on a cutting-edge model, we can at least verify that everything works on simple, already interpretable models. In this paper, we present a theoretical analysis of Anchors (Ribeiro et al., 2018): a popular rule-based interpretability method that highlights a small set of words to explain a text classifier's decision. After formalizing its algorithm and providing useful insights, we demonstrate mathematically that Anchors produces meaningful results when used with linear text classifiers on top of a TF-IDF vectorization. We believe that our analysis framework can aid in the development of new explainability methods based on solid theoretical foundations.

Comment: arXiv admin note: substantial text overlap with arXiv:2205.13789
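To make the setting studied in the abstract concrete, the following is a minimal sketch of an Anchors-style precision estimate for a linear classifier on top of a TF-IDF vectorization. The corpus, vocabulary, classifier weights, and perturbation scheme are all illustrative assumptions, not the paper's actual formalization or experiments: an "anchor" is a set of words that is held fixed while the remaining words are resampled, and its precision is the fraction of perturbed documents on which the prediction is unchanged.

```python
import math
import random

# Toy corpus (an assumption for illustration, not from the paper).
corpus = [
    "good movie great acting",
    "great plot good fun",
    "bad movie poor acting",
    "poor plot bad fun",
]
vocab = sorted({w for doc in corpus for w in doc.split()})

def idf(word):
    # Inverse document frequency over the toy corpus.
    df = sum(word in doc.split() for doc in corpus)
    return math.log(len(corpus) / df) + 1.0

def tfidf(doc):
    # Term frequency times IDF, as a sparse dict keyed by word.
    words = doc.split()
    return {w: words.count(w) * idf(w) for w in set(words)}

# Hand-picked linear weights (hypothetical): sentiment words only.
weights = {"good": 1.0, "great": 1.0, "fun": 0.5,
           "bad": -1.0, "poor": -1.0}

def classify(doc):
    # Linear classifier on top of the TF-IDF vectorization.
    score = sum(weights.get(w, 0.0) * v for w, v in tfidf(doc).items())
    return 1 if score > 0 else 0

def precision(doc, anchor, n_samples=500, rng=None):
    # Anchors-style precision: keep the anchor words fixed, replace
    # every other word uniformly from the vocabulary, and measure how
    # often the classifier's prediction is unchanged.
    rng = rng or random.Random(0)
    words = doc.split()
    target = classify(doc)
    hits = 0
    for _ in range(n_samples):
        perturbed = [w if w in anchor else rng.choice(vocab)
                     for w in words]
        hits += classify(" ".join(perturbed)) == target
    return hits / n_samples

doc = "good movie great acting"
# The words the classifier relies on form a high-precision anchor;
# a neutral word like "movie" does not.
print(precision(doc, {"good", "great"}))
print(precision(doc, {"movie"}))
```

In this toy setup the anchor {"good", "great"} keeps the prediction stable under almost all perturbations, while anchoring only "movie" leaves the outcome near chance, matching the intuition that a meaningful anchor should pick out the words actually driving the linear model's decision.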

Details

Database:
arXiv

Publication Type:
Report

Accession number:
edsarx.2303.08806

Document Type:
Working Paper