
Testing machine learning explanation methods.

Authors :
Anderson, Andrew A.
Source :
Neural Computing & Applications. Aug 2023, Vol. 35, Issue 24, p18073-18084. 12p.
Publication Year :
2023

Abstract

There are many methods for explaining why a machine learning model produces a given output in response to a given input. The relative merits of these methods are often debated using theoretical arguments and illustrative examples. This paper provides a large-scale empirical test of four widely used explanation methods by comparing how well their algorithmically generated denial reasons align with lender-provided denial reasons using a dataset of home mortgage applications. On a held-out sample of 10,000 denied applications, Shapley additive explanations (SHAP) correspond most closely with lender-provided reasons. SHAP is also the most computationally efficient. As a second contribution, this paper presents a method for computing integrated gradient explanations that can be used for non-differentiable models such as XGBoost. [ABSTRACT FROM AUTHOR]
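To make the abstract's methodology concrete, here is a minimal sketch (in Python, assuming the `shap` and `xgboost` packages) of how SHAP values can be turned into algorithmic denial reasons for a denied application. The feature names, synthetic data, and the `denial_reasons` helper are hypothetical stand-ins, not the paper's actual pipeline; the paper evaluates how well such algorithmic reasons agree with the reasons lenders actually reported.

```python
# Hypothetical sketch: extract SHAP-based denial reasons from a tree model.
# The features and data below are illustrative stand-ins, not the paper's data.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "loan_to_value", "credit_history", "income"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = approved

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles quickly, which
# is consistent with the abstract's finding that SHAP was the most efficient.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-feature contributions, one row per application

def denial_reasons(shap_row, names, k=2):
    """Return the k features pushing this prediction hardest toward denial.

    With class 1 = approved, the most negative SHAP contributions act as
    the algorithmic 'denial reasons' for the application.
    """
    order = np.argsort(shap_row)  # most negative contributions first
    return [names[i] for i in order[:k] if shap_row[i] < 0]

denied = np.where(model.predict(X) == 0)[0][0]  # first denied application
print(denial_reasons(shap_values[denied], feature_names))
```

The abstract's second contribution, integrated gradients for non-differentiable models, is not spelled out in the abstract itself. As a generic illustration of the underlying idea only (one plausible discretization, not necessarily the author's construction), the gradient in the integrated-gradients path integral can be replaced by coordinate-wise finite differences whose step size matches the path discretization, so the axis-aligned splits of a tree ensemble such as XGBoost still register:

```python
import numpy as np

def black_box_integrated_gradients(f, x, baseline, steps=50):
    """Approximate integrated-gradients attributions for a black-box model.

    f: callable mapping a 1-D feature vector to a scalar score.
    Uses a midpoint Riemann sum along the straight line from `baseline`
    to `x`, with forward differences standing in for gradients.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    delta = (x - baseline) / steps          # per-feature difference step
    attributions = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps           # midpoint of the k-th segment
        point = baseline + alpha * (x - baseline)
        base_val = f(point)
        for i in range(x.size):
            if delta[i] == 0.0:
                continue                    # feature equals its baseline value
            bumped = point.copy()
            bumped[i] += delta[i]
            # Each term equals (x_i - b_i)/steps times the finite-difference
            # slope [f(point + delta_i * e_i) - f(point)] / delta_i.
            attributions[i] += f(bumped) - base_val
    return attributions
```

Either attribution vector can then be ranked the same way as the SHAP values above to produce algorithmic denial reasons for comparison against the lender-provided ones.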

Subjects

*MACHINE learning
*EXPLANATION

Details

Language :
English
ISSN :
0941-0643
Volume :
35
Issue :
24
Database :
Academic Search Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
167308529
Full Text :
https://doi.org/10.1007/s00521-023-08597-8