Disentangling Interactions and Dependencies in Feature Attribution
- Authors
König, Gunnar; Günther, Eric; von Luxburg, Ulrike
- Subjects
Computer Science - Machine Learning; Statistics - Machine Learning
- Abstract
In explainable machine learning, global feature importance methods try to determine how much each individual feature contributes to predicting the target variable, resulting in one importance score per feature. But often, predicting the target variable requires interactions between several features (as in the XOR function), and features may have complex statistical dependencies that allow one feature to be partially replaced by another. In commonly used feature importance scores, these cooperative effects are conflated with the features' individual contributions, making the scores prone to misinterpretation. In this work, we derive DIP, a new mathematical decomposition of individual feature importance scores that disentangles three components: the standalone contribution and the contributions stemming from interactions and from dependencies. We prove that the DIP decomposition is unique and show how it can be estimated in practice. Based on these results, we propose a new visualization of feature importance scores that clearly illustrates the different contributions.
- Comment
GK and EG contributed equally to this article.
- Published
2024
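The XOR example from the abstract can be made concrete with a short sketch. This is an illustration of why interaction effects confound single-feature importance scores, not the paper's DIP method: the helper `best_accuracy_from` is a hypothetical name introduced here for the demonstration.

```python
import numpy as np

# Illustrative sketch (not the DIP estimator): for an XOR target,
# each feature alone is uninformative, yet both features together
# determine the target exactly -- the importance is pure interaction.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, size=10_000)
x2 = rng.integers(0, 2, size=10_000)
y = x1 ^ x2  # XOR target

def best_accuracy_from(feature, y):
    """Accuracy of the best single-feature predictor: within each
    feature value, predict the majority class of y."""
    acc = 0.0
    for v in (0, 1):
        mask = feature == v
        p = y[mask].mean()
        acc += mask.mean() * max(p, 1 - p)
    return acc

print(best_accuracy_from(x1, y))   # ~0.5: x1 alone is no better than chance
print(best_accuracy_from(x2, y))   # ~0.5: x2 alone is no better than chance
print(np.mean((x1 ^ x2) == y))     # 1.0: jointly, the target is fully determined
```

A per-feature importance score that ignores this structure would assign both features substantial importance without revealing that neither contributes anything on its own, which is exactly the conflation the DIP decomposition is designed to disentangle.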