A multiorder feature tracking and explanation strategy for explainable deep learning
- Source :
- Journal of Intelligent Systems, Vol 32, Iss 1, Pp 1-18 (2023)
- Publication Year :
- 2023
- Publisher :
- De Gruyter
Abstract
- A good AI algorithm should not only make accurate predictions but also provide reasonable explanations in its application domain. However, deep models make the black-box problem, i.e., a model's lack of interpretability, more prominent. In particular, when an application domain involves many features with complex interactions among them, it is difficult for a deep model to intuitively explain its predictions. Moreover, multiorder feature interactions are ubiquitous in practical applications. To overcome the interpretability limitations of deep models, we argue that a multiorder linearly separable deep model can be decomposed into different orders to explain its predictions. Inspired by the interpretability advantage of tree models, we design a feature representation mechanism that consistently represents the features of both trees and deep models. Building on this consistent representation, we propose a multiorder feature-tracking strategy that provides a prediction-oriented multiorder explanation for a linearly separable deep model. In experiments, we empirically verify the effectiveness of our approach in two binary classification scenarios: education and marketing. The results show that our model can intuitively represent complex relationships between features through diversified multiorder explanations.
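- The decomposition the abstract describes can be illustrated with a minimal sketch, not the authors' implementation: assuming a toy model whose logit is linearly separable across interaction orders (the bias, weights, and sample below are hypothetical stand-ins), each order's terms can be read off directly as a prediction-oriented multiorder explanation.

```python
# Minimal sketch of an order-by-order explanation for a linearly
# separable logit: f(x) = b + sum_i w1[i]*x[i] + sum_{i<j} w2[i,j]*x[i]*x[j].
# All parameters are hypothetical stand-ins, not the paper's trained model.
import numpy as np

rng = np.random.default_rng(0)

n = 4                                   # number of features
bias = 0.1
w1 = rng.normal(size=n)                 # first-order (per-feature) weights
w2 = np.triu(rng.normal(size=(n, n)), k=1)  # second-order pair weights, i < j

def predict_with_explanation(x):
    """Return the logit plus per-order contribution scores for one sample."""
    first_order = w1 * x                     # contribution of each feature alone
    second_order = w2 * np.outer(x, x)       # contribution of each feature pair
    logit = bias + first_order.sum() + second_order.sum()
    return logit, first_order, second_order

x = np.array([1.0, 0.0, 1.0, 1.0])           # a toy binary-feature sample
logit, o1, o2 = predict_with_explanation(x)
print(f"logit = {logit:.3f}")

# Rank contributions within each order, mirroring a multiorder explanation.
print("top first-order feature:", int(np.abs(o1).argmax()))
i, j = np.unravel_index(np.abs(o2).argmax(), o2.shape)
print("top second-order pair:", (int(i), int(j)))
```

- The paper's actual strategy tracks such per-order contributions through a feature representation shared with tree models; the sketch only shows why a linearly separable logit admits this order-by-order reading.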
Details
- Language :
- English
- ISSN :
- 2191-026X
- Volume :
- 32
- Issue :
- 1
- Database :
- Directory of Open Access Journals
- Journal :
- Journal of Intelligent Systems
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.fad9a38f70d44fe183931c88823ffd4f
- Document Type :
- article
- Full Text :
- https://doi.org/10.1515/jisys-2022-0212