Multi-Scale Interpretation Model for Convolutional Neural Networks: Building Trust Based on Hierarchical Interpretation.
- Source :
- IEEE Transactions on Multimedia; Sep2019, Vol. 21 Issue 9, p2263-2276, 14p
- Publication Year :
- 2019
-
Abstract
- With the rapid development of deep learning models, their performance on various tasks has improved; meanwhile, their increasingly intricate architectures make them difficult to interpret. To tackle this challenge, model interpretability is essential and has been investigated in a wide range of applications. For end users, model interpretability can be used to build trust in deployed machine learning models. For practitioners, interpretability plays a critical role in model explanation, model validation, and model improvement toward developing a faithful model. In this paper, we propose a novel Multi-scale Interpretation (MINT) model for convolutional neural networks that uses both perturbation-based and gradient-based interpretation approaches. It learns class-discriminative interpretable knowledge from multi-scale perturbation of feature information in different layers of deep networks. The proposed MINT model provides coarse-scale and fine-scale interpretations for the attention in the deep layers and the specific features in the shallow layers, respectively. Experimental results show that the MINT model presents a class-discriminative interpretation of the network decision and explains the significance of the hierarchical network structure. [ABSTRACT FROM AUTHOR]
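- The perturbation-based side of such interpretation methods can be illustrated with a minimal occlusion sketch: systematically mask regions of the input, measure how much the class score drops, and treat larger drops as more important regions. This is a generic sketch, not the paper's MINT model; `toy_score` is a hypothetical stand-in for a CNN class logit.

```python
import numpy as np

def perturbation_map(score_fn, image, patch=4):
    """Occlusion-style perturbation map: zero out each patch in turn
    and record how much the class score drops. A larger drop marks a
    more important region. Generic sketch, not the paper's MINT model."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0  # occlude one patch
            heat[i // patch, j // patch] = base - score_fn(perturbed)
    return heat

# Hypothetical "class score": mean intensity of the top-left quadrant,
# standing in for a CNN logit.
def toy_score(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = perturbation_map(toy_score, img, patch=8)
# Only occluding the top-left patch changes the toy score.
```

- Applying the same perturbation at different layers (coarse patches on deep feature maps, fine patches on shallow ones) is the intuition behind a multi-scale interpretation.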
Details
- Language :
- English
- ISSN :
- 1520-9210
- Volume :
- 21
- Issue :
- 9
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Multimedia
- Publication Type :
- Academic Journal
- Accession number :
- 138275597
- Full Text :
- https://doi.org/10.1109/TMM.2019.2902099