
Making CNNs Interpretable by Building Dynamic Sequential Decision Forests with Top-down Hierarchy Learning

Authors :
Wang, Yilin
Yu, Shaozuo
Yang, Xiaokang
Shen, Wei
Publication Year :
2021
Publisher :
arXiv, 2021.

Abstract

In this paper, we propose a generic model transfer scheme to make Convolutional Neural Networks (CNNs) interpretable, while maintaining their high classification accuracy. We achieve this by building a differentiable decision forest on top of CNNs, which enjoys two characteristics: 1) During training, the tree hierarchies of the forest are learned in a top-down manner under the guidance of the category semantics embedded in the pre-trained CNN weights; 2) During inference, a single decision tree is dynamically selected from the forest for each input sample, enabling the transferred model to make sequential decisions corresponding to the attributes shared by semantically-similar categories, rather than directly performing flat classification. We name the transferred model deep Dynamic Sequential Decision Forest (dDSDF). Experimental results show that dDSDF not only achieves higher classification accuracy than its counterpart, i.e., the original CNN, but also has much better interpretability: qualitatively, it has plausible decision hierarchies, and quantitatively, it leads to more precise saliency maps.
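To make the two characteristics concrete, here is a minimal numpy sketch of the inference path only: a soft binary decision tree routes a CNN feature vector to leaf class distributions, and a simple router dynamically selects one tree from the forest per sample. All names (`soft_tree_predict`, `ddsdf_predict`, the random parameters standing in for a trained CNN backbone and learned hierarchies) are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_tree_predict(x, split_w, split_b, leaf_logits):
    """Sequential soft decisions through one full binary tree.

    Nodes are heap-indexed: node i's children are 2i+1 and 2i+2.
    split_w: (n_internal, dim); leaf_logits: (n_leaves, n_classes).
    Returns a class distribution: a mixture of leaf distributions
    weighted by the probability of reaching each leaf.
    """
    n_internal = split_w.shape[0]
    n_leaves = leaf_logits.shape[0]
    reach = np.zeros(n_internal + n_leaves)
    reach[0] = 1.0  # all probability mass starts at the root
    for i in range(n_internal):
        p_right = sigmoid(split_w[i] @ x + split_b[i])  # soft split decision
        reach[2 * i + 1] += reach[i] * (1.0 - p_right)
        reach[2 * i + 2] += reach[i] * p_right
    leaf_probs = reach[n_internal:]            # (n_leaves,), sums to 1
    leaf_dists = softmax(leaf_logits, axis=1)  # per-leaf class distributions
    return leaf_probs @ leaf_dists

def ddsdf_predict(x, trees, router_w):
    """Dynamically select a single tree for this sample, then infer with it."""
    k = int(np.argmax(router_w @ x))  # hard per-sample tree selection
    split_w, split_b, leaf_logits = trees[k]
    return soft_tree_predict(x, split_w, split_b, leaf_logits)

# Toy setup: 8-dim "CNN features", 5 classes, a forest of 3 depth-2 trees
# (3 internal nodes, 4 leaves each) with random, untrained parameters.
dim, n_classes, depth = 8, 5, 2
n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
trees = [
    (rng.normal(size=(n_internal, dim)),
     rng.normal(size=n_internal),
     rng.normal(size=(n_leaves, n_classes)))
    for _ in range(3)
]
router_w = rng.normal(size=(3, dim))

x = rng.normal(size=dim)          # stand-in for a CNN feature vector
probs = ddsdf_predict(x, trees, router_w)
print(probs.shape, probs.sum())   # a valid 5-class distribution
```

The interpretability hook is that `reach` records the probability of each sequential decision, so the path through the selected tree can be read off node by node; in the paper this path corresponds to attributes shared by semantically similar categories.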

Details

Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....aa5e68d5794d015bf7e593374b89f3da
Full Text :
https://doi.org/10.48550/arxiv.2106.02824