
Designing Counterfactual Generators using Deep Model Inversion

Authors:
Thiagarajan, Jayaraman J.
Narayanaswamy, Vivek
Rajan, Deepta
Liang, Jason
Chaudhari, Akshay
Spanias, Andreas
Publication Year:
2021

Abstract

Explanation techniques that synthesize small, interpretable changes to a given image while producing desired changes in the model prediction have become popular for introspecting black-box models. Commonly referred to as counterfactuals, the synthesized explanations are required to contain discernible changes (for easy interpretability) while also being realistic (consistent with the data manifold). In this paper, we focus on the case where we have access only to the trained deep classifier and not the actual training data. While the problem of inverting deep models to synthesize images from the training distribution has been explored, our goal is to develop a deep inversion approach that generates counterfactual explanations for a given query image. Despite their effectiveness in conditional image synthesis, we show that existing deep inversion methods are insufficient for producing meaningful counterfactuals. We propose DISC (Deep Inversion for Synthesizing Counterfactuals), which improves upon deep inversion by (a) utilizing stronger image priors, (b) incorporating a novel manifold consistency objective, and (c) adopting a progressive optimization strategy. We find that, in addition to producing visually meaningful explanations, the counterfactuals from DISC are effective at revealing classifier decision boundaries and are robust to unknown test-time corruptions.

Comment: NeurIPS 2021
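The core idea the abstract describes, optimizing an input so that a fixed, trained classifier assigns it a desired label while a prior keeps it close to the query, can be illustrated with a minimal sketch. This is a generic input-optimization toy, not the authors' DISC method: a tiny linear softmax model stands in for the deep classifier, and a simple L2 proximity term stands in for the stronger image priors and manifold consistency objective the paper proposes. All names (`counterfactual`, `lam`, etc.) are hypothetical.

```python
# Toy counterfactual search by gradient descent on the input
# (generic sketch of the deep-inversion idea; NOT the DISC algorithm).
import numpy as np

def counterfactual(x_query, W, b, target, steps=500, lr=0.1, lam=0.05):
    """Find x near x_query that a fixed linear-softmax model labels `target`.

    lam weighs an L2 proximity prior, a crude stand-in for the image
    priors / manifold-consistency terms used by real methods.
    """
    x = x_query.copy()
    for _ in range(steps):
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()                              # softmax probabilities
        onehot = np.zeros_like(p)
        onehot[target] = 1.0
        # gradient of cross-entropy w.r.t. the INPUT (model weights fixed),
        # plus the gradient of the proximity prior
        grad = W.T @ (p - onehot) + lam * (x - x_query)
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))                       # frozen "classifier"
b = np.zeros(2)
x0 = rng.normal(size=4)                           # query input
orig_label = int(np.argmax(W @ x0 + b))
target = 1 - orig_label                           # ask for the other class
x_cf = counterfactual(x0, W, b, target)
print("label flipped:", int(np.argmax(W @ x_cf + b)) == target)
print("distance from query:", float(np.linalg.norm(x_cf - x0)))
```

The key point, which carries over to the deep setting, is that only the input is updated; the model stays frozen, which is why the approach needs no access to training data.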

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2109.14274
Document Type:
Working Paper