
Learning explanations that are hard to vary

Authors:
Parascandolo, Giambattista
Neitz, Alexander
Orvieto, Antonio
Gresele, Luigi
Schölkopf, Bernhard
Publication Year: 2020

Abstract

In this paper, we investigate the principle that 'good explanations are hard to vary' in the context of deep learning. We show that averaging gradients across examples -- akin to a logical OR of patterns -- can favor memorization and 'patchwork' solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled. We then propose and experimentally validate a simple alternative algorithm based on a logical AND, that focuses on invariances and prevents memorization in a set of real-world tasks. Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.

Comment: From v1: extended 2.2 and 2.3, added details for reproducibility and link to codebase
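
The OR-vs-AND contrast in the abstract can be made concrete with a small sketch of sign-agreement gradient masking: instead of averaging per-example (or per-environment) gradients outright, components whose signs disagree are zeroed out before averaging. The function name `and_mask_gradients`, the NumPy formulation, and the agreement threshold `tau` below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def and_mask_gradients(env_grads, tau=1.0):
    """Combine per-environment gradients with a sign-agreement (AND) mask.

    env_grads: array of shape (n_envs, n_params), one gradient per environment.
    tau: fraction of sign agreement required for a component to survive
         (tau=1.0 keeps only components whose sign is unanimous).
    Disagreeing components are zeroed instead of being averaged away.
    """
    env_grads = np.asarray(env_grads, dtype=float)
    signs = np.sign(env_grads)               # (n_envs, n_params)
    agreement = np.abs(signs.mean(axis=0))   # 1.0 = all environments agree
    mask = (agreement >= tau).astype(float)
    return mask * env_grads.mean(axis=0)     # masked arithmetic mean

# Toy usage: two environments agree on the first component, disagree on the second.
g_envs = [[0.5, 0.3], [0.4, -0.2]]
print(and_mask_gradients(g_envs))  # -> [0.45 0.  ]
```

Plain averaging would still move along the second, disagreeing component; the masked update only follows directions supported by every environment, which is the "hard to vary" intuition the abstract describes.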

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2009.00329
Document Type: Working Paper