
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses

Authors:
Lin, Fu
Mittapalli, Rohit
Chattopadhyay, Prithvijit
Bolya, Daniel
Hoffman, Judy
Publication Year:
2020

Abstract

Convolutional Neural Networks are vulnerable to adversarial examples, which are known to lie in subspaces close to those occupied by natural data, yet are not naturally occurring and have low probability under the data distribution. In this work, we investigate the effect defense techniques have on the geometry of the likelihood landscape, i.e., the likelihood of input images under the trained model. We first propose a way to visualize the likelihood landscape by leveraging an energy-based-model interpretation of discriminative classifiers. We then introduce a measure to quantify the flatness of the likelihood landscape. We observe that a subset of adversarial defense techniques share a similar effect of flattening the likelihood landscape. Finally, we explore directly regularizing towards a flat landscape for adversarial robustness.

Comment: ECCV 2020 Workshop on Adversarial Robustness in the Real World
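The energy-based-model interpretation the abstract alludes to can be sketched briefly. Under that reading (as in Grathwohl et al.'s JEM), a classifier's logits f(x) define an unnormalized log-density over inputs: log p(x) is logsumexp over the class logits, up to an intractable normalizing constant. The snippet below is a minimal illustration of that quantity, plus a simple flatness probe; the function names, the linear toy classifier, and the variance-based probe are illustrative assumptions, not the paper's actual measure.

```python
import numpy as np

def unnormalized_log_px(logits):
    # Energy-based reading of a discriminative classifier:
    # log p(x) = logsumexp_y f(x)[y] - log Z.  Z is intractable,
    # so we return the unnormalized term (stable logsumexp).
    m = np.max(logits, axis=-1, keepdims=True)
    return (m + np.log(np.sum(np.exp(logits - m),
                              axis=-1, keepdims=True))).squeeze(-1)

def flatness_probe(logits_fn, x, eps=1e-2, n=8, seed=0):
    # Illustrative proxy (NOT the paper's measure): variance of the
    # unnormalized log p(x + delta) over small random perturbations.
    # A flatter local likelihood landscape yields a smaller variance.
    rng = np.random.default_rng(seed)
    vals = [unnormalized_log_px(logits_fn(x + eps * rng.standard_normal(x.shape)))
            for _ in range(n)]
    return float(np.var(vals))

# Hypothetical logits for three inputs from a 4-class classifier.
logits = np.array([
    [2.0, 0.1, -1.0, 0.3],
    [0.0, 0.0, 0.0, 0.0],   # uniform logits -> logsumexp = log(4)
    [5.0, 4.0, 3.0, 2.0],
])
scores = unnormalized_log_px(logits)

# Toy linear classifier (3 classes, 2-dim input) for the flatness probe.
W = np.array([[1.0, -1.0], [0.5, 2.0], [-2.0, 0.3]])
flatness = flatness_probe(lambda x: W @ x, np.array([0.2, -0.1]))
```

Visualizing `unnormalized_log_px` over a 2-D slice of input space around an image is one way to render the likelihood landscape the abstract describes.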

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2008.11300
Document Type:
Working Paper