
Adversarial vulnerability for any classifier

Authors:
Fawzi, Alhussein
Fawzi, Hamza
Fawzi, Omar
Publication Year:
2018

Abstract

Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible adversarial perturbations. This vulnerability has empirically proven very difficult to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated by a smooth generative model. We derive fundamental upper bounds on the robustness of any classification function to such perturbations, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our robustness analysis also provides insights into key properties of generative models, such as their smoothness and the dimensionality of their latent space. We conclude with numerical experiments showing that our bounds provide informative baselines for the maximal achievable robustness on several datasets.

Comment: NeurIPS 2018
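As an illustration of the mechanism the abstract describes, the following is a minimal Python sketch under assumptions of our own choosing, not the paper's construction: data x = g(z) with z drawn from a standard Gaussian and g a smooth 1-Lipschitz map, so a latent perturbation of norm t moves the data point by at most t. Searching for a label flip in latent space then yields a small, in-distribution adversarial perturbation for an arbitrary classifier. The names g, classify, and latent_attack are all illustrative.

import numpy as np

rng = np.random.default_rng(0)
k, d = 8, 64                            # latent / data dimensions (illustrative)

A = rng.standard_normal((d, k))
A /= np.linalg.norm(A, 2)               # spectral norm 1, so g below is 1-Lipschitz

def g(z):
    # Smooth generative model: tanh (1-Lipschitz elementwise) of a linear map.
    return np.tanh(A @ z)

w = rng.standard_normal(d)              # an arbitrary fixed classifier on data space

def classify(x):
    return int(w @ x > 0)

def latent_attack(z, step=0.01, max_norm=6.0, tries=100):
    # Search in latent space for the nearest label flip along random directions.
    # Since g is 1-Lipschitz, ||g(z') - g(z)|| <= ||z' - z||, so the latent step t
    # upper-bounds the data-space perturbation norm.
    x0, y0 = g(z), classify(g(z))
    for _ in range(tries):
        u = rng.standard_normal(k)
        u /= np.linalg.norm(u)
        t = step
        while t <= max_norm:
            for s in (1.0, -1.0):
                z_adv = z + s * t * u
                if classify(g(z_adv)) != y0:
                    return t, float(np.linalg.norm(g(z_adv) - x0))
            t += step
    return None

z = rng.standard_normal(k)              # z ~ N(0, I_k), so x = g(z) is in-distribution
print(latent_attack(z))                 # -> (latent step, data-space perturbation norm)

Because every point reached this way is itself of the form g(z'), the perturbed example remains in the support of the generated distribution, and the latent step t upper-bounds how far the classifier's decision boundary lies from x in data space.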

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1802.08686
Document Type:
Working Paper