
Adversarial Robustness with Partial Isometry.

Authors :
Shi-Garrier, Loïc
Bouaynaya, Nidhal Carla
Delahaye, Daniel
Source :
Entropy. Feb 2024, Vol. 26, Issue 2, p103. 18p.
Publication Year :
2024

Abstract

Despite their remarkable performance, deep learning models still lack robustness guarantees, particularly in the presence of adversarial examples. This significant vulnerability raises concerns about their trustworthiness and hinders their deployment in critical domains that require certified levels of robustness. In this paper, we introduce an information-geometric framework to establish precise robustness criteria for ℓ2 white-box attacks in a multi-class classification setting. We endow the output space with the Fisher information metric and derive criteria on the input–output Jacobian that ensure robustness. We show that model robustness can be achieved by constraining the model to be partially isometric around the training points. We evaluate our approach on the MNIST and CIFAR-10 datasets against adversarial attacks, showing substantial improvements over defensive distillation and Jacobian regularization for medium-sized perturbations, and superior robustness compared to adversarial training for large perturbations, all while maintaining the desired accuracy. [ABSTRACT FROM AUTHOR]
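The abstract's central idea, constraining the input–output Jacobian so the model behaves as a partial isometry (nonzero singular values close to 1) around the training points, can be illustrated with a small PyTorch sketch. This is only a simplified illustration under assumed details (softmax outputs, a plain Euclidean metric in place of the Fisher information metric the paper uses, one example at a time, and a hypothetical helper name `partial_isometry_penalty`); it is not the authors' implementation.

```python
import torch

def partial_isometry_penalty(model, x, tol=1e-3):
    """Illustrative penalty pushing the nonzero singular values of the
    input-output Jacobian toward 1 for a single example `x`.

    A partial isometry has singular values in {0, 1}; here the "active"
    directions (singular values above `tol`) are driven toward 1.
    NOTE: simplified sketch only -- the paper works with the Fisher
    information metric on the output space, not the Euclidean one used here.
    """
    x = x.detach().requires_grad_(True)
    # Jacobian of the predicted class probabilities w.r.t. the input,
    # kept in the autograd graph so the penalty can be backpropagated.
    J = torch.autograd.functional.jacobian(
        lambda inp: torch.softmax(model(inp.unsqueeze(0)).squeeze(0), dim=-1),
        x,
        create_graph=True,
    )
    J = J.reshape(J.shape[0], -1)          # (num_classes, input_dim)
    s = torch.linalg.svdvals(J)            # singular values of the Jacobian
    active = s > tol                       # heuristic split: active directions vs. (near-)null space
    return ((s[active] - 1.0) ** 2).sum()  # penalize deviation from isometry on the active part
```

A hypothetical training step would add this penalty to the usual objective, e.g. `loss = torch.nn.functional.cross_entropy(model(x.unsqueeze(0)), y) + lam * partial_isometry_penalty(model, x)`, with the weight `lam` trading accuracy against robustness.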

Details

Language :
English
ISSN :
1099-4300
Volume :
26
Issue :
2
Database :
Academic Search Index
Journal :
Entropy
Publication Type :
Academic Journal
Accession number :
175648204
Full Text :
https://doi.org/10.3390/e26020103