
Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks.

Authors:
Garaev, Roman
Rasheed, Bader
Khan, Adil Mehmood
Source:
Algorithms; Apr 2024, Vol. 17, Issue 4, p162, 15p
Publication Year:
2024

Abstract

Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper aims to challenge the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by L∞-norm attacks previously underestimated by the research community. These conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural networks' latent representations, (4) an analysis of the networks' decision boundaries and (5) the theoretical equivalence of the L2 and L∞ perturbation norms. [ABSTRACT FROM AUTHOR]
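For context, the adversarial training objective summarised in the abstract is conventionally written as a min-max problem; the formulation below is a standard sketch offered for orientation, not an equation reproduced from the paper:

\[
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\, \max_{\|\delta\|_{\infty}\le\epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \right]
\]

The L2/L∞ equivalence the abstract invokes rests on the standard norm inequality for \(x \in \mathbb{R}^{n}\):

\[
\|x\|_{\infty} \;\le\; \|x\|_{2} \;\le\; \sqrt{n}\,\|x\|_{\infty}
\]

so an L∞ ball of radius \(\epsilon\) is contained in an L2 ball of radius \(\epsilon\sqrt{n}\). In high-dimensional image spaces this factor is large, which is consistent with the abstract's claim that L∞-norm attacks can be stronger than their nominal budget suggests.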

Details

Language:
English
ISSN:
1999-4893
Volume:
17
Issue:
4
Database:
Complementary Index
Journal:
Algorithms
Publication Type:
Academic Journal
Accession Number:
176878916
Full Text:
https://doi.org/10.3390/a17040162