
A Survey on Transferability of Adversarial Examples across Deep Neural Networks

Authors :
Gu, Jindong
Jia, Xiaojun
de Jorge, Pau
Yu, Wenqian
Liu, Xinwei
Ma, Avery
Xun, Yuan
Hu, Anjun
Khakzar, Ashkan
Li, Zhijiang
Cao, Xiaochun
Torr, Philip
Publication Year :
2023

Abstract

The emergence of Deep Neural Networks (DNNs) has revolutionized various domains by enabling the resolution of complex tasks spanning image recognition, natural language processing, and scientific problem-solving. However, this progress has also brought to light a concerning vulnerability: adversarial examples. These crafted inputs, imperceptible to humans, can manipulate machine learning models into making erroneous predictions, raising concerns for safety-critical applications. An intriguing property of this phenomenon is the transferability of adversarial examples: perturbations crafted for one model can deceive another, often one with a different architecture. This property enables black-box attacks, which circumvent the need for detailed knowledge of the target model. This survey explores the landscape of the transferability of adversarial examples. We categorize existing methodologies for enhancing adversarial transferability and discuss the fundamental principles guiding each approach. While the predominant body of research concentrates on image classification, we also extend our discussion to other vision tasks and beyond. Challenges and opportunities are discussed, highlighting the importance of fortifying DNNs against adversarial vulnerabilities in an evolving landscape.

Comment: Accepted to Transactions on Machine Learning Research (TMLR)
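The transfer effect the abstract describes can be illustrated with a minimal toy sketch: an FGSM-style perturbation is crafted against one "surrogate" model and then fools a second, related "target" model that the attacker never queried. The linear models, dimensions, and step size below are hypothetical stand-ins for two DNNs, not anything from the survey itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear "models" with correlated weights, standing in for two
# independently trained DNNs that learned similar decision boundaries.
w_surrogate = rng.normal(size=50)                       # attacker's model
w_target = w_surrogate + 0.3 * rng.normal(size=50)      # unseen black-box model

def predict(w, x):
    """Binary decision of a linear classifier: 1 if w @ x > 0, else 0."""
    return 1 if w @ x > 0 else 0

# A clean input that both models classify as 1 (built to have a large margin).
x = np.abs(rng.normal(size=50)) * np.sign(w_surrogate)

# FGSM-style step crafted ONLY against the surrogate: move against the sign
# of the gradient of the surrogate's score w_surrogate @ x (which is just
# w_surrogate for a linear model).
eps = 2.0
x_adv = x - eps * np.sign(w_surrogate)

# The perturbation transfers: it flips the target model's prediction too.
print(predict(w_surrogate, x), predict(w_target, x))          # clean input
print(predict(w_surrogate, x_adv), predict(w_target, x_adv))  # adversarial input
```

The design choice doing the work here is the correlation between the two weight vectors: because both models rely on overlapping features, a gradient direction harmful to one tends to be harmful to the other, which is the intuition behind surrogate-based black-box attacks.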

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438493350
Document Type :
Electronic Resource