
Iterative Training Attack: A Black-Box Adversarial Attack via Perturbation Generative Network.

Authors :
Lei, Hong
Jiang, Wei
Zhan, Jinyu
You, Shen
Jin, Lingxin
Xie, Xiaona
Chang, Zhengwei
Source :
Journal of Circuits, Systems & Computers; 12/1/2023, Vol. 32 Issue 18, p1-20, 20p
Publication Year :
2023

Abstract

Deep neural networks are vulnerable to adversarial examples. While many methods generate adversarial examples using neural networks, producing such examples with high perceptual quality and improved training remains an area of active research. In this paper, we propose the Iterative Training Attack (ITA), a black-box attack that generates adversarial examples with a perturbation generative network. ITA randomly initializes the perturbation generative network multiple times, iteratively training it and optimizing a refined loss function. Compared to other neural network-based attacks, our method achieves higher attack success rates within a small perturbation range, even when advanced defenses are employed. Despite being a black-box attack, ITA outperforms gradient-based white-box attacks even under basic evaluation standards. We evaluated our method on a TRADES robust model trained on the MNIST dataset and achieved a robust accuracy of 92.46%, the highest among the evaluated methods. [ABSTRACT FROM AUTHOR]
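The abstract describes the attack only at a high level: re-initialize a perturbation generator several times, train it iteratively against a loss, and keep the best result, all with only black-box access to the target model. As a loose illustrative sketch (not the paper's actual architecture or loss), the restart-and-train loop might look like the following, where a toy linear "classifier", a tanh-bounded linear generator, and a finite-difference gradient estimate (standing in for black-box access) are all assumptions introduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_classifier(x):
    # Toy stand-in for the target model: a fixed linear score.
    # The attacker only queries outputs, never reads these weights.
    w = np.array([1.0, -2.0, 0.5, 1.5])
    return float(w @ x)

def generator(W, x, eps):
    # Perturbation generator: tanh keeps each component in [-eps, eps],
    # i.e. the perturbation stays within a small range by construction.
    return eps * np.tanh(W @ x)

def attack_loss(W, x, eps):
    # Attacker's objective: drive the true-class score down.
    return black_box_classifier(x + generator(W, x, eps))

def train_generator(x, eps=0.3, restarts=5, steps=200, lr=0.05, fd=1e-4):
    best_W, best_loss = None, np.inf
    for _ in range(restarts):                      # random re-initialization
        W = rng.normal(scale=0.1, size=(x.size, x.size))
        for _ in range(steps):                     # iterative training
            grad = np.zeros_like(W)
            base = attack_loss(W, x, eps)
            for i in range(W.shape[0]):            # finite differences:
                for j in range(W.shape[1]):        # queries only, no backprop
                    Wp = W.copy()
                    Wp[i, j] += fd
                    grad[i, j] = (attack_loss(Wp, x, eps) - base) / fd
            W -= lr * grad
        loss = attack_loss(W, x, eps)
        if loss < best_loss:                       # keep the best restart
            best_W, best_loss = W, loss
    return best_W

x = np.array([0.2, -0.1, 0.4, 0.3])
W = train_generator(x)
delta = generator(W, x, eps=0.3)                   # final bounded perturbation
```

The restart loop is what distinguishes this from a single-shot generator: each random initialization explores a different region of the generator's parameter space, and only the initialization that yields the lowest attack loss is kept.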

Details

Language :
English
ISSN :
0218-1266
Volume :
32
Issue :
18
Database :
Complementary Index
Journal :
Journal of Circuits, Systems & Computers
Publication Type :
Academic Journal
Accession number :
175237807
Full Text :
https://doi.org/10.1142/S0218126623503140