
Compound adversarial examples in deep neural networks.

Authors :
Li, Yanchun
Li, Zhetao
Zeng, Li
Long, Saiqin
Huang, Feiran
Ren, Kui
Source :
Information Sciences. Oct 2022, Vol. 613, p50-68. 19p.
Publication Year :
2022

Abstract

• We propose two algorithms to optimize adversarial perturbations and patches.
• We discover that compound adversarial examples (CAEs) can quickly decrease classification accuracy.
• We find that two weak attack patterns can be combined to perform a stronger attack.
• We demonstrate the effectiveness and robustness of the proposed method.

Although deep learning has made great progress in many fields, deep neural networks are still vulnerable to adversarial examples. Many methods for generating adversarial examples have been proposed, and they typically contain either an adversarial perturbation or a patch. In this paper, we explore a method that creates compound adversarial examples containing both a perturbation and a patch. We show that fusing two weak attack modes can produce more powerful adversarial examples, where the patch covers only 1% of the pixels at a random location in the image and the perturbation changes the original pixel values (scaled to 0–1) by only 2/255. For both targeted and untargeted attacks, the compound attack improves the efficiency of generating adversarial examples and attains a higher attack success rate with fewer iteration steps. The compound adversarial examples successfully attack models with defensive mechanisms that could previously defend against perturbation attacks or patch attacks. Furthermore, the compound adversarial examples show good transferability on normally trained and adversarially trained classifiers. Experimental results on a series of widely used classifiers and defense models show that the proposed compound adversarial examples have strong robustness, high effectiveness, and good transferability. [ABSTRACT FROM AUTHOR]
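The two constraints stated in the abstract (a perturbation bounded by 2/255 in pixel value and a patch covering about 1% of pixels at a random location) can be illustrated with a minimal sketch. The code below is an assumption: it is not the authors' optimization algorithm, only a NumPy illustration of how a compound example could be assembled, using a hypothetical apply_compound_example helper and a square patch for simplicity.

    import numpy as np

    def apply_compound_example(image, perturbation, patch,
                               eps=2 / 255, patch_frac=0.01, rng=None):
        # Sketch only: combine a small L-infinity perturbation with a patch
        # placed at a random location. `image` is an H x W x C array in [0, 1].
        rng = rng or np.random.default_rng()
        h, w, c = image.shape

        # 1. Additive perturbation, clipped to the eps ball (2/255 in the paper's
        #    setting) and to the valid pixel range.
        delta = np.clip(perturbation, -eps, eps)
        adv = np.clip(image + delta, 0.0, 1.0)

        # 2. Square patch whose area is roughly patch_frac (1%) of the image,
        #    pasted at a random location and overwriting the pixels beneath it.
        side = max(1, int(round(np.sqrt(patch_frac * h * w))))
        top = rng.integers(0, h - side + 1)
        left = rng.integers(0, w - side + 1)
        adv[top:top + side, left:left + side, :] = patch[:side, :side, :]

        return np.clip(adv, 0.0, 1.0)

    # Example usage with random placeholder data (not from the paper):
    img = np.random.rand(224, 224, 3)
    delta = np.random.uniform(-1, 1, img.shape) * (2 / 255)
    patch = np.random.rand(23, 23, 3)  # ~1% of a 224 x 224 image
    adv = apply_compound_example(img, delta, patch)

In the paper the perturbation and patch are optimized jointly against a classifier; the sketch above only shows how the two bounded components are composed into a single input.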

Details

Language :
English
ISSN :
00200255
Volume :
613
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
159928177
Full Text :
https://doi.org/10.1016/j.ins.2022.08.031