
A black-box adversarial attack strategy with adjustable sparsity and generalizability for deep image classifiers.

Authors :
Ghosh, Arka
Mullick, Sankha Subhra
Datta, Shounak
Das, Swagatam
Das, Asit Kr.
Mallipeddi, Rammohan
Source :
Pattern Recognition, Feb 2022, Vol. 122, Article 108279.
Publication Year :
2022

Abstract

Highlights:

• Presents a simple and efficient black-box adversarial attack strategy.
• The adversarial perturbation can be dense or sparse.
• Both universal and image-dependent adversarial attacks can be performed.
• Employs a simple variant of Differential Evolution capable of optimizing the high-dimensional problem under consideration.

Constructing adversarial perturbations for deep neural networks is an important direction of research. Crafting image-dependent adversarial perturbations using white-box feedback has hitherto been the norm for such adversarial attacks. However, black-box attacks are much more practical for real-world applications. Universal perturbations applicable across multiple images are gaining popularity due to their innate generalizability. There have also been efforts to restrict the perturbations to a few pixels in the image, which helps to retain visual similarity with the original images and makes such attacks hard to detect. This paper marks an important step that combines all of these directions of research. We propose the DEceit algorithm for constructing effective universal pixel-restricted perturbations using only black-box feedback from the target network. We conduct empirical investigations on state-of-the-art deep neural classifiers using the ImageNet validation set, varying the number of perturbed pixels from a meager 10 up to all of the pixels in the image. We find that perturbing only about 10% of the pixels in an image using DEceit achieves a commendable and highly transferable Fooling Rate while retaining visual quality. We further demonstrate that DEceit can be successfully applied to image-dependent attacks as well. In both sets of experiments, we outperform several state-of-the-art methods. [ABSTRACT FROM AUTHOR]
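The record does not spell out the DEceit optimizer's details, but the abstract's core idea, searching for a small set of perturbed pixels with Differential Evolution using only the target network's output probabilities, can be illustrated in code. The sketch below is a generic DE/rand/1/bin loop for an untargeted, pixel-restricted attack on a single image, not the paper's method; the black-box oracle `predict_probs`, the HWC image layout with values in [0, 1], and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's DEceit algorithm): a generic
# DE/rand/1/bin search for k perturbed pixels that lower a black-box
# classifier's confidence in the true class. `predict_probs` is a
# hypothetical oracle mapping an image to a class-probability vector.
import numpy as np

def de_pixel_attack(predict_probs, image, true_label, k=10, pop_size=40,
                    iters=100, F=0.5, CR=0.9, seed=None):
    rng = np.random.default_rng(seed)
    h, w, c = image.shape                      # assumes HWC, values in [0, 1]
    dim = k * (2 + c)                          # per pixel: row, col, c channels
    lo = np.tile([0.0, 0.0] + [0.0] * c, k)
    hi = np.tile([h - 1.0, w - 1.0] + [1.0] * c, k)

    def apply_pixels(candidate):
        adv = image.copy()
        for p in candidate.reshape(k, 2 + c):
            r, col = int(round(p[0])), int(round(p[1]))
            adv[r, col] = p[2:]                # overwrite the chosen pixel
        return adv

    def fitness(candidate):
        # Lower true-class probability is better; only black-box queries.
        return predict_probs(apply_pixels(candidate))[true_label]

    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([fitness(x) for x in pop])

    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, d = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - d), lo, hi)    # DE/rand/1 mutation
            cross = rng.random(dim) < CR                 # binomial crossover
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_score = fitness(trial)
            if trial_score < scores[i]:                  # greedy selection
                pop[i], scores[i] = trial, trial_score

    best = int(np.argmin(scores))
    return apply_pixels(pop[best]), float(scores[best])
```

A universal variant, as described in the abstract, would evaluate the same candidate pixel set against a batch of images and score it by the fraction of them it fools.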

Details

Language :
English
ISSN :
0031-3203
Volume :
122
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
153325155
Full Text :
https://doi.org/10.1016/j.patcog.2021.108279