AdvOps: Decoupling adversarial examples.
- Author
- Wang, Donghua; Yao, Wen; Jiang, Tingsong; Chen, Xiaoqian
- Subjects
- *MATHEMATICAL decoupling; *EUCLIDEAN distance; *PREDICTION models; *MULTICASTING (Computer networks)
- Abstract
Adversarial examples have a simple additive structure: a clean sample plus delicately devised noise. Inspired by this observation, we find that the network's prediction on an adversarial example can likewise be decoupled into a simple additive structure, namely the sum of the model's predictions on the clean sample and on the adversarial perturbation (called the decoupling principle). Our finding can therefore serve as a useful tool for gaining insight into the underlying relationship between the inputs and outputs of the model. However, although adversarial examples generated by existing attack methods can satisfy the decoupling principle, only a small proportion of them do. In this paper, we formulate the above issue as an optimization problem with multiple constraints and propose a generative model that produces adversarial examples satisfying the decoupling principle while simultaneously achieving high attack performance. Specifically, we first adopt an adversarial loss to ensure attack performance. Then, we devise a decouple loss to guarantee the decoupling principle. Moreover, we treat the Euclidean distance of the perturbation as a regularization term to maintain visual quality. Extensive experiments against various networks on ImageNet and CIFAR10 show that the proposed method outperforms comparison methods on the comprehensive metric. Furthermore, transferability results suggest that adversarial examples satisfying the decoupling principle transfer better.
• We find that the prediction on an adversarial example can be decoupled into the sum of the model's predictions on the clean sample and on the perturbation, which can serve as a useful tool for gaining insight into the underlying relationship between the inputs and outputs.
• We propose a generative-model-based method to craft adversarial perturbations that satisfy the decoupling principle while simultaneously achieving superior attack performance. Moreover, the decouple loss is devised to guide the generative model toward ensuring the decoupling principle.
• We conduct extensive experiments against different networks on the complex ImageNet and the simpler CIFAR10 datasets. Experimental results suggest that the proposed method outperforms the comparison methods by a large margin on the devised metric that balances attack performance and the decoupling principle. [ABSTRACT FROM AUTHOR]
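The decoupling principle described in the abstract can be illustrated with a small numeric sketch: feed a clean sample, a perturbation, and their sum through the same network, and compare the prediction on the sum against the sum of the individual predictions. This is a minimal toy illustration, not the paper's method: the two-layer ReLU network, the random "perturbation" (standing in for an actual adversarial attack), and the relative-error measure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, W1, W2):
    # Toy two-layer ReLU network returning raw logits;
    # stands in for the classifier under attack (an assumption).
    return np.maximum(x @ W1, 0.0) @ W2

W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))

x = rng.normal(size=(8,))            # "clean sample"
delta = 0.1 * rng.normal(size=(8,))  # stand-in "adversarial perturbation"

lhs = model(x + delta, W1, W2)                 # prediction on the adversarial example
rhs = model(x, W1, W2) + model(delta, W1, W2)  # sum of the individual predictions

# Decoupling error: how far the prediction is from being additive.
# The decouple loss in the paper pushes this kind of gap toward zero.
err = np.linalg.norm(lhs - rhs) / np.linalg.norm(lhs)
print(f"relative decoupling error: {err:.3f}")
```

For a purely linear model the error would be exactly zero (no ReLU, no bias), which is why the decoupling principle holds only approximately, and only for some adversarial examples, on nonlinear networks.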
- Published
- 2024