Improving the robustness of adversarial attacks using an affine-invariant gradient estimator.
- Author
- Xiang, Wenzhao; Su, Hang; Liu, Chang; Guo, Yandong; Zheng, Shibao
- Subjects
Artificial neural networks; Euclidean geometry; Analytic geometry; Plane geometry; Affine transformations; Statistics
- Abstract
As designers of artificial intelligence try to outwit hackers, both sides continue to home in on AI's inherent vulnerabilities. Because they are designed and trained on particular statistical distributions of data, deep neural networks (DNNs) remain vulnerable to deceptive inputs that violate a DNN's statistical, predictive assumptions. Most existing adversarial examples, however, cannot maintain their malicious functionality when an affine transformation is applied before they are fed into a neural network. For practical purposes, maintaining that malicious functionality is an important measure of the robustness of an adversarial attack. To help DNNs learn to defend themselves more thoroughly against attacks, we propose an affine-invariant adversarial attack that consistently produces adversarial examples robust to affine transformations. For efficiency, we disentangle the affine transformation into its geometric translations, rotations, and dilations: translations are handled in the Euclidean coordinate plane, while rotations and dilations are reformulated in polar coordinates. We then construct an affine-invariant gradient estimator by convolving the gradient at the original image with derived kernels, which can be integrated with any gradient-based attack method. Extensive experiments on ImageNet, including experiments under physical conditions, demonstrate that our method significantly improves the affine invariance of adversarial examples and, as a byproduct, improves their transferability compared with alternative state-of-the-art methods.

• We introduce an attack framework to improve the affine invariance of attacks.
• We design a gradient estimator to improve the efficiency of our algorithm.
• Physical experiments show the robustness of our attack to complex transformations.
• Our attack can serve as a good initialization for query-based black-box attacks.

[ABSTRACT FROM AUTHOR]
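To illustrate the kernel-convolution idea described in the abstract, here is a minimal NumPy sketch. The paper's actual kernels are derived from the polar-coordinate reformulation and are not given in the abstract, so a Gaussian kernel is used below purely as a placeholder; the functions `gaussian_kernel`, `smooth_gradient`, and `fgsm_step` are illustrative names, not the authors' API. The sketch smooths a single-channel gradient map by convolving it with the kernel, then takes one FGSM-style sign step.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Placeholder kernel; the paper derives its own affine-invariant kernels."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # normalize so the smoothed gradient keeps its scale

def smooth_gradient(grad, kernel):
    """Convolve a 2-D gradient map with the kernel (zero padding, same size),
    approximating an expectation of the gradient over transformations."""
    h, w = grad.shape
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(grad, pad)
    out = np.empty_like(grad, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + ks, j:j + ks] * kernel)
    return out

def fgsm_step(image, grad, eps=0.03, kernel=None):
    """One sign-gradient step; any gradient-based attack could slot in here."""
    if kernel is not None:
        grad = smooth_gradient(grad, kernel)
    return np.clip(image + eps * np.sign(grad), 0.0, 1.0)

# Toy usage: an 8x8 "image" in [0, 1] and a random gradient map.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
grad = rng.standard_normal((8, 8))
adversarial = fgsm_step(image, grad, eps=0.03, kernel=gaussian_kernel())
```

In the paper's actual method the convolution replaces the costly alternative of sampling many affine transformations and averaging their gradients, which is where the claimed efficiency gain comes from.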
- Published
- 2023