1. Adversarial Example Soups: Improving Transferability and Stealthiness for Free
- Author
Yang, Bo; Zhang, Hengwei; Wang, Jindong; Yang, Yulong; Lin, Chenhao; Shen, Chao; and Zhao, Zhengyu
- Abstract
Transferable adversarial examples pose practical security risks since they can mislead a target model without any access to its internals. A conventional recipe for maximizing transferability is to keep only the optimal adversarial example from all those obtained in the optimization pipeline. In this paper, for the first time, we question this convention and demonstrate that the discarded, sub-optimal adversarial examples can be reused to boost transferability. Specifically, we propose ``Adversarial Example Soups'' (AES), with AES-tune for averaging the adversarial examples discarded during hyperparameter tuning and AES-rand for averaging those produced during stability testing. AES is inspired by ``model soups'', which averages the weights of multiple fine-tuned models to improve accuracy without increasing inference time. Extensive experiments validate the global effectiveness of AES, boosting 10 state-of-the-art transfer attacks and their combinations by up to 13% against 10 diverse (defensive) target models. We also show that AES generalizes to other settings, e.g., directly averaging multiple in-the-wild adversarial examples, with comparable success. A promising byproduct of AES is improved stealthiness, since averaging naturally reduces the perturbation variance.
- Comment
Under review
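The core averaging operation described in the abstract can be illustrated with a minimal sketch, assuming the usual L_inf-bounded attack setting; this is not the authors' released code, and all function and variable names below are hypothetical. The idea is simply to take the pixel-wise mean of several adversarial examples crafted for the same clean image (e.g., from different hyperparameter settings or random restarts) and re-project the result into the allowed perturbation budget.

```python
import numpy as np

def aes_average(adv_examples, clean_image, epsilon=8 / 255):
    """Average several adversarial examples of the same clean image (AES-style),
    then project the result back into the L_inf ball of radius epsilon.

    adv_examples: list of arrays with identical shape, pixel values in [0, 1]
    clean_image:  array with the same shape, pixel values in [0, 1]
    """
    # Pixel-wise mean of the (otherwise discarded) adversarial examples.
    averaged = np.mean(np.stack(adv_examples, axis=0), axis=0)

    # Keep the averaged perturbation within the original L_inf budget.
    perturbation = np.clip(averaged - clean_image, -epsilon, epsilon)

    # Ensure the final result is still a valid image.
    return np.clip(clean_image + perturbation, 0.0, 1.0)


# Hypothetical usage: adv_1 ... adv_k come from different hyperparameter runs
# (AES-tune) or different randomized runs (AES-rand) of a transfer attack:
#   x_aes = aes_average([adv_1, adv_2, adv_3], x_clean)
```

Averaging also tends to cancel out the high-variance components of the individual perturbations, which is consistent with the abstract's observation that reduced perturbation variance improves stealthiness.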
- Published
2024