
Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning

Authors :
Yang, Yulong
Lin, Chenhao
Ji, Xiang
Tian, Qiwei
Li, Qian
Yang, Hongshan
Wang, Zhibo
Shen, Chao
Publication Year :
2023

Abstract

Transfer-based adversarial attacks pose a severe threat to real-world deep learning systems since they do not require access to target models. Adversarial training (AT), which is recognized as the strongest defense against white-box attacks, also provides high robustness against (black-box) transfer-based attacks. However, AT suffers from heavy computational overhead since it optimizes adversarial examples throughout the whole training process. In this paper, we demonstrate that such heavy optimization is not necessary for AT against transfer-based attacks. Instead, a one-shot adversarial augmentation prior to training is sufficient, and we name this new defense paradigm Data-centric Robust Learning (DRL). Our experimental results show that DRL outperforms widely used AT techniques (e.g., PGD-AT, TRADES, EAT, and FAT) in terms of black-box robustness and even surpasses the top-1 defense on RobustBench when combined with diverse data augmentations and loss regularizations. We also identify other benefits of DRL, such as improved model generalization and robust fairness.
Comment: 9 pages
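The abstract's core idea, generating adversarial examples once before training rather than at every training step, can be illustrated with a minimal sketch. The snippet below assumes a PGD-style attack against a fixed surrogate model; the function names (pgd_attack, build_augmented_dataset), hyperparameters, and overall recipe are illustrative assumptions, not the exact procedure from the paper.

    # Sketch of one-shot adversarial data augmentation (assumed PGD-style attack,
    # not the paper's exact recipe).
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
        """L-inf bounded PGD adversarial examples against a fixed surrogate model."""
        x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x.detach() + (x_adv - x.detach()).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()

    def build_augmented_dataset(surrogate, loader):
        """One-shot pass: augment the clean training set with adversarial copies.

        Unlike adversarial training, this runs once before training; a fresh model
        is then trained on the fixed augmented set with a standard training loop.
        """
        surrogate.eval()
        augmented = []
        for x, y in loader:
            x_adv = pgd_attack(surrogate, x, y)
            augmented.append((torch.cat([x, x_adv]), torch.cat([y, y])))
        return augmented

In contrast to adversarial training, where the attack is re-run inside every training iteration, the cost of this augmentation is paid a single time, which is the computational saving the abstract highlights.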

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438489350
Document Type :
Electronic Resource