
Dynamic Label Adversarial Training for Deep Learning Robustness Against Adversarial Attacks

Authors :
Liu, Zhenyu
Duan, Haoran
Liang, Huizhi
Long, Yang
Snasel, Vaclav
Nicosia, Giuseppe
Ranjan, Rajiv
Ojha, Varun
Source :
31st International Conference on Neural Information Processing (ICONIP), 2024
Publication Year :
2024

Abstract

Adversarial training is one of the most effective methods for enhancing model robustness. Recent approaches incorporate adversarial distillation into adversarial training architectures. However, we identify two limitations of existing defense methods: (1) previous methods rely primarily on static ground-truth labels for adversarial training, which often causes robust overfitting; and (2) their loss functions, typically mean squared error or KL-divergence, lead to sub-optimal clean accuracy. To address these problems, we propose a dynamic label adversarial training (DYNAT) algorithm that enables the target model to gradually and dynamically gain robustness from the guide model's decisions. Additionally, we find that a budgeted dimension of inner optimization for the target model can contribute to the trade-off between clean accuracy and robust accuracy. We therefore propose a novel inner optimization method to be incorporated into adversarial training, which enables the target model to adaptively search for adversarial examples based on the dynamic labels from the guide model, further contributing to the robustness of the target model. Extensive experiments validate the superior performance of our approach.
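
To make the idea concrete, below is a minimal sketch of one training step in the spirit of dynamic label adversarial training, assuming a PyTorch-style setup with images scaled to [0, 1]. All names (dynat_step, guide_model, target_model, epsilon, alpha, pgd_steps) and the exact loss combination are illustrative assumptions, not the paper's precise formulation: the inner PGD-style search and the outer update both use labels produced by the guide model instead of static ground truth.

```python
# Hedged sketch of a dynamic-label adversarial training step (not the authors' code).
# Assumes PyTorch, inputs in [0, 1], and an L-infinity perturbation budget.
import torch
import torch.nn.functional as F

def dynat_step(target_model, guide_model, x, epsilon=8/255, alpha=2/255, pgd_steps=10):
    """One illustrative step: craft adversarial examples against dynamic labels
    from the guide model, then compute the target model's training loss on them."""
    guide_model.eval()
    with torch.no_grad():
        # Dynamic labels: the guide model's current predictions replace static ground truth.
        dynamic_labels = guide_model(x).argmax(dim=1)

    # Inner maximization: PGD-style search for adversarial examples w.r.t. the dynamic labels.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(pgd_steps):
        logits = target_model(x + delta)
        loss = F.cross_entropy(logits, dynamic_labels)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
            delta = (x + delta).clamp(0.0, 1.0) - x  # keep perturbed images in valid range
        delta.requires_grad_(True)

    # Outer minimization: train the target model on the adversarial examples,
    # again supervised by the guide model's dynamic labels (the paper may mix in
    # ground-truth or soft-label terms; this single term is an assumption).
    target_model.train()
    adv_logits = target_model(x + delta.detach())
    return F.cross_entropy(adv_logits, dynamic_labels)
```

In a training loop, the returned loss would be backpropagated through the target model only, with the guide model held fixed (or updated on its own schedule); that division of roles is an assumption consistent with the abstract's description of the target model gaining robustness from the guide model's decisions.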

Details

Database :
arXiv
Journal :
31st International Conference on Neural Information Processing (ICONIP), 2024
Publication Type :
Report
Accession number :
edsarx.2408.13102
Document Type :
Working Paper