
ASCL: Adversarial supervised contrastive learning for defense against word substitution attacks.

Authors :
Shi, Jiahui
Li, Linjing
Zeng, Daniel
Source :
Neurocomputing, Oct 2022, Vol. 510, p. 59-68. 10 p.
Publication Year :
2022

Abstract

Adversarial examples can drastically degrade the performance of deep neural networks (DNNs), so defending against adversarial attacks is crucial for nearly all DNN-based applications. Adversarial training, in which benign examples and their adversarial counterparts are trained on together, is an effective and widely adopted approach for increasing the robustness of DNNs. However, it can reduce accuracy on benign examples because it does not account for their inter-class distance. To overcome this dilemma, we devise a novel defense named adversarial supervised contrastive learning (ASCL), which combines adversarial training with supervised contrastive learning to enhance the robustness of DNN-based models while maintaining their clean accuracy. We validate the effectiveness of ASCL for defending against word substitution attacks through extensive experiments on benchmark tasks and datasets. The results show that ASCL reduces the attack success rate to 20% while keeping accuracy on clean inputs within a 2% margin. [ABSTRACT FROM AUTHOR]
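The abstract describes combining adversarial training with supervised contrastive learning. As an illustration only (not the paper's implementation), a minimal NumPy sketch of the standard supervised contrastive (SupCon) loss that such a scheme builds on might look as follows; the function name, temperature value, and the comment about batching benign/adversarial pairs are assumptions, not details from the source:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    z: (N, d) embeddings (L2-normalized internally); labels: (N,) ints.
    In an ASCL-style setup (an assumption here), the batch would hold each
    benign example together with its adversarial counterpart under the same
    label, pulling the pair together while pushing other classes apart.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / tau                 # pairwise cosine similarities / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # denominator: sum over all samples in the batch except the anchor itself
    log_denom = np.log(np.where(self_mask, 0.0, np.exp(sim)).sum(axis=1))
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    per_anchor = []
    for i in range(n):
        if pos[i].any():                  # skip anchors with no positives
            per_anchor.append(-np.mean(sim[i, pos[i]] - log_denom[i]))
    return float(np.mean(per_anchor))
```

In a combined objective of the kind the abstract sketches, this contrastive term would be added to the usual task loss (e.g. cross-entropy) computed on both benign and adversarial inputs.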

Details

Language :
English
ISSN :
0925-2312
Volume :
510
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
159329173
Full Text :
https://doi.org/10.1016/j.neucom.2022.09.032