
A Siamese Network Defense Model Combining Adversarial Training and Feature Mixing.

Authors :
张新君
程雨晴
Source :
Application Research of Computers / Jisuanji Yingyong Yanjiu. Mar2024, Vol. 41 Issue 3, p905-910. 6p.
Publication Year :
2024

Abstract

Neural network models are vulnerable to adversarial example attacks. Current defense methods either focus on modifying the model structure or rely on adversarial training alone, which yields a single type of defense and degrades the model's classification accuracy and efficiency. To address this, a method that combines adversarial training with feature mixing to train a siamese neural network model (SS-ResNet18) is proposed. The method mixes training samples by linear interpolation, builds a siamese network from residual attention modules, and feeds PGD adversarial examples and clean examples into different branches of the network for training. Input features of neighboring sample pairs are interchanged in feature space to enhance the network's resistance to perturbation, and the adversarial loss and classification loss are combined into the network's overall loss function, with label smoothing applied. In experiments on the CIFAR-10 and SVHN datasets, the method shows excellent defense performance under white-box attacks, and the model's defense success rate against adversarial examples such as PGD and JSMA under black-box attacks exceeds 80%; at the same time, the training time of the SS-ResNet18 model is only one-half that of the subspace adversarial training method. The experimental results show that the SS-ResNet18 model can defend against a variety of adversarial example attacks and, compared with existing defense methods, is more robust and less time-consuming to train. [ABSTRACT FROM AUTHOR]
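The core ingredients the abstract names, linear-interpolation mixing of training samples (mixup-style), PGD perturbation, label smoothing, and a combined loss, can be sketched roughly as follows. This is an illustrative sketch only: the function names, the Beta-distributed mixing coefficient, and the loss weight `beta` are assumptions, not details taken from the paper.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mix two samples and their labels by linear interpolation (mixup-style).

    The Beta(alpha, alpha) sampling of the mixing coefficient is an
    assumption borrowed from standard mixup, not from the paper.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def smooth_labels(onehot, eps=0.1):
    """Label smoothing: shift eps of the probability mass to uniform."""
    k = onehot.shape[-1]
    return onehot * (1 - eps) + eps / k

def pgd_perturb(x, grad_fn, eps=8/255, step=2/255, iters=10):
    """Generate a PGD perturbation: iterated signed-gradient steps,
    projected back onto the L-infinity ball of radius eps around x.
    grad_fn(x_adv) must return the loss gradient w.r.t. the input."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
    return x_adv

def combined_loss(classification_loss, adversarial_loss, beta=0.5):
    """Overall objective as a weighted sum of the two branch losses;
    the weighting scheme here is a placeholder assumption."""
    return beta * classification_loss + (1 - beta) * adversarial_loss
```

In a training loop along the lines the abstract describes, the clean (mixed) batch and its PGD-perturbed counterpart would go into the two siamese branches, and the smoothed labels would be used in both loss terms.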

Details

Language :
Chinese
ISSN :
1001-3695
Volume :
41
Issue :
3
Database :
Academic Search Index
Journal :
Application Research of Computers / Jisuanji Yingyong Yanjiu
Publication Type :
Academic Journal
Accession number :
176137462
Full Text :
https://doi.org/10.19734/j.issn.1001-3695.2023.07.0318