
Transferable adversarial sample purification by expanding the purification space of diffusion models.

Authors :
Ji, Jun
Gao, Song
Zhou, Wei
Source :
Visual Computer. Feb 2024, p1-13.
Publication Year :
2024

Abstract

Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial samples, and many powerful defense methods have been proposed to enhance the adversarial robustness of DNNs. However, these defenses often require adding regularization terms to the loss function or augmenting the training data, which typically involves modifying the target model and increases computational cost. In this paper, we propose a novel adversarial defense approach that leverages a diffusion model with a large purification space to purify potential adversarial samples, and we introduce two training strategies, termed PSPG and PDPG, to defend against different attacks. Our method preprocesses adversarial examples before they are input into the target model and can therefore protect DNNs at the inference phase. It requires no modification of the target model and can protect even already-deployed models. Extensive experiments on CIFAR-10 and ImageNet demonstrate that our method achieves good accuracy and transferability and provides effective protection for different models in various defense scenarios. Our code is available at: https://github.com/YNU-JI/PDPG. [ABSTRACT FROM AUTHOR]
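The abstract describes purification as an inference-time preprocessing step: noisy/adversarial inputs pass through a diffusion model before reaching the unmodified target classifier. The sketch below illustrates that general pipeline only; the `DiffusionPurifier`, its timestep choice `t_star`, and the toy denoiser/classifier are illustrative assumptions, not the paper's PSPG/PDPG training strategies.

```python
# Minimal sketch of diffusion-based adversarial purification as a
# preprocessing step, assuming a pretrained noise-prediction denoiser
# and an unmodified target classifier. All names and hyperparameters
# here are illustrative placeholders, not the authors' implementation.
import torch
import torch.nn as nn


class DiffusionPurifier(nn.Module):
    """Noise the input part-way into the forward diffusion process,
    then denoise it back, washing out adversarial perturbations."""

    def __init__(self, denoiser: nn.Module, t_star: int = 100, num_steps: int = 1000):
        super().__init__()
        self.denoiser = denoiser
        self.t_star = t_star
        # Standard DDPM-style linear beta schedule and cumulative alphas.
        betas = torch.linspace(1e-4, 2e-2, num_steps)
        self.register_buffer("alpha_bar", torch.cumprod(1.0 - betas, dim=0))

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a_bar = self.alpha_bar[self.t_star]
        # Forward diffusion to timestep t_star.
        noise = torch.randn_like(x)
        x_t = a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * noise
        # One-shot reverse estimate of the clean image from predicted noise.
        t = torch.full((x.size(0),), self.t_star, device=x.device)
        eps_hat = self.denoiser(x_t, t)
        x_0 = (x_t - (1.0 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()
        return x_0.clamp(0.0, 1.0)


def purified_predict(purifier: nn.Module, classifier: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Purify first, then query the target classifier, which stays untouched."""
    return classifier(purifier(x))


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real use would load a
    # pretrained diffusion model and the deployed target classifier.
    class ToyDenoiser(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(3, 3, 3, padding=1)

        def forward(self, x, t):  # the toy model ignores the timestep
            return self.net(x)

    classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    purifier = DiffusionPurifier(ToyDenoiser(), t_star=100)
    x_adv = torch.rand(4, 3, 32, 32)  # stand-in for adversarial inputs
    logits = purified_predict(purifier, classifier, x_adv)
    print(logits.shape)  # torch.Size([4, 10])
```

Because purification happens entirely outside the classifier, this design matches the abstract's claim that the defense needs no changes to the target model and can be placed in front of already-deployed models.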

Details

Language :
English
ISSN :
0178-2789
Database :
Academic Search Index
Journal :
Visual Computer
Publication Type :
Academic Journal
Accession number :
175407831
Full Text :
https://doi.org/10.1007/s00371-023-03253-7