Adversarial Example Detection and Restoration Defensive Framework for Signal Intelligent Recognition Networks.

Authors :
Han, Chao
Qin, Ruoxi
Wang, Linyuan
Cui, Weijia
Li, Dongyang
Yan, Bin
Source :
Applied Sciences (2076-3417); Nov2023, Vol. 13 Issue 21, p11880, 22p
Publication Year :
2023

Abstract

Deep learning-based automatic modulation recognition networks are susceptible to adversarial attacks, which exposes significant performance vulnerabilities. In response, we introduce a defense framework built around tailored autoencoder (AE) techniques. Our design features a detection AE that combines reconstruction errors with convolutional neural networks to extract deep features, employing thresholds on the reconstruction error and the Kullback–Leibler divergence to identify adversarial samples and the attack mechanisms that generated them. Additionally, a restoration AE with a multi-layered structure effectively restores adversarial samples generated via optimization-based methods, ensuring accurate classification. Tested rigorously on the RML2016.10a dataset, our framework proves robust against adversarial threats and offers a versatile defense solution compatible with various deep learning models.
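The detection idea described in the abstract can be pictured with a short sketch: an autoencoder reconstructs an I/Q signal frame, and a sample is flagged as adversarial if its reconstruction error or the Kullback–Leibler divergence between the classifier's predictions on the original and reconstructed frames exceeds a threshold. This is a minimal illustration only; the layer sizes, threshold values, and function names (DetectionAE, detect) below are assumptions for the sketch, not the authors' published architecture, and in the paper the thresholds are calibrated rather than fixed constants.

```python
# Minimal sketch (PyTorch) of autoencoder-based adversarial detection.
# Architecture, thresholds, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetectionAE(nn.Module):
    """1D convolutional autoencoder over I/Q frames of shape (2, 128)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(32, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 2, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def detect(ae, classifier, x, err_thresh=0.02, kl_thresh=0.1):
    """Flag a batch of samples as adversarial.

    A sample is flagged if either its reconstruction error or the KL
    divergence between the classifier's predictions on the original and
    reconstructed signal exceeds its (placeholder) threshold.
    """
    with torch.no_grad():
        x_rec = ae(x)
        # Per-sample mean squared reconstruction error.
        rec_err = F.mse_loss(x_rec, x, reduction="none").mean(dim=(1, 2))
        # KL(p || q): p = prediction on original, q = prediction on reconstruction.
        p = F.softmax(classifier(x), dim=1)
        log_q = F.log_softmax(classifier(x_rec), dim=1)
        kl = F.kl_div(log_q, p, reduction="none").sum(dim=1)
    return (rec_err > err_thresh) | (kl > kl_thresh)
```

In use, `classifier` would be a modulation recognition network trained on RML2016.10a frames; samples flagged by `detect` would then be passed to the restoration AE described in the abstract before final classification.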

Details

Language :
English
ISSN :
2076-3417
Volume :
13
Issue :
21
Database :
Complementary Index
Journal :
Applied Sciences (2076-3417)
Publication Type :
Academic Journal
Accession number :
173566777
Full Text :
https://doi.org/10.3390/app132111880