
AudioGuard: Speech Recognition System Robust against Optimized Audio Adversarial Examples.

Authors :
Kwon, Hyun
Source :
Multimedia Tools & Applications; Jun 2024, Vol. 83 Issue 20, p57943-57962, 20p
Publication Year :
2024

Abstract

Deep neural networks provide good performance in image recognition, voice recognition, pattern recognition, and intrusion detection. However, they are vulnerable to adversarial examples: samples created by adding a small amount of noise to normal data so that humans still perceive them as normal, while a target model misclassifies them. In this paper, we propose a method for defending against audio adversarial examples using a noise vector, without the need for a separate module or process. The proposed method correctly identifies adversarial examples while maintaining the model's accuracy on normal samples. In our experiments, the Mozilla Common Voice dataset was used as test data, with TensorFlow as the machine learning library. The experimental results showed that the proposed method correctly identified the adversarial examples with 84.2% accuracy.
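
The abstract does not spell out the detection procedure, but the general idea of noise-vector-based detection can be sketched as follows: transcribe the incoming audio once as-is and once with a small random noise vector added, then flag the input as adversarial if the two transcriptions diverge, since optimized adversarial perturbations tend to be far more brittle to added noise than normal speech. The sketch below is an illustration of that idea only, not the paper's algorithm; the `transcribe` callable, the noise scale, and the similarity threshold are all assumptions introduced here.

```python
# Illustrative sketch of noise-vector-based detection of audio adversarial
# examples. This is NOT the paper's exact method; `transcribe`, `noise_scale`,
# and `threshold` are hypothetical placeholders for illustration.

import difflib
import numpy as np


def detect_adversarial(audio: np.ndarray,
                       transcribe,               # hypothetical: waveform -> text
                       noise_scale: float = 0.002,
                       threshold: float = 0.8,
                       seed: int = 0) -> bool:
    """Flag `audio` as adversarial if its transcription changes markedly
    once a small random noise vector is added."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_scale, size=audio.shape).astype(audio.dtype)

    text_clean = transcribe(audio)           # transcription of the raw input
    text_noisy = transcribe(audio + noise)   # transcription after adding noise

    # Low similarity between the two transcriptions suggests the input relied
    # on a brittle, optimized perturbation, i.e. it is likely adversarial.
    similarity = difflib.SequenceMatcher(None, text_clean, text_noisy).ratio()
    return similarity < threshold


if __name__ == "__main__":
    # Toy stand-in for a real speech-to-text model (e.g. a DeepSpeech-style
    # TensorFlow network); it only exists to show the call flow.
    def fake_transcribe(waveform: np.ndarray) -> str:
        return "open the door" if waveform.mean() >= 0 else "open the floor"

    sample = np.zeros(16000, dtype=np.float32)  # 1 s of silence at 16 kHz
    print(detect_adversarial(sample, fake_transcribe))
```

In practice, `transcribe` would wrap the target speech-to-text model, and the threshold would be tuned on held-out normal samples so that accuracy on benign audio is preserved, in line with the trade-off the abstract describes.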

Details

Language :
English
ISSN :
1380-7501
Volume :
83
Issue :
20
Database :
Complementary Index
Journal :
Multimedia Tools & Applications
Publication Type :
Academic Journal
Accession number :
177623233
Full Text :
https://doi.org/10.1007/s11042-023-15961-2