1. The Security Threat of Adversarial Samples to Deep Learning Networks
- Author
Yilian Zhang, Binbin Wang, Yan Chen, and Minjie Zhu
- Subjects
Discriminator, business.industry, Computer science, Deep learning, Sample (statistics), Computer security, computer.software_genre, Semantics, Image (mathematics), Adversarial system, Robustness (computer science), Artificial intelligence, business, computer, Vulnerability (computing)
- Abstract
With the prosperity of artificial intelligence, machine learning research has become a hot topic worldwide. Generative Adversarial Networks (GANs) expose serious security risks in machine learning. Since their creation, GANs have achieved strong results in image generation, automatic image colorization, and data augmentation. As the ability to generate adversarial samples against deep learning networks has improved, crafting malicious samples that deceive a target model's discriminator has become an effective and harmful attack method. Several efficient attack methods have already been proposed for different types of learning networks and different types of sample data. This paper mainly discusses the vulnerability of deep learning networks and several attack methods based on adversarial samples.
- Published
- 2020
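As an illustration of the attack idea described in the abstract, below is a minimal sketch of a Fast Gradient Sign Method (FGSM) style perturbation in PyTorch. This is an assumed example for context only, not a method taken from the paper; the toy model, placeholder input, label, and epsilon value are all hypothetical.

```python
# Illustrative FGSM-style adversarial sample sketch (not from the paper).
# The model and data below are placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x within an epsilon bound so the model's loss on the true label increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to a valid image range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # placeholder input
y = torch.tensor([3])          # placeholder true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays within epsilon
```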