Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems
- Publication Year :
- 2019
Abstract
- Artificial Intelligence is among the most influential technological fields driving the rapid developments that shape the future of the world. Even today, intelligent systems are used intensively in almost every field of life. Although the advantages of Artificial Intelligence are widely recognized, there is also a dark side: efforts to design hacking-oriented techniques against Artificial Intelligence. With such techniques, it is possible to trick intelligent systems into producing deliberately wrong or unsuccessful outputs. This is also critical for the cyber wars of the future, as it is predicted that such wars will be fought by unmanned, autonomous intelligent systems. Against this background, the objective of this study is to provide information on adversarial examples threatening Artificial Intelligence and to focus on the details of some techniques used for creating them. Adversarial examples are specially crafted data samples that can trick a Machine Learning technique into learning the target problem incorrectly, ultimately yielding an unsuccessful or maliciously directed intelligent system. The study enables readers to learn in sufficient detail about recent techniques for creating adversarial examples.
- Comment: International Science and Innovation Congress 2019, pp. 643-655, 13 pages, 10 figures
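- Note: the record does not list which adversarial-example techniques the paper covers. The sketch below illustrates one widely known method, the Fast Gradient Sign Method (FGSM), purely as an example of how an input can be perturbed to mislead a trained classifier; the model, data, and epsilon value are placeholders, not taken from the paper.

```python
# Minimal FGSM sketch (illustrative only): perturb an input in the
# direction that increases the classifier's loss, bounded by epsilon.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x for the given labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient to increase the loss.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    # Keep pixel values in a valid range (assumes inputs scaled to [0, 1]).
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage with any differentiable classifier:
# x_adv = fgsm_attack(model, x, y, epsilon=0.05)
# fooling_rate = (model(x_adv).argmax(1) != y).float().mean()
```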
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1910.06907
- Document Type :
- Working Paper