1. Playing Against Deep-Neural-Network-Based Object Detectors: A Novel Bidirectional Adversarial Attack Approach
- Author
- Chenglin Liu, Shen Yin, Shaochong Liu, Xiang Li, Hao Luo, and Yuchen Jiang
- Subjects
Adversarial system, Class (computer programming), Similarity (geometry), Artificial neural network, business.industry, Computer science, Deep learning, Artificial intelligence, business, Object (computer science), Autoencoder, Object detection
- Abstract
In the fields of deep learning and computer vision, the security of object detection models has received extensive attention, and revealing the vulnerabilities exposed by adversarial attacks has become one of the most important research directions. Existing studies show that object detection models can be threatened by adversarial examples just like other deep-neural-network-based models, e.g., those for classification. In this paper, we propose a bidirectional adversarial attack method. Firstly, the added perturbation pushes the predictions of the object detector away from the ground-truth class and toward the background class. Secondly, a confidence loss function is designed for the region proposal network to reduce the foreground scores. Thirdly, the adversarial examples are generated by a pre-trained autoencoder, and the model is trained adversarially, which enhances the similarity between the adversarial examples and the original image and speeds up convergence. The proposed method was verified on the most popular two-stage detection framework (Faster R-CNN), where it produced a 55.1% mAP drop. In addition, the adversarial examples transfer well: applying them to the common one-stage detection framework (YOLOv3) yields a 39.5% mAP drop. (See the loss sketch after this entry.)
- Published
- 2022
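
As a reading aid, here is a minimal PyTorch sketch of the two attack losses the abstract describes: a bidirectional classification loss (away from the ground-truth class, toward the background class) and a confidence loss that suppresses the region proposal network's foreground scores. The function names, tensor shapes, loss weight, and background-class index are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the abstract's two attack losses. Everything here
# (names, shapes, the background index, the 0.5 weight) is assumed
# for illustration; it is not the paper's actual code.
import torch
import torch.nn.functional as F


def bidirectional_class_loss(class_logits: torch.Tensor,
                             gt_labels: torch.Tensor,
                             background_idx: int = 0) -> torch.Tensor:
    """Push per-RoI predictions away from the ground-truth class while
    pulling them toward the background class ("bidirectional").

    class_logits: (N, C) classification logits for N region proposals.
    gt_labels:    (N,)   ground-truth class indices.
    """
    log_probs = F.log_softmax(class_logits, dim=-1)
    # Minimizing +log p(ground truth) drives that probability down.
    away_from_gt = log_probs.gather(1, gt_labels.unsqueeze(1)).mean()
    # Standard cross-entropy toward background drives p(background) up.
    toward_background = -log_probs[:, background_idx].mean()
    return away_from_gt + toward_background


def rpn_confidence_loss(objectness_logits: torch.Tensor) -> torch.Tensor:
    """Confidence loss on the region proposal network: minimizing the
    mean foreground probability suppresses object proposals."""
    return torch.sigmoid(objectness_logits).mean()


if __name__ == "__main__":
    # Random tensors stand in for detector outputs (e.g., 20 object
    # classes plus background at index 0).
    torch.manual_seed(0)
    n_rois, n_classes = 8, 21
    class_logits = torch.randn(n_rois, n_classes)
    gt_labels = torch.randint(1, n_classes, (n_rois,))
    objectness_logits = torch.randn(n_rois)

    total = (bidirectional_class_loss(class_logits, gt_labels)
             + 0.5 * rpn_confidence_loss(objectness_logits))
    print(f"attack loss: {total.item():.4f}")
```

In the full method this objective would presumably be combined with a reconstruction term (e.g., a distance between the autoencoder's adversarial output and the clean image) so the generated example stays visually close to the original, per the abstract's third point.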