
AT-BOD: An Adversarial Attack on Fool DNN-Based Blackbox Object Detection Models.

Authors :
Elaalami, Ilham A.
Olatunji, Sunday O.
Zagrouba, Rachid M.
Source :
Applied Sciences (2076-3417); Feb2022, Vol. 12 Issue 4, p2003, 23p
Publication Year :
2022

Abstract

Object recognition is a fundamental concept in computer vision. Object detection models have recently played a vital role in various applications, including real-time and safety-critical systems such as camera surveillance and self-driving cars. Scientific research has shown that object detection models are prone to adversarial attacks. Although several methods have been proposed throughout the literature, they either target white-box models or task-specific black-box models and do not generalize to other detectors. In this paper, we propose a new adversarial attack against black-box object detectors called AT-BOD. The proposed AT-BOD model can fool both single-stage and multi-stage detectors, using an optimization algorithm to generate adversarial examples that depend only on the detector's predictions. The AT-BOD model works in two distinct ways: reducing the confidence score and misleading the model into making a wrong decision, or hiding objects from the detection models. Our solution achieved a fooling rate of 97% and a false-negative increase of 99% on the YOLOv3 detector, and a fooling rate of 61% and a false-negative increase of 57% on the Faster R-CNN detector. The detection accuracy of YOLOv3 and Faster R-CNN under AT-BOD was dramatically reduced, reaching ≤1% and ≤3%, respectively. [ABSTRACT FROM AUTHOR]
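
The abstract describes a query-based (score-only) black-box attack: an optimization loop that perturbs the input using nothing but the detector's returned confidences until detections are suppressed. The sketch below is a minimal, hypothetical illustration of that general idea, assuming a simple greedy random-search optimizer and a placeholder detector query; it is not the authors' AT-BOD algorithm, and all function names and parameters are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of a score-based black-box attack, NOT the AT-BOD method.
# It only assumes query access to a detector's confidence scores, mirroring the
# abstract's description of optimizing perturbations from predictions alone.

def detector_confidence(image):
    """Placeholder for a black-box detector query.

    In practice this would run a detector such as YOLOv3 or Faster R-CNN and
    return the maximum objectness/class confidence over all predicted boxes.
    Here it is a toy surrogate so the sketch stays runnable.
    """
    return float(np.clip(image.mean(), 0.0, 1.0))

def random_search_attack(image, eps=0.05, iters=200, seed=0):
    """Greedy random search that lowers the detector's confidence.

    Proposes small bounded perturbations and keeps only those that reduce the
    confidence score, eventually suppressing detections (a false-negative,
    object-hiding style of attack).
    """
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = detector_confidence(adv)
    for _ in range(iters):
        delta = rng.uniform(-eps, eps, size=image.shape)
        candidate = np.clip(adv + delta, 0.0, 1.0)
        score = detector_confidence(candidate)
        if score < best:  # keep the perturbation only if it fools the detector more
            adv, best = candidate, score
    return adv, best

if __name__ == "__main__":
    img = np.full((32, 32, 3), 0.8)  # toy "image" that starts with high confidence
    adv_img, conf = random_search_attack(img)
    print(f"confidence after attack: {conf:.3f}")
```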

Details

Language :
English
ISSN :
20763417
Volume :
12
Issue :
4
Database :
Complementary Index
Journal :
Applied Sciences (2076-3417)
Publication Type :
Academic Journal
Accession number :
155714701
Full Text :
https://doi.org/10.3390/app12042003