Generating Adversarial Point Clouds on Multi-modal Fusion Based 3D Object Detection Model
- Source: Information and Communications Security ISBN: 9783030868895, ICICS (1)
- Publication Year: 2021
- Publisher: Springer International Publishing, 2021.
Abstract
- In autonomous vehicles (AVs), a critical stage of the perception system is to leverage multi-modal fusion (MMF) detectors, which fuse data from LiDAR (Light Detection and Ranging) and camera sensors to perform 3D object detection. While single-modal (LiDAR-based and camera-based) models have been found vulnerable to adversarial attacks, there are limited studies on the adversarial robustness of MMF models. Recent work has proposed a general spoofing attack on LiDAR-based perception that exploits ignored occlusion patterns in point clouds. Inspired by this, we attack the LiDAR channel alone to fool the MMF model into detecting a fake near-front object with a high confidence score. We perform the first study of the robustness of a popular MMF model against the above attack and find that it fails due to correction by the camera channel. We then propose a black-box attack method that generates adversarial point clouds with few points and show that the defect still exists in the MMF architecture. We evaluate the attack effectiveness of different combinations of point counts and distances and generate universal adversarial examples at the best distance of 4 m, which achieve attack success rates of more than 95% and average confidence scores over 0.9 on the KITTI validation set when the number of points exceeds 30. Furthermore, we verify the generality of our attack and the transferability of the generated universal adversarial point clouds across models.
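
The abstract describes injecting a small cluster of spoofed LiDAR points a few metres in front of the vehicle. As a rough illustration only, the sketch below shows what appending such a cluster to a KITTI-style point cloud might look like; the function name, the box dimensions, and the file path are hypothetical, and the actual adversarial point placement optimized in the paper is not reproduced here.

```python
import numpy as np

def inject_spoofed_points(point_cloud, num_points=30, distance=4.0, seed=0):
    """Append a small cluster of spoofed points to a LiDAR scan (illustrative only).

    point_cloud: (N, 4) array of [x, y, z, intensity] in the LiDAR frame
                 (x forward, y left, z up, as in KITTI).
    num_points:  number of fake points; the abstract reports high success
                 rates once more than ~30 points are used.
    distance:    forward distance of the fake cluster in metres (4 m is the
                 best-performing distance reported in the abstract).
    """
    rng = np.random.default_rng(seed)
    # Scatter points inside a small box centred `distance` metres ahead of
    # the sensor. The paper's contribution is how these points are chosen;
    # uniform random placement is only a stand-in for illustration.
    offsets = rng.uniform(low=[-0.8, -0.8, -0.5], high=[0.8, 0.8, 0.5],
                          size=(num_points, 3))
    xyz = offsets + np.array([distance, 0.0, 0.0])
    intensity = rng.uniform(0.2, 0.8, size=(num_points, 1))
    spoofed = np.hstack([xyz, intensity]).astype(point_cloud.dtype)
    return np.vstack([point_cloud, spoofed])

# Hypothetical usage with a KITTI velodyne scan stored as a .bin file.
scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
adv_scan = inject_spoofed_points(scan, num_points=30, distance=4.0)
```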
Details
- ISBN: 978-3-030-86889-5
- ISBNs: 9783030868895
- Database: OpenAIRE
- Journal: Information and Communications Security ISBN: 9783030868895, ICICS (1)
- Accession number: edsair.doi...........a2e4cade676e412e267eb935c6b1fd4c