1. The Vulnerability of Semantic Segmentation Networks to Adversarial Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing
- Author
Nikhil Kapoor, Andreas Bär, Peter Schlicht, Tim Fingscheidt, Jonas Löhdefink, Serin Varghese, and Fabian Hüger
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Computing Methodologies: Image Processing and Computer Vision, sensors, Machine learning, Convolutional neural network, Adversarial system, cameras, Segmentation, Electrical and Electronic Engineering, image segmentation, semantics, Vulnerability (computing), perturbation methods, Deception, Signal Processing, task analysis, Artificial intelligence, autonomous vehicles
- Abstract
Enabling autonomous driving (AD) is one of today's greatest technological challenges. AD is a complex task accomplished by several functionalities, with environment perception being one of its core functions. Environment perception is usually performed by combining the semantic information captured by several sensors, e.g., lidar or camera. The semantic information from the respective sensor can be extracted using convolutional neural networks (CNNs) for dense prediction. CNNs have consistently shown state-of-the-art performance on several vision-related tasks, such as semantic segmentation of traffic scenes using nothing but the red-green-blue (RGB) images provided by a camera. Although CNNs obtain state-of-the-art performance on clean images, almost imperceptible changes to the input, referred to as adversarial perturbations, may lead to fatal deception. The goal of this article is to illuminate the vulnerability of CNNs used for semantic segmentation to adversarial attacks and to share insights into some of the known adversarial defense strategies. We aim to clarify the advantages and disadvantages of applying CNNs for environment perception in AD and thereby motivate future research in this field.
Comment: IEEE Signal Processing Magazine (Volume 38, Issue 1, Jan. 2021), pp. 42-52
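To illustrate what an adversarial perturbation against a dense-prediction CNN can look like, the sketch below applies the well-known single-step fast gradient sign method (FGSM) to a semantic segmentation model in PyTorch. This is a minimal sketch, not the attack or defense studied in the article; the model interface (image in, per-pixel logits out), the function name fgsm_perturb, and the epsilon value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def fgsm_perturb(model, image, labels, epsilon=2.0 / 255.0):
    """One-step FGSM attack on a semantic segmentation network (sketch).

    image:   tensor of shape (1, 3, H, W) with values in [0, 1]
    labels:  ground-truth class map of shape (1, H, W), dtype torch.long
    epsilon: L-infinity bound on the per-pixel perturbation (assumed value)
    """
    image = image.clone().detach().requires_grad_(True)

    logits = model(image)                    # (1, num_classes, H, W)
    loss = F.cross_entropy(logits, labels)   # mean per-pixel cross-entropy
    loss.backward()

    # Step in the direction that increases the loss, then clip back to the
    # valid image range so the change stays almost imperceptible.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

In practice, the perturbed image is fed back into the segmentation network and its prediction is compared with the prediction on the clean image to quantify how strongly the small input change degrades the output.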
- Published
- 2021