
Evaluating Adversarial Attacks on Traffic Sign Classifiers beyond Standard Baselines

Authors:
Pavlitska, Svetlana
Müller, Leopold
Zöllner, J. Marius
Publication Year:
2024

Abstract

Adversarial attacks on traffic sign classification models were among the first to be successfully demonstrated in the real world. Since then, research in this area has largely been restricted to repeating baseline models, such as LISA-CNN or GTSRB-CNN, and similar experimental settings, including white and black patches on traffic signs. In this work, we decouple model architectures from the datasets and additionally evaluate generic models to enable a fair comparison. Furthermore, we compare two attack settings, inconspicuous and visible, which are usually considered without direct comparison. Our results show that standard baselines like LISA-CNN or GTSRB-CNN are significantly more susceptible than the generic ones. We therefore suggest evaluating new attacks on a broader spectrum of baselines in the future. Our code is available at \url{https://github.com/KASTEL-MobilityLab/attacks-on-traffic-sign-recognition/}.

Comment: Accepted for publication at ICMLA 2024

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.09150
Document Type:
Working Paper