
Are Transformers More Robust? Towards Exact Robustness Verification for Transformers

Authors:
Liao, Brian Hsuan-Cheng
Cheng, Chih-Hong
Esen, Hasan
Knoll, Alois
Publication Year:
2022

Abstract

As an emerging type of Neural Network (NN), Transformers are used in many domains ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness of Transformers, a key characteristic, since low robustness may raise safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce the computation of their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments on a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers against that of the more conventional Multi-Layer Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which calls for careful consideration when selecting appropriate NN architectures for safety-critical applications.

Comment: Accepted at SAFECOMP 2023; 14 pages (Springer LNCS format), 3 figures, 2 tables, 2 algorithms
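For readers unfamiliar with the attention variant named in the abstract: Sparsemax (Martins & Astudillo, 2016) replaces Softmax's exponential normalization with a Euclidean projection onto the probability simplex. The resulting mapping is piecewise linear, so it can be encoded exactly with mixed-integer constraints; the quadratic constraints in the MIQCP plausibly come from the bilinear products inside attention (query-key dot products and attention-weighted sums), though the record does not include the paper's encoding. The sketch below only illustrates the standard closed-form Sparsemax computation, not the authors' verification procedure.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax of Martins & Astudillo (2016): Euclidean projection of z
    onto the probability simplex, yielding sparse attention-like weights."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                  # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = k[1.0 + k * z_sorted > cumsum]     # indices kept in the support
    k_z = support[-1]                            # support size
    tau = (cumsum[k_z - 1] - 1.0) / k_z          # threshold
    return np.maximum(z - tau, 0.0)              # entries below tau become exactly 0

# Unlike Softmax, Sparsemax assigns exactly zero weight to weak scores.
print(sparsemax([2.0, 1.1, 0.1]))   # prints [0.95 0.05 0.  ]
```

Because each output component is the maximum of an affine expression and zero, Sparsemax admits an exact mixed-integer encoding (e.g. via big-M or indicator constraints), in contrast to the smooth, transcendental Softmax, which generally has to be approximated in exact verification.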

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2202.03932
Document Type:
Working Paper