1. Adversarial Robustness Guarantees for Quantum Classifiers
- Author
Dowling, Neil, West, Maxwell T., Southwell, Angus, Nakhl, Azar C., Sevior, Martin, Usman, Muhammad, and Modi, Kavan
- Subjects
Quantum Physics, Condensed Matter - Statistical Mechanics, Computer Science - Machine Learning, Nonlinear Sciences - Chaotic Dynamics
- Abstract
Despite their ever more widespread deployment throughout society, machine learning algorithms remain critically vulnerable to being spoofed by subtle adversarial tampering with their input data. The prospect of near-term quantum computers being capable of running quantum machine learning (QML) algorithms has therefore generated intense interest in their adversarial vulnerability. Here we show that quantum properties of QML algorithms can confer fundamental protections against such attacks, in certain scenarios guaranteeing robustness against classically armed adversaries. We leverage tools from many-body physics to identify the quantum sources of this protection. Our results offer a theoretical underpinning of recent evidence suggesting quantum advantages in the search for adversarial robustness. In particular, we prove that quantum classifiers are: (i) protected against weak perturbations of data drawn from the trained distribution, (ii) protected against local attacks if they are insufficiently scrambling, and (iii) protected against universal adversarial attacks if they are sufficiently quantum chaotic. Our analytic results are supported by numerical evidence demonstrating the applicability of our theorems and the resulting robustness of a quantum classifier in practice. This line of inquiry constitutes a concrete pathway to advantage in QML, orthogonal to the usually sought improvements in model speed or accuracy.
- Comment
9+12 pages, 3 figures. Comments welcome
- Published
2024