Joint Adversarial Example and False Data Injection Attacks for State Estimation in Power Systems
- Authors
Buhong Wang, Mete Ozay, Kunrui Cao, Zhen Wang, Jing Li, and Jiwei Tian
- Subjects
State variable, Computer science, Deep learning, Detector, Electric power system, Adversarial system, Computer engineering, Neural networks, Artificial intelligence, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Electrical and Electronic Engineering, Software, Information Systems
- Abstract
Although state estimation with a bad data detector (BDD) is a key procedure in power systems, the detector is vulnerable to false data injection attacks (FDIAs). Numerous deep learning methods have been proposed to detect such attacks. However, deep neural networks are themselves susceptible to adversarial attacks, or adversarial examples, where slight changes in the inputs may cause sharp changes in the outputs, even for well-trained networks. This article introduces joint adversarial example and FDIA attacks (AFDIAs) to explore various attack scenarios for state estimation in power systems. Because perturbations added directly to measurements are likely to be flagged by BDDs, the proposed approach adds perturbations to the state variables, which guarantees that the attack remains stealthy to BDDs. Malicious data that are stealthy to both BDDs and deep learning-based detectors can then be generated. Theoretical and experimental results show that the proposed state-perturbation-based AFDIA method (S-AFDIA) carries out attacks that are stealthy to both conventional BDDs and deep learning-based detectors, while the proposed measurement-perturbation-based adversarial FDIA method (M-AFDIA) succeeds when only deep learning-based detectors are used. Comparative experiments show that the proposed methods outperform state-of-the-art methods, and the ultimate effect of the attacks can be further optimized using the joint attack formulation.
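The stealthiness claim for state-perturbation-based injections rests on the classical FDIA argument: under the linear (DC) measurement model z = Hx + e, an injection of the form a = H·Δc shifts the estimated state by Δc while leaving the BDD residual unchanged. The sketch below illustrates only this building block, not the paper's full S-AFDIA optimization; the measurement matrix, noise level, and dimensions are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of why a state-perturbation injection is stealthy to a BDD
# under the DC model z = H x + e. H, the noise level, and the dimensions are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)
n_meas, n_state = 8, 4
H = rng.normal(size=(n_meas, n_state))            # measurement (Jacobian) matrix
x_true = rng.normal(size=n_state)                 # true system state
z = H @ x_true + 0.01 * rng.normal(size=n_meas)   # noisy measurements

def estimate_state(z, H):
    """Least-squares state estimate (unit weights for simplicity)."""
    return np.linalg.lstsq(H, z, rcond=None)[0]

def bdd_residual(z, H):
    """L2-norm residual compared against a threshold by the BDD."""
    x_hat = estimate_state(z, H)
    return np.linalg.norm(z - H @ x_hat)

# State-perturbation-based injection: pick a perturbation of the *state*,
# then inject a = H @ delta_c into the measurements.
delta_c = 0.1 * rng.normal(size=n_state)          # hypothetical state perturbation
a = H @ delta_c                                   # resulting false data vector
z_attacked = z + a

# The residual is unchanged: (z + H dc) - H (x_hat + dc) = z - H x_hat.
print(f"residual (clean):    {bdd_residual(z, H):.6f}")
print(f"residual (attacked): {bdd_residual(z_attacked, H):.6f}")
```

Crafting the measurement-space perturbation that additionally evades a deep learning-based detector (the adversarial-example part of S-AFDIA and M-AFDIA) is the contribution of the paper and is not reproduced here.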
- Published
2022