Bandit-based data poisoning attack against federated learning for autonomous driving models.
- Author
- Wang, Shuo; Li, Qianmu; Cui, Zhiyong; Hou, Jun; Huang, Chanying
- Subjects
- *FEDERATED learning, *DATA privacy, *NONLINEAR regression, *NONLINEAR functions, *TRAFFIC safety, *AUTONOMOUS vehicles, *DRIVERLESS cars
- Abstract
In Internet of Things (IoT) applications, federated learning is commonly used to train models in a distributed, privacy-preserving manner. Recently, federated learning has been broadly applied to autonomous driving for training intelligent decision models without transmitting local data to remote servers. Although federated learning provides a safer training paradigm for protecting data privacy in autonomous driving, the model training process remains vulnerable to poisoning attacks from vehicle clients. Studying poisoning attacks is therefore beneficial for enhancing the robustness of the training process so that it generates reliable decisions for safe driving. Until now, only a few studies have proposed poisoning attacks against classification models in federated learning scenarios. However, those poisoning attacks against classification tasks cannot be directly applied to regression tasks in a federated learning framework, especially autonomous driving tasks such as steering angle control and brake control. The biggest challenge is that the output of a non-linear regression model in federated learning is a dynamic sequential value determined by an online-updated non-linear function; thus, even a minor attack can affect the overall inference outputs of the non-linear function, causing the attack to lose its stealth. To address these challenges, this paper proposes an ATTack against Federated Learning-based Autonomous Vehicle framework (ATT_FLAV) to evaluate and enhance the robustness of federated learning-based autonomous driving models, taking the steering angle control task as a representative non-linear regression task to illustrate the methodology. In the proposed framework, a bandit-based Attack Region-UCB (AR-UCB) algorithm is designed for dynamic data poisoning attacks against the non-linear regression model. This is a black-box attack strategy that chooses the target attack label region dynamically in each federated learning round based on historical attack experience.
Compared with baseline poisoning attacks, and evaluated for robustness under defense schemes, the proposed poisoning attack strategy achieves superior attack performance via continuous data poisoning attacks against the federated learning framework. [ABSTRACT FROM AUTHOR]
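The paper itself does not give the AR-UCB pseudocode in this abstract, but the idea it describes, treating candidate attack label regions as bandit arms and selecting one per federated round from historical attack feedback, can be sketched with a standard UCB1 rule. Everything below is a hypothetical illustration under assumed names: `K` discretized label regions, a scalar "attack impact" reward, and a simulated feedback signal stand in for details not in the source.

```python
import math
import random

def ucb_select(counts, rewards, t, c=2.0):
    """UCB1: play each arm (region) once, then pick the arm with the
    highest mean reward plus exploration bonus sqrt(c * ln t / n_k)."""
    for k, n in enumerate(counts):
        if n == 0:
            return k  # explore any region not yet tried
    scores = [
        rewards[k] / counts[k] + math.sqrt(c * math.log(t) / counts[k])
        for k in range(len(counts))
    ]
    return max(range(len(counts)), key=lambda k: scores[k])

# Hypothetical attack loop: one region choice per federated round.
K, ROUNDS = 5, 200
counts, rewards = [0] * K, [0.0] * K
random.seed(0)
# Stand-in for the unknown per-region attack impact the attacker observes.
true_payoff = [0.1, 0.3, 0.9, 0.2, 0.4]

for t in range(1, ROUNDS + 1):
    region = ucb_select(counts, rewards, t)
    observed = random.gauss(true_payoff[region], 0.1)  # noisy feedback
    counts[region] += 1
    rewards[region] += observed

best = max(range(K), key=lambda k: counts[k])  # region the bandit settles on
```

In this toy run the bandit concentrates its poisoning budget on the region with the largest simulated impact, which mirrors the abstract's claim that the target region is chosen dynamically from historical attack experience rather than fixed in advance.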
- Published
- 2023