Backdoor Attacks on Federated Meta-Learning
- Publication Year :
- 2020
- Publisher :
- arXiv, 2020.
Abstract
- Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this paper, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even 1-shot attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
- Comment: 13 pages, 19 figures, NeurIPS Workshop on Scalability, Privacy, and Security in Federated Learning (NeurIPS-SpicyFL), 2020
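
For intuition only, below is a minimal sketch of the matching-network-style prediction the abstract describes: a query is classified by the similarity of its embedded features to a small labeled support set, so no class-decision logic lives in the shared model itself. The names (`embed`, `matching_classify`) and the cosine-similarity attention are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def matching_classify(embed, query, support_x, support_y, num_classes):
    """Predict the query's class from cosine similarity to a labeled
    support set, in the spirit of matching networks.

    embed      -- feature extractor (hypothetical stand-in for the shared
                  federated model with its decision layer removed)
    query      -- tensor of shape (d_in,), the input to classify
    support_x  -- tensor of shape (k, d_in), labeled support examples
    support_y  -- tensor of shape (k,), integer class labels in [0, C)
    """
    q = F.normalize(embed(query.unsqueeze(0)), dim=1)    # (1, d)
    s = F.normalize(embed(support_x), dim=1)             # (k, d)
    sims = q @ s.t()                                     # (1, k) cosine similarities
    attn = F.softmax(sims, dim=1)                        # attention over the support set
    one_hot = F.one_hot(support_y, num_classes).float()  # (k, C)
    probs = attn @ one_hot                               # (1, C) attention mass per class
    return probs.argmax(dim=1).item()

if __name__ == "__main__":
    torch.manual_seed(0)
    embed = torch.nn.Linear(16, 8)   # toy feature extractor for the demo
    support_x = torch.randn(10, 16)
    support_y = torch.randint(0, 5, (10,))
    query = support_x[3] + 0.01 * torch.randn(16)
    print(matching_classify(embed, query, support_x, support_y, num_classes=5))
```

In a deployment along these lines, the support set would presumably be drawn from the user's own benign examples, which is what limits the reach of a poisoned shared model: the attacker can corrupt the features, but the final class decision is made locally against clean labeled data.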
- Subjects :
- FOS: Computer and information sciences
- Machine Learning (cs.LG)
- Cryptography and Security (cs.CR)
- Distributed, Parallel, and Cluster Computing (cs.DC)
- Machine Learning (stat.ML)
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....444ee75a4ab003eb6e1efdcd616f3ab9
- Full Text :
- https://doi.org/10.48550/arxiv.2006.07026