
Data Poisoning and Leakage Analysis in Federated Learning

Authors:
Wei, Wenqi
Huang, Tiansheng
Yahn, Zachary
Singhal, Anoop
Loper, Margaret
Liu, Ling
Publication Year:
2024

Abstract

Data poisoning and leakage risks impede the massive deployment of federated learning in the real world. This chapter reveals the truths and pitfalls of understanding two dominant threats: training data privacy intrusion and training data poisoning. We first investigate the training data privacy threat and present our observations on when and how training data may be leaked during the course of federated training. One promising defense strategy is to perturb the raw gradient update by adding controlled randomized noise before it is shared in each round of federated learning. We discuss the importance of determining the proper amount of randomized noise and the proper location at which to add it for effective mitigation of gradient leakage threats against training data privacy. We then review and compare different training data poisoning threats and analyze why and when such data-poisoning-induced model Trojan attacks may cause detrimental damage to the performance of the global model. We categorize and compare representative poisoning attacks and the effectiveness of their mitigation techniques, delivering an in-depth understanding of the negative impact of data poisoning. Finally, we demonstrate the potential of dynamic model perturbation in simultaneously ensuring privacy protection, poisoning resilience, and model performance. The chapter concludes with a discussion of additional risk factors in federated learning, including the negative impact of skewness, data and algorithmic biases, and misinformation in training data. Powered by empirical evidence, our analytical study offers transformative insights into effective privacy protection and security assurance strategies for attack-resilient federated learning.

Comment: Chapter of Handbook of Trustworthy Federated Learning
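The noise-injection defense mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the chapter's exact method: the function name, the clipping bound, and the Gaussian noise scale below are all illustrative assumptions, and the chapter's central question is precisely how much noise to add and where to add it.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's gradient update to a norm bound, then add
    Gaussian noise before it is shared with the server.

    Illustrative sketch only: clip_norm and noise_std are hypothetical
    defaults, not values recommended by the chapter.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Scale the update down so its L2 norm never exceeds clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Add zero-mean Gaussian noise of the same shape as the update.
    noise = rng.normal(0.0, noise_std, size=grad.shape)
    return clipped + noise
```

In this sketch, each client would call `perturb_gradient` on its raw update each round before transmission, so the server (or an eavesdropper) only ever observes the noised update rather than the raw gradient from which training data could be reconstructed.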

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.13004
Document Type:
Working Paper
Full Text:
https://doi.org/10.1007/978-3-031-58923-2_3