
BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning

Authors:
Liu, Yi
Wang, Cong
Yuan, Xingliang
Publication Year:
2024

Abstract

Federated Learning (FL) is susceptible to poisoning attacks, wherein compromised clients manipulate the global model by modifying local datasets or sending manipulated model updates. Experienced defenders can readily detect and mitigate the poisoning effects of such malicious behaviors using Byzantine-robust aggregation rules. However, poisoning attacks that involve no such detectable behaviors remain largely unexplored for Byzantine-robust FL. This paper addresses the challenging problem of poisoning Byzantine-robust FL by introducing catastrophic forgetting. To fill this gap, we first formally define generalization error and establish its connection to catastrophic forgetting, paving the way for a clean-label data poisoning attack named BadSampler. This attack uses only clean-label data (i.e., no poisoned data) to poison Byzantine-robust FL: the adversary selectively samples training data with high loss to feed model training and thereby maximizes the model's generalization error. We formulate the attack as an optimization problem and present two elegant adversarial sampling strategies, Top-$\kappa$ sampling and meta-sampling, to approximately solve it. Additionally, our formal error upper bound and time complexity analysis demonstrate that our design preserves attack utility with high efficiency. Extensive evaluations on two real-world datasets illustrate the effectiveness and performance of our proposed attacks.

Comment: In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), August 25-29, 2024, Barcelona, Spain
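The high-loss sampling idea behind Top-$\kappa$ sampling can be illustrated with a minimal sketch: a compromised client ranks its own clean-label samples by per-sample loss and trains only on the hardest ones. This is not the authors' implementation; the use of PyTorch and the function name `top_k_sample` are assumptions for illustration.

```python
import torch

def top_k_sample(model, loss_fn, inputs, labels, k):
    """Hypothetical sketch of Top-k adversarial sampling:
    rank clean-label samples by per-sample loss and keep the
    k highest-loss ones to feed the next local training step."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        # Per-sample losses (reduction="none") so samples can be ranked.
        losses = loss_fn(logits, labels)  # shape: [batch]
    # Indices of the k hardest clean samples.
    _, idx = torch.topk(losses, k)
    return inputs[idx], labels[idx]

# Usage sketch: the client biases its local update by training only
# on its hardest clean samples, pushing up generalization error
# without ever injecting poisoned data.
# loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
# hard_x, hard_y = top_k_sample(model, loss_fn, x_batch, y_batch, k=32)
```

Because every sample is clean and correctly labeled, no individual update is obviously malicious, which is why such behavior can slip past Byzantine-robust aggregation rules.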

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.12222
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3637528.3671879