1. XMAM: X-raying models with a matrix to reveal backdoor attacks for federated learning
- Authors
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, and Xiali Hei
- Subjects
Federated learning, Backdoor attacks, Aggregation methods, Information technology, T58.5-58.64
- Abstract
Federated Learning (FL), a burgeoning technology, has received increasing attention due to its privacy-protection capability. However, its base algorithm, FedAvg, is vulnerable to so-called backdoor attacks. Prior researchers have proposed several robust aggregation methods; unfortunately, because backdoor attacks are inherently stealthy, many of these methods cannot defend against them. Moreover, attackers have recently proposed hiding methods that further improve the stealthiness of backdoor attacks, causing all existing robust aggregation methods to fail. To tackle the threat of backdoor attacks, we propose a new aggregation method, X-raying Models with A Matrix (XMAM), to reveal the malicious local model updates submitted by backdoor attackers. We observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates; therefore, unlike existing aggregation algorithms, we focus on the Softmax layer's output, where backdoor attackers find it difficult to hide their malicious behavior. Specifically, like a medical X-ray examination, we investigate the collected local model updates by feeding a matrix as input to obtain their Softmax-layer outputs, and we then exclude updates whose outputs are abnormal by clustering. Without any training dataset on the server, extensive evaluations show that XMAM can effectively distinguish malicious local model updates from benign ones. For instance, where other methods fail to defend against backdoor attacks with no more than 20% malicious clients, our method can tolerate 45% malicious clients in the black-box mode and about 30% in Projected Gradient Descent (PGD) mode. Moreover, under adaptive attacks, the results demonstrate that XMAM can still complete the global model training task even with 40% malicious clients. Finally, we analyze our method's screening complexity and compare its actual screening time with that of other methods; the results show that XMAM is about 10 to 10,000 times faster than existing methods.
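To make the screening step concrete, below is a minimal sketch in PyTorch of an XMAM-style check, based only on the abstract's description: each submitted update is loaded into a copy of the global model, a fixed matrix is fed through it, and the resulting Softmax outputs are clustered so the abnormal minority can be excluded. The probe matrix (all-ones), the clustering choice (KMeans with two clusters), and the name `screen_updates` are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch of XMAM-style screening; probe matrix, clusterer,
# and function names are assumptions inferred from the abstract.
import copy

import numpy as np
import torch
from sklearn.cluster import KMeans


def screen_updates(global_model, local_updates, input_shape=(1, 3, 32, 32)):
    """Return indices of local updates whose Softmax outputs look benign.

    global_model  -- current global model (torch.nn.Module)
    local_updates -- list of client state_dicts
    input_shape   -- shape of the fixed probe matrix (assumed here)
    """
    probe = torch.ones(input_shape)  # the fixed matrix used as the "X-ray"
    outputs = []
    for update in local_updates:
        model = copy.deepcopy(global_model)
        model.load_state_dict(update)
        model.eval()
        with torch.no_grad():
            logits = model(probe)
            outputs.append(torch.softmax(logits, dim=1).flatten().numpy())

    # Cluster the Softmax outputs and keep the larger cluster as the benign
    # majority (assumes fewer than half of the clients are malicious).
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.stack(outputs))
    benign_label = np.bincount(labels).argmax()
    return [i for i, lbl in enumerate(labels) if lbl == benign_label]
```

The server would then aggregate only the updates at the returned indices (e.g., with FedAvg), which matches the abstract's claim that screening requires no training data on the server side.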
- Published
2024