FLMAAcBD: Defending against backdoors in Federated Learning via Model Anomalous Activation Behavior Detection.
- Source: Knowledge-Based Systems, Apr 2024, Vol. 289
- Publication Year: 2024
Abstract
- Federated learning (FL) is susceptible to backdoor attacks, in which malicious model updates are covertly inserted into the model's aggregation process, causing inaccurate predictions for certain inputs and compromising the integrity of FL. Existing defenses, which aim to identify and remove potentially poisoned model updates, may incorrectly exclude model updates from benign clients with heterogeneous data even in the absence of an attack, compromising the model's usability and resulting in unfair treatment of these benign clients. For defense approaches that rely on adaptive model parameter clipping and noise injection, the continuous injection of noise also considerably degrades the model's usability. To address these issues, a novel defense named FLMAAcBD (Federated Learning Model Anomalous Activation Behavior Detection) is proposed, which adopts a backdoor anomaly detection module for the global model and a backdoor removal module for potentially malicious model updates to defend against backdoor attacks. Extensive experiments on three datasets validate that FLMAAcBD can defend against various backdoor attacks. Moreover, compared with the baseline methods, FLMAAcBD has less impact on the model's usability. [ABSTRACT FROM AUTHOR]
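The abstract does not specify how anomalous activation behavior is scored. As a rough illustration only (not the paper's method), the following sketch flags client updates whose mean activations on a small clean probe set deviate strongly from the coordinate-wise median, using a median-absolute-deviation cutoff; the function name, the probe-set setup, and the threshold are all hypothetical assumptions.

```python
import numpy as np

def flag_anomalous_updates(activations, thresh=2.5):
    """Hypothetical activation-based outlier screen (not FLMAAcBD itself).

    activations: array of shape (n_clients, n_units), each row the mean
    hidden-layer activations a client's updated model produces on a small
    clean probe set. Returns a boolean mask of suspected backdoored clients.
    """
    med = np.median(activations, axis=0)          # robust "typical" activation
    dists = np.linalg.norm(activations - med, axis=1)  # per-client deviation
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = (dists - np.median(dists)) / mad     # MAD-normalized outlier score
    return scores > thresh                        # True = potentially poisoned
```

A server could drop (or down-weight) flagged updates before aggregation; unlike noise-injection defenses, nothing is added to benign updates, so the clean model's usability is untouched in this sketch.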
Details
- Language: English
- ISSN: 0950-7051
- Volume: 289
- Database: Academic Search Index
- Journal: Knowledge-Based Systems
- Publication Type: Academic Journal
- Accession number: 175872766
- Full Text: https://doi.org/10.1016/j.knosys.2024.111511