
Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats

Authors:
Wu, Chen
Li, Xi
Wang, Jiaqi
Publication Year:
2024

Abstract

Federated Learning (FL) addresses critical issues in machine learning related to data privacy and security, yet it suffers from data insufficiency and imbalance under certain circumstances. The emergence of foundation models (FMs) offers potential solutions to the limitations of existing FL frameworks, e.g., by generating synthetic data for model initialization. However, due to the inherent safety concerns of FMs, integrating FMs into FL could introduce new risks, which remain largely unexplored. To address this gap, we conduct the first investigation of the vulnerability of FM-integrated FL (FM-FL) under adversarial threats. Based on a unified framework of FM-FL, we introduce a novel attack strategy that exploits the safety issues of FMs to compromise FL client models. Through extensive experiments with well-known models and benchmark datasets in both the image and text domains, we reveal the high susceptibility of FM-FL to this new threat under various FL configurations. Furthermore, we find that existing FL defense strategies offer limited protection against this novel attack. This research highlights the critical need for enhanced security measures in FL in the era of FMs.

Comment: Chen Wu and Xi Li contributed equally. The corresponding author is Jiaqi Wang.
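The FM-FL pattern the abstract describes, a foundation model supplying synthetic data that seeds FL client models before standard aggregation, can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's actual framework: the names `generate_synthetic_data`, `local_train`, and `fedavg`, the model, and the data shapes are all assumptions made for clarity.

```python
# Minimal FM-FL sketch: an FM-like source provides synthetic training data,
# clients train locally on it, and the server aggregates with FedAvg.
# All names and choices here are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def generate_synthetic_data(n_samples: int, dim: int = 16):
    """Stand-in for a foundation model producing labeled synthetic data.

    A real FM (e.g., a generative model) would be queried here; this stub
    just samples random features and labels. If the FM is compromised,
    this is the point where poisoned samples would enter FM-FL.
    """
    x = torch.randn(n_samples, dim)
    y = torch.randint(0, 2, (n_samples,))
    return x, y

def local_train(model: nn.Module, data, epochs: int = 1):
    """One client's local update on (possibly FM-synthesized) data."""
    x, y = data
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    """Average client state dicts (standard FedAvg aggregation)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(16, 2)
for round_idx in range(3):
    client_states = []
    for _ in range(4):  # four clients, each starting from the global model
        client = copy.deepcopy(global_model)
        # Clients train on FM-generated synthetic data: because every client
        # inherits the FM's output, any flaw in the FM propagates through
        # aggregation -- the attack surface the paper investigates.
        client_states.append(local_train(client, generate_synthetic_data(64)))
    global_model.load_state_dict(fedavg(client_states))
```

The sketch also makes the threat model concrete: since all clients draw on the same FM, a single compromised generator influences every local update, which is why per-client defenses offer limited protection in this setting.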

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.10375
Document Type:
Working Paper