
FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in Federated Learning

Authors :
Wang, Zihui
Wang, Zheng
Lyu, Lingjuan
Peng, Zhaopeng
Yang, Zhicheng
Wen, Chenglu
Yu, Rongshan
Wang, Cheng
Fan, Xiaoliang
Publication Year :
2024

Abstract

Collaborative fairness stands as an essential element in federated learning to encourage client participation by equitably distributing rewards based on individual contributions. Existing methods primarily focus on adjusting gradient allocations among clients to achieve collaborative fairness. However, they frequently overlook crucial factors such as maintaining consistency across local models and catering to the diverse requirements of high-contributing clients. This oversight inevitably decreases both fairness and model accuracy in practice. To address these issues, we propose FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness, backed by a theoretical convergence guarantee. First, we present the concept of "bounded collaborative fairness (BCF)", which ensures fairness by tailoring rewards to individual clients based on their contributions. Second, to implement the BCF, we design a submodel allocation module with a theoretical guarantee of fairness. This module incentivizes high-contributing clients with high-performance submodels containing a diverse range of crucial neurons, thereby preserving consistency across local models. Third, we further develop a dynamic aggregation module to adaptively aggregate submodels, ensuring the equitable treatment of low-frequency neurons and consequently enhancing overall model accuracy. Extensive experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy. We see this work as a significant step towards incentivizing broader client participation in federated learning. The source code is available at https://github.com/wangzihuixmu/FedSAC.

Comment: Accepted by KDD'24
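To make the two mechanisms named in the abstract more concrete, below is a minimal, hypothetical NumPy sketch of (a) allocating each client a neuron mask whose size grows with its contribution, with lower and upper bounds loosely echoing "bounded collaborative fairness", and (b) aggregating each neuron only over the clients that actually received it, so rarely allocated ("low-frequency") neurons are not diluted. All function names, the allocation rule, and the bound values are assumptions for illustration; this is not the authors' FedSAC implementation (see the linked repository for that).

```python
# Illustrative sketch only: a simplified caricature of contribution-proportional
# submodel allocation and frequency-aware aggregation. NOT the authors' FedSAC code;
# the allocation rule, bounds, and function names are assumptions.
import numpy as np

def allocate_submodels(contributions, n_neurons, lower=0.3, upper=1.0, rng=None):
    """Assign each client a neuron mask whose size grows with its contribution.
    `lower`/`upper` bound the fraction of neurons any client can receive."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.asarray(contributions, dtype=float)
    frac = lower + (upper - lower) * (c - c.min()) / max(c.max() - c.min(), 1e-12)
    masks = []
    for f in frac:
        k = max(1, int(round(f * n_neurons)))
        idx = rng.choice(n_neurons, size=k, replace=False)  # neurons this client trains
        mask = np.zeros(n_neurons, dtype=bool)
        mask[idx] = True
        masks.append(mask)
    return masks

def aggregate(client_weights, masks):
    """Average each neuron only over the clients that held it, so low-frequency
    neurons are not averaged against zeros from clients that never saw them."""
    total = np.zeros_like(client_weights[0])
    counts = np.zeros(total.shape[0])
    for w, m in zip(client_weights, masks):
        total[m] += w[m]
        counts += m
    counts = np.maximum(counts, 1)  # guard against neurons assigned to no client
    return total / counts[:, None]

if __name__ == "__main__":
    n_clients, n_neurons, dim = 4, 8, 3
    contributions = [0.2, 0.5, 0.7, 1.0]  # toy contribution scores
    masks = allocate_submodels(contributions, n_neurons, rng=np.random.default_rng(0))
    weights = [np.random.default_rng(i).normal(size=(n_neurons, dim)) for i in range(n_clients)]
    print(aggregate(weights, masks).shape)  # (8, 3)
```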

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.18291
Document Type :
Working Paper