
Fed-RHLP: Enhancing Federated Learning with Random High-Local Performance Client Selection for Improved Convergence and Accuracy.

Authors :
Sittijuk, Pramote
Tamee, Kreangsak
Source :
Symmetry (20738994). Sep2024, Vol. 16 Issue 9, p1181. 14p.
Publication Year :
2024

Abstract

We introduce a random high-local-performance client selection strategy, termed Fed-RHLP. This approach gives higher-performance clients greater opportunity to contribute by updating and sharing their local models for global aggregation. At the same time, it enables lower-performance clients to participate collaboratively, with selection probability proportional to their local performance on a roulette wheel (RW). Symmetry considerations in federated learning differ by data distribution: with IID data, symmetry is naturally present, making model updates easier to aggregate; with Non-IID data, asymmetries can impact performance and fairness, and solutions include data balancing, adaptive algorithms, and robust aggregation methods. Fed-RHLP enhances federated learning by allowing lower-performance clients to contribute in proportion to their local performance, fostering inclusivity and collaboration in both IID and Non-IID scenarios. Through experiments, we demonstrate that Fed-RHLP accelerates convergence and improves the accuracy of the final aggregated global model, effectively mitigating the challenges posed by both IID and Non-IID data distributions. [ABSTRACT FROM AUTHOR]
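The abstract describes selecting clients with probability proportional to their local performance via a roulette wheel. As a rough illustration only (the paper's exact procedure, scoring metric, and function names are not given in this record; `roulette_wheel_select` and the accuracy-based scores below are assumptions), classical roulette-wheel selection over client performance scores can be sketched as:

```python
import random

def roulette_wheel_select(local_scores, num_selected, rng=None):
    """Pick distinct clients with probability proportional to local performance.

    local_scores: dict mapping client id -> local performance (e.g. accuracy).
    Higher-performing clients are more likely to be chosen, but every client
    keeps a non-zero slice of the wheel, so low performers can still participate.
    """
    rng = rng or random.Random()
    clients = list(local_scores)
    total = sum(local_scores.values())
    weights = [local_scores[c] / total for c in clients]

    chosen = set()
    while len(chosen) < min(num_selected, len(clients)):
        # Spin the wheel: walk the cumulative distribution until it covers r.
        r = rng.random()
        acc = 0.0
        for client, w in zip(clients, weights):
            acc += w
            if r <= acc:
                chosen.add(client)
                break
    return chosen

# Hypothetical example: three clients with differing local accuracies.
scores = {"client_a": 0.90, "client_b": 0.50, "client_c": 0.10}
selected = roulette_wheel_select(scores, num_selected=2, rng=random.Random(0))
```

Here `client_a` holds 60% of the wheel, yet `client_c` still has a 1-in-15 chance per spin, matching the abstract's point that lower-performance clients retain proportional representation.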

Details

Language :
English
ISSN :
20738994
Volume :
16
Issue :
9
Database :
Academic Search Index
Journal :
Symmetry (20738994)
Publication Type :
Academic Journal
Accession number :
180009486
Full Text :
https://doi.org/10.3390/sym16091181