
Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization

Authors:
Han, Pengchao
Wang, Shiqiang
Jiao, Yang
Huang, Jianwei
Publication Year:
2023

Abstract

While providing a machine learning model as a service to process users' inference requests, online applications can periodically upgrade the model using newly collected data. Federated learning (FL) is beneficial for enabling the training of models across distributed clients while keeping the data local. However, existing work has overlooked the coexistence of model training and inference under clients' limited resources. This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients. Such an optimization faces several challenges. The first challenge is to characterize the clients' inference performance when clients may partially participate in FL. To resolve this challenge, we introduce a new notion of age of model (AoM) to quantify client-side model freshness, based on which we use FL's global model convergence error as an approximate measure of inference performance. The second challenge is the tight coupling among clients' decisions, including participation probability in FL, model download probability, and service rates. To address these challenges, we propose an online problem approximation to reduce the problem complexity and optimize the resources to balance the needs of model training and inference. Experimental results demonstrate that the proposed algorithm improves the average inference accuracy by up to 12%.

Comment: Accepted by IEEE International Conference on Computer Communications (INFOCOM) 2024
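The abstract's age of model (AoM) notion is not formally defined in this record. Below is a minimal toy sketch, assuming AoM counts the number of global FL rounds since a client last downloaded the global model and resets to zero on each download; the function name `simulate_aom` and the parameter `p_download` are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_aom(rounds=100, p_download=0.3, seed=0):
    """Toy simulation of client-side age of model (AoM).

    Assumption (not from the record): AoM is the number of global
    FL rounds since the client last downloaded the global model,
    resetting to 0 whenever a download occurs.
    """
    rng = random.Random(seed)
    aom, history = 0, []
    for _ in range(rounds):
        if rng.random() < p_download:
            aom = 0        # client fetched the fresh global model
        else:
            aom += 1       # client keeps serving a stale model
        history.append(aom)
    return sum(history) / len(history)  # average staleness over the horizon

if __name__ == "__main__":
    for p in (0.1, 0.3, 0.7):
        print(f"p_download={p}: mean AoM ~ {simulate_aom(p_download=p):.2f}")
```

A higher download probability keeps the served model fresher (lower mean AoM) but consumes client resources that could otherwise serve inference requests, which illustrates the training/inference trade-off the paper optimizes.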

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.12863
Document Type:
Working Paper