
A Scalable Model Specialization Framework for Training and Inference using Submodels and its Application to Speech Model Personalization

Authors:
Biadsy, Fadi
Chen, Youzheng
Zhang, Xia
Rybakov, Oleg
Rosenberg, Andrew
Moreno, Pedro J.
Publication Year:
2022

Abstract

Model fine-tuning and adaptation have become a common approach to model specialization for downstream tasks or domains. Fine-tuning the entire model or a subset of the parameters using light-weight adaptation has shown considerable success across different specialization tasks. Fine-tuning a model for a large number of domains typically requires starting a new training job for every domain, posing scaling limitations. Once these models are trained, deploying them also poses significant scalability challenges for real-time inference. In this paper, building upon prior light-weight adaptation techniques, we propose a modular framework that enables us to substantially improve scalability for model training and inference. We introduce Submodels that can be quickly and dynamically loaded for on-the-fly inference. We also propose multiple approaches for training those Submodels in parallel using an embedding space in the same training job. We test our framework on an extreme use case, speech model personalization for atypical speech, which requires a Submodel for each user. We obtain 128x Submodel throughput with a fixed computation budget without a loss of accuracy. We also show that learning a speaker-embedding space can scale further and reduce the amount of personalization training data required per speaker.

Comment: Submitted to INTERSPEECH
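To make the Submodel idea concrete, here is a minimal illustrative sketch, not the paper's actual implementation, of the light-weight adaptation pattern the abstract builds on: a shared, frozen base layer combined with small per-speaker adapter weights that are looked up and loaded on the fly at inference time. The names (`AdapterStore`, `bottleneck_dim`, `personalized_forward`) and the random placeholder weights are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a frozen base transformation plus tiny per-speaker
# residual adapters, loaded dynamically by speaker ID. All names and weight
# shapes here are hypothetical, not taken from the paper.
import numpy as np


class AdapterStore:
    """Holds light-weight per-speaker adapter weights keyed by speaker ID."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        self.hidden_dim = hidden_dim
        self.bottleneck_dim = bottleneck_dim
        self._store: dict[str, tuple[np.ndarray, np.ndarray]] = {}

    def add_speaker(self, speaker_id: str, seed: int = 0) -> None:
        # In a real system these weights would come from per-speaker
        # adaptation training; here they are random placeholders.
        rng = np.random.default_rng(seed)
        down = rng.normal(scale=0.02, size=(self.hidden_dim, self.bottleneck_dim))
        up = rng.normal(scale=0.02, size=(self.bottleneck_dim, self.hidden_dim))
        self._store[speaker_id] = (down, up)

    def load(self, speaker_id: str) -> tuple[np.ndarray, np.ndarray]:
        # On-the-fly lookup of one speaker's Submodel parameters.
        return self._store[speaker_id]


def personalized_forward(x: np.ndarray, base_w: np.ndarray,
                         adapter: tuple[np.ndarray, np.ndarray]) -> np.ndarray:
    """Apply the frozen base layer plus a residual bottleneck adapter."""
    down, up = adapter
    base_out = x @ base_w                      # shared, frozen computation
    residual = np.tanh(base_out @ down) @ up   # small speaker-specific correction
    return base_out + residual


if __name__ == "__main__":
    hidden = 8
    base_w = np.eye(hidden)                    # stand-in for frozen base weights
    store = AdapterStore(hidden_dim=hidden, bottleneck_dim=2)
    store.add_speaker("speaker_a")

    features = np.ones((1, hidden))
    out = personalized_forward(features, base_w, store.load("speaker_a"))
    print(out.shape)  # (1, 8)
```

Because only the small adapter tensors differ per speaker, many such Submodels can share one resident base model, which is the scalability property the abstract highlights for training and serving a Submodel per user.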

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2203.12559
Document Type:
Working Paper
Full Text:
https://doi.org/10.21437/Interspeech.2022-10613