Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers

Authors :
Wang, Chengyi
Chen, Sanyuan
Wu, Yu
Zhang, Ziqiang
Zhou, Long
Liu, Shujie
Chen, Zhuo
Liu, Yanqing
Wang, Huaming
Li, Jinyu
He, Lei
Zhao, Sheng
Wei, Furu
Publication Year :
2023

Abstract

We introduce a language modeling approach for text to speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech, which is hundreds of times larger than existing systems. VALL-E exhibits in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experimental results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in terms of speech naturalness and speaker similarity. In addition, we find that VALL-E can preserve the speaker's emotion and the acoustic environment of the acoustic prompt in synthesis. See https://aka.ms/valle for demos of our work.

Comment: Work in progress
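The abstract frames TTS as conditional language modeling over discrete codec tokens. The sketch below is a minimal illustration of that framing, not the authors' implementation: a single decoder-only Transformer predicts the next codec token given phonemized text and the codec tokens of a short acoustic prompt. The names (CodecLM, NUM_TEXT_TOKENS, NUM_CODEC_TOKENS) and the vocabulary sizes are assumptions; VALL-E itself combines an autoregressive model for the first codec quantizer with a non-autoregressive model for the remaining quantizers.

```python
# Minimal sketch (hypothetical, not the VALL-E code): TTS as next-token
# prediction over discrete neural-codec tokens, conditioned on text tokens
# and a ~3-second acoustic prompt from an unseen speaker.
import torch
import torch.nn as nn

NUM_TEXT_TOKENS = 256     # phoneme vocabulary size (assumed)
NUM_CODEC_TOKENS = 1024   # codebook size of the neural audio codec (assumed)
D_MODEL = 512

class CodecLM(nn.Module):
    """Decoder-only Transformer over the concatenation of text embeddings
    and codec-token embeddings, trained with next-token cross-entropy."""
    def __init__(self):
        super().__init__()
        self.text_emb = nn.Embedding(NUM_TEXT_TOKENS, D_MODEL)
        self.code_emb = nn.Embedding(NUM_CODEC_TOKENS, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(D_MODEL, NUM_CODEC_TOKENS)

    def forward(self, text_ids, code_ids):
        # One sequence: [text tokens | codec tokens], with a causal mask so
        # every position attends only to earlier positions.
        x = torch.cat([self.text_emb(text_ids), self.code_emb(code_ids)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.backbone(x, mask=mask)
        # Logits over the codec vocabulary for the acoustic positions only.
        return self.head(h[:, text_ids.size(1):, :])

# Zero-shot usage sketch: the acoustic prompt is the codec encoding of a
# ~3 s enrolled recording; generation would continue the codec sequence.
model = CodecLM()
text = torch.randint(0, NUM_TEXT_TOKENS, (1, 20))      # phonemized input text
prompt = torch.randint(0, NUM_CODEC_TOKENS, (1, 150))  # ~3 s acoustic prompt
logits = model(text, prompt)                            # shape (1, 150, 1024)
```

In this framing, speaker identity, emotion, and acoustic environment are carried implicitly by the prompt's codec tokens, which is why a few seconds of enrolled audio suffice for personalization; the generated codec tokens are then decoded back to a waveform by the codec's decoder.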

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2301.02111
Document Type :
Working Paper