
CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens

Authors:
Du, Zhihao
Chen, Qian
Zhang, Shiliang
Hu, Kai
Lu, Heng
Yang, Yexin
Hu, Hangrui
Zheng, Siqi
Gu, Yue
Ma, Ziyang
Gao, Zhifu
Yan, Zhijie
Publication Year:
2024

Abstract

In recent years, large language model (LLM) based text-to-speech (TTS) has moved into the mainstream owing to its high naturalness and zero-shot capacity. In this paradigm, speech signals are discretized into token sequences, which are modeled by an LLM conditioned on text prompts and then reconstructed into waveforms by a token-based vocoder. Speech tokens therefore play a critical role in LLM-based TTS models. Current speech tokens are learned in an unsupervised manner and thus lack explicit semantic information and alignment with the text. In this paper, we propose to represent speech with supervised semantic tokens, which are derived from a multilingual speech recognition model by inserting vector quantization into the encoder. Based on these tokens, we further propose a scalable zero-shot TTS synthesizer, CosyVoice, which consists of an LLM for text-to-token generation and a conditional flow matching model for token-to-speech synthesis. Experimental results show that supervised semantic tokens significantly outperform existing unsupervised tokens in terms of content consistency and speaker similarity for zero-shot voice cloning. Moreover, we find that utilizing large-scale data further improves synthesis performance, indicating the scalable capacity of CosyVoice. To the best of our knowledge, this is the first attempt to incorporate supervised speech tokens into TTS models.

Comment: work in progress. arXiv admin note: substantial text overlap with arXiv:2407.04051
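For illustration, the sketch below mirrors the two-stage pipeline the abstract describes: an ASR-style encoder with a vector-quantization step produces supervised semantic tokens, an autoregressive LM maps text tokens to those speech tokens, and a conditional flow matching model regresses the velocity field used to synthesize acoustic features from them. This is a minimal PyTorch sketch, not the released CosyVoice code; all class names, dimensions, and architectural details are illustrative assumptions.

# Minimal PyTorch sketch (not the official CosyVoice implementation) of the
# pipeline described in the abstract. All names, sizes, and architectural
# details below are assumptions made for illustration.
import torch
import torch.nn as nn


class SupervisedSpeechTokenizer(nn.Module):
    """ASR-style encoder with a vector-quantization step inserted, yielding
    discrete 'supervised semantic' speech tokens."""

    def __init__(self, feat_dim=80, hidden=256, codebook_size=4096):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.codebook = nn.Embedding(codebook_size, hidden)

    def forward(self, feats):                       # feats: (B, T, feat_dim)
        h, _ = self.encoder(feats)                  # (B, T, hidden)
        codes = self.codebook.weight                # (C, hidden)
        dists = torch.cdist(h, codes.unsqueeze(0).expand(h.size(0), -1, -1))
        return dists.argmin(dim=-1)                 # (B, T) discrete token ids


class TextToTokenLM(nn.Module):
    """Decoder-only LM over the concatenation [text tokens ; speech tokens]."""

    def __init__(self, text_vocab=10000, speech_vocab=4096, hidden=256):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, hidden)
        self.speech_emb = nn.Embedding(speech_vocab, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, speech_vocab)

    def forward(self, text_ids, speech_ids):
        x = torch.cat([self.text_emb(text_ids), self.speech_emb(speech_ids)], dim=1)
        causal = torch.triu(                        # causal mask over the joint sequence
            torch.ones(x.size(1), x.size(1), dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        return self.head(self.backbone(x, mask=causal))   # next-token logits


class FlowMatchingDecoder(nn.Module):
    """Predicts the velocity field v(x_t, t | speech tokens) that transports
    noise to mel-spectrogram frames."""

    def __init__(self, speech_vocab=4096, hidden=256, mel_dim=80):
        super().__init__()
        self.token_emb = nn.Embedding(speech_vocab, hidden)
        self.net = nn.Sequential(
            nn.Linear(mel_dim + hidden + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, mel_dim),
        )

    def forward(self, x_t, t, speech_ids):          # x_t: (B, T, mel_dim)
        cond = self.token_emb(speech_ids)           # (B, T, hidden)
        t_feat = t.view(-1, 1, 1).expand(-1, x_t.size(1), 1)
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))


def flow_matching_loss(decoder, mel, speech_ids):
    """Rectified flow-matching objective: regress the constant velocity
    (x1 - x0) along the straight path x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(mel)                      # noise endpoint
    t = torch.rand(mel.size(0), device=mel.device)  # random time per example
    x_t = (1 - t.view(-1, 1, 1)) * x0 + t.view(-1, 1, 1) * mel
    v_pred = decoder(x_t, t, speech_ids)
    return ((v_pred - (mel - x0)) ** 2).mean()

A complete system would additionally need autoregressive sampling from the LM at inference time and a vocoder to turn the flow matching model's acoustic features into waveforms; both are omitted from this sketch.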

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.05407
Document Type:
Working Paper