
Autoregressive Speech Synthesis without Vector Quantization

Authors :
Meng, Lingwei
Zhou, Long
Liu, Shujie
Chen, Sanyuan
Han, Bing
Hu, Shujie
Liu, Yanqing
Li, Jinyu
Zhao, Sheng
Wu, Xixin
Meng, Helen
Wei, Furu
Publication Year :
2024

Abstract

We present MELLE, a novel continuous-valued token-based language modeling approach to text-to-speech synthesis (TTS). MELLE autoregressively generates continuous mel-spectrogram frames directly from the text condition, bypassing vector quantization, which was originally designed for audio compression and sacrifices fidelity compared to mel-spectrograms. Specifically, (i) instead of a cross-entropy loss, we apply a regression loss together with a proposed spectrogram flux loss function to model the probability distribution of the continuous-valued tokens; (ii) we incorporate variational inference into MELLE to facilitate sampling mechanisms, thereby enhancing output diversity and model robustness. Experiments demonstrate that, compared to the two-stage codec language model VALL-E and its variants, the single-stage MELLE mitigates robustness issues by avoiding the inherent flaws of sampling discrete codes, achieves superior performance across multiple metrics, and, most importantly, offers a more streamlined paradigm. See https://aka.ms/melle for demos of our work.
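To make the two modeling choices above concrete, below is a minimal PyTorch sketch of what a MELLE-style training objective could look like. The abstract does not specify the exact loss definitions, so everything here is an illustrative assumption rather than the paper's implementation: the function name melle_losses, the choice of L1+L2 regression terms, the sign and weighting of the flux term, and the standard Gaussian KL term for the variational component.

    import torch
    import torch.nn.functional as F

    def melle_losses(pred_mel, target_mel, mu, logvar, beta=1.0, lam=0.5):
        """Hypothetical combined loss for MELLE-style training (a sketch,
        not the paper's definition).

        pred_mel, target_mel: (B, T, n_mels) continuous mel-spectrogram frames.
        mu, logvar: (B, T, d) parameters of a latent Gaussian used for
        variational sampling (names and shapes are assumptions).
        """
        # (i) Regression loss: continuous frames are predicted directly,
        # so L1/L2 terms replace the cross-entropy of codec language models.
        reg = F.l1_loss(pred_mel, target_mel) + F.mse_loss(pred_mel, target_mel)

        # Spectrogram "flux" term: one plausible reading is that it rewards
        # frame-to-frame variation so the model does not collapse into
        # oversmoothed or repetitive frames; the exact form is assumed.
        delta = pred_mel[:, 1:] - pred_mel[:, :-1]   # (B, T-1, n_mels)
        flux = -delta.abs().mean()

        # (ii) KL regularizer from variational inference, which enables
        # sampling latents at decode time for output diversity.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

        return reg + lam * flux + beta * kl

    # Usage with random tensors, just to show the expected shapes:
    B, T, n_mels, d = 2, 100, 80, 16
    loss = melle_losses(torch.randn(B, T, n_mels), torch.randn(B, T, n_mels),
                        torch.randn(B, T, d), torch.randn(B, T, d))

The key design point the abstract emphasizes is that replacing discrete token sampling with regression over continuous frames, plus a learned latent for stochasticity, removes the quantizer entirely and collapses the usual two-stage codec pipeline into a single stage.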

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.08551
Document Type :
Working Paper