
VioLA: Unified Codec Language Models for Speech Recognition, Synthesis, and Translation

Authors:
Wang, Tianrui
Zhou, Long
Zhang, Ziqiang
Wu, Yu
Liu, Shujie
Gaur, Yashesh
Chen, Zhuo
Li, Jinyu
Wei, Furu
Publication Year:
2023

Abstract

Recent research shows a strong convergence in model architectures, training objectives, and inference methods across tasks and modalities. In this paper, we propose VioLA, a single auto-regressive Transformer decoder-only network that unifies various cross-modal tasks involving speech and text, such as speech-to-text, text-to-text, text-to-speech, and speech-to-speech tasks, as a conditional codec language model task via a multi-task learning framework. To accomplish this, we first convert all speech utterances into discrete tokens (similar to textual data) using an offline neural codec encoder. In this way, all of these tasks become token-based sequence conversion problems, which can be naturally handled with one conditional language model. We further integrate task IDs (TID) and language IDs (LID) into the proposed model to enhance its ability to handle different languages and tasks. Experimental results demonstrate that the proposed VioLA model supports both single-modal and cross-modal tasks well, and that the decoder-only model achieves comparable or even better performance than strong baselines.

Comment: Work in progress
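To make the framing concrete, the sketch below (not the authors' code) shows how a decoder-only codec language model can reduce a cross-modal task to next-token prediction: speech is first mapped to discrete codec tokens by an offline encoder, then a task ID and language ID are prepended, and the source and target token streams are concatenated into one sequence for a causal Transformer. All names here (tokenize_speech, build_sequence, TinyCodecLM, the specific ID values and vocabulary size) are hypothetical placeholders; the paper's actual codec, vocabulary layout, and model configuration may differ.

```python
# Minimal sketch, assuming a PyTorch setup; not the authors' implementation.
import torch
import torch.nn as nn

# Hypothetical special-token ids for task and language conditioning.
TID_ASR, TID_MT, TID_TTS = 0, 1, 2   # task IDs: speech-to-text, text-to-text, text-to-speech
LID_EN, LID_ZH = 3, 4                # language IDs
VOCAB_SIZE = 2048                    # assumed joint vocabulary: text + codec + special tokens


def tokenize_speech(waveform: torch.Tensor) -> torch.Tensor:
    """Stand-in for an offline neural codec encoder mapping a waveform to
    discrete acoustic tokens. A real system would call a pretrained codec here."""
    n_tokens = max(1, waveform.numel() // 320)          # placeholder frame rate
    return torch.randint(5, VOCAB_SIZE, (n_tokens,))


def build_sequence(tid: int, lid: int, source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Concatenate task ID, language ID, source tokens, and target tokens so the
    task becomes plain next-token prediction over one token stream."""
    return torch.cat([torch.tensor([tid, lid]), source, target])


class TinyCodecLM(nn.Module):
    """A small decoder-only Transformer standing in for the unified model."""
    def __init__(self, d_model: int = 256, n_layers: int = 2, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)  # used with a causal mask
        self.lm_head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.decoder(x, mask=mask)
        return self.lm_head(h)


# Example: an ASR-style sample (speech tokens followed by text tokens).
speech_tokens = tokenize_speech(torch.randn(16000))   # one second of fake audio
text_tokens = torch.randint(5, VOCAB_SIZE, (12,))     # fake transcript token ids
seq = build_sequence(TID_ASR, LID_EN, speech_tokens, text_tokens).unsqueeze(0)

model = TinyCodecLM()
logits = model(seq)        # next-token logits over the joint vocabulary
print(logits.shape)        # (1, sequence_length, VOCAB_SIZE)
```

The same model and sequence format cover the other tasks by swapping the task ID and the source/target token types, which is the sense in which a single conditional language model handles speech-to-text, text-to-text, text-to-speech, and speech-to-speech uniformly.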

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.16107
Document Type:
Working Paper