
MoLE: Mixture of Language Experts for Multi-Lingual Automatic Speech Recognition

Authors :
Kwon, Yoohwan
Chung, Soo-Whan
Publication Year :
2023

Abstract

Multi-lingual speech recognition aims to distinguish linguistic expressions in different languages and integrate acoustic processing simultaneously. In contrast, current multi-lingual speech recognition research follows a language-aware paradigm, mainly targeted at improving recognition performance rather than discriminating language characteristics. In this paper, we present a multi-lingual speech recognition network named Mixture-of-Language-Experts (MoLE), which digests speech in a variety of languages. Specifically, MoLE analyzes the linguistic expression of input speech in arbitrary languages, activating a language-specific expert with a lightweight language tokenizer. The tokenizer not only activates experts, but also estimates the reliability of the activation. Based on this reliability, the activated expert and a language-agnostic expert are aggregated into a language-conditioned embedding for efficient speech recognition. Our proposed model is evaluated in a 5-language scenario, and the experimental results show that our structure is advantageous for multi-lingual recognition, especially for speech in low-resource languages.

Comment: Accepted by ICASSP 2023
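The routing mechanism described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the tokenizer, the experts, and the reliability-weighted blend are all reduced to single linear maps, and every name (`tokenizer_w`, `experts`, `mole_step`) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, N_LANGS = 8, 5  # embedding size and number of language experts (illustrative)

# Hypothetical parameters: a lightweight language tokenizer, one linear
# "expert" per language, and one language-agnostic expert.
tokenizer_w = rng.normal(size=(DIM, N_LANGS))
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_LANGS)]
agnostic = rng.normal(size=(DIM, DIM))

def mole_step(x: np.ndarray) -> np.ndarray:
    """One MoLE-style step: route to a language expert, then blend it with
    the language-agnostic expert using the tokenizer's confidence."""
    logits = x @ tokenizer_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    lang = int(probs.argmax())        # activated language-specific expert
    reliability = float(probs.max())  # tokenizer's confidence in that choice
    expert_out = x @ experts[lang]
    agnostic_out = x @ agnostic
    # Reliability-weighted aggregation of the two embeddings
    return reliability * expert_out + (1.0 - reliability) * agnostic_out

frame = rng.normal(size=DIM)  # a single dummy speech embedding
out = mole_step(frame)
print(out.shape)
```

When the tokenizer is confident (reliability near 1), the language-specific expert dominates; when it is unsure, the output falls back toward the language-agnostic expert, which is one plausible reading of the aggregation the abstract describes.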

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2302.13750
Document Type :
Working Paper