
TransMask: A Compact and Fast Speech Separation Model Based on Transformer

Authors:
Zhang, Zining
He, Bingsheng
Zhang, Zhenjie

Publication Year: 2021

Abstract

Speech separation is an important problem in speech processing that aims to separate and generate clean speech from mixed audio containing speech from multiple speakers. Empowered by deep learning over sequence-to-sequence modeling, recent neural speech separation models can generate highly clean speech. To make these models more practical by reducing model size and inference time while maintaining high separation quality, we propose a new transformer-based speech separation approach called TransMask. By fully unleashing the power of self-attention for long-term dependency modeling, we show that TransMask is more than 60% smaller and more than two times faster at inference than state-of-the-art solutions. TransMask fully exploits parallelism during inference and achieves nearly linear inference time for reasonable input audio lengths. It also outperforms existing solutions in output speech quality, achieving an SDR above 16 dB on the LibriMix benchmark.

Comment: Accepted at ICASSP 2021
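The abstract describes a mask-based, time-domain separation scheme in which a transformer captures long-term dependencies. The sketch below illustrates that general idea, not the published TransMask architecture: a learned 1-D convolution encodes the mixture, a transformer encoder models the sequence, one multiplicative mask per speaker is predicted, and each masked representation is decoded back to a waveform. All names and hyperparameters here (TransMaskSketch, n_filters, layer counts, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransMaskSketch(nn.Module):
    """Minimal mask-based separation sketch: encode the mixture,
    estimate one mask per speaker with a transformer encoder,
    apply the masks, and decode back to waveforms.

    NOTE: this is a hypothetical illustration of transformer-based
    masking, not the authors' TransMask; all sizes are assumptions.
    """

    def __init__(self, n_speakers=2, n_filters=256, kernel_size=16,
                 stride=8, n_layers=4, n_heads=4):
        super().__init__()
        self.n_speakers = n_speakers
        self.n_filters = n_filters
        # Learned 1-D conv front end, common in time-domain separators.
        self.encoder = nn.Conv1d(1, n_filters, kernel_size, stride=stride)
        layer = nn.TransformerEncoderLayer(
            d_model=n_filters, nhead=n_heads,
            dim_feedforward=4 * n_filters, batch_first=True)
        self.separator = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Predict one mask per speaker for every encoded frame.
        self.mask_head = nn.Linear(n_filters, n_speakers * n_filters)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size,
                                          stride=stride)

    def forward(self, mix):                        # mix: (batch, samples)
        feats = self.encoder(mix.unsqueeze(1))     # (batch, F, T)
        h = self.separator(feats.transpose(1, 2))  # (batch, T, F)
        masks = torch.sigmoid(self.mask_head(h))   # (batch, T, S*F)
        masks = masks.view(h.size(0), h.size(1),
                           self.n_speakers, self.n_filters)
        outs = []
        for s in range(self.n_speakers):
            # Apply speaker s's mask, then decode back to a waveform.
            masked = (h * masks[:, :, s]).transpose(1, 2)   # (batch, F, T)
            outs.append(self.decoder(masked).squeeze(1))
        return torch.stack(outs, dim=1)            # (batch, S, samples)

if __name__ == "__main__":
    model = TransMaskSketch()
    mixture = torch.randn(2, 16000)   # one second of 16 kHz audio
    separated = model(mixture)
    print(separated.shape)            # torch.Size([2, 2, 16000])
```

Because the transformer encoder processes all frames of the input in one batched pass rather than recurrently, inference parallelizes across the whole sequence, which is the property the abstract credits for the near-linear inference time.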

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2102.09978
Document Type: Working Paper