
Ultra Fast Speech Separation Model with Teacher Student Learning

Authors:
Chen, Sanyuan
Wu, Yu
Chen, Zhuo
Wu, Jian
Yoshioka, Takuya
Liu, Shujie
Li, Jinyu
Yu, Xiangzhan
Publication Year:
2022

Abstract

The Transformer has recently been applied to speech separation with great success, owing to the strong long-range dependency modeling capacity of its self-attention mechanism. However, Transformer models tend to incur heavy run-time costs because of their deep encoder stacks, which hinders deployment on edge devices. A small Transformer with fewer encoder layers is preferable for computational efficiency, but it is prone to performance degradation. In this paper, an ultra fast speech separation Transformer model is proposed that achieves both better performance and higher efficiency through teacher-student learning (T-S learning). We introduce layer-wise T-S learning and objective shifting mechanisms to guide the small student model to learn intermediate representations from the large teacher model. Compared with a small Transformer model trained from scratch, the proposed T-S learning method reduces the word error rate (WER) by more than 5% for both multi-channel and single-channel speech separation on the LibriCSS dataset. Utilizing additional unlabeled speech data, our ultra fast speech separation models achieve a relative WER reduction of more than 10%.

Comment: Accepted by Interspeech 2021
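The abstract names two mechanisms, layer-wise T-S learning and objective shifting. The sketch below is a minimal, illustrative rendition in PyTorch and not the authors' implementation: the layer pairing (each student layer matched to every ratio-th teacher layer), the linear schedule, and the function names (layerwise_ts_loss, objective_shifting_weight, training_loss) are all assumptions made for illustration.

import torch
import torch.nn.functional as F


def layerwise_ts_loss(student_hiddens, teacher_hiddens):
    # Pair each student layer with every ratio-th teacher layer and match the
    # hidden representations with an L2 loss (an assumed pairing; the paper's
    # exact mapping may differ).
    ratio = len(teacher_hiddens) // len(student_hiddens)
    loss = 0.0
    for i, s_h in enumerate(student_hiddens):
        t_h = teacher_hiddens[(i + 1) * ratio - 1].detach()  # no gradient into the teacher
        loss = loss + F.mse_loss(s_h, t_h)
    return loss / len(student_hiddens)


def objective_shifting_weight(step, total_steps):
    # Linearly shift the objective from distillation toward the separation
    # loss as training progresses (an assumed schedule).
    return max(0.0, 1.0 - step / total_steps)


def training_loss(step, total_steps, student_hiddens, teacher_hiddens, separation_loss):
    # Combine the layer-wise distillation term and the task loss with the
    # shifting weight.
    alpha = objective_shifting_weight(step, total_steps)
    distill = layerwise_ts_loss(student_hiddens, teacher_hiddens)
    return alpha * distill + (1.0 - alpha) * separation_loss

Here, separation_loss stands for whatever separation objective the student is trained with; computing it, and running the teacher and student forward passes that produce the hidden-state lists, is omitted.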

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2204.12777
Document Type:
Working Paper
Full Text:
https://doi.org/10.21437/Interspeech.2021-142