
Self-regularised Minimum Latency Training for Streaming Transformer-based Speech Recognition

Authors:
Li, Mohan
Doddipatla, Rama
Zorila, Catalin
Publication Year:
2023

Abstract

This paper proposes a self-regularised minimum latency training (SR-MLT) method for streaming Transformer-based automatic speech recognition (ASR) systems. In previous works, latency was optimised by truncating the online attention weights based on hard alignments obtained from conventional ASR models, without taking into account the potential loss of ASR accuracy. In contrast, we present a strategy that obtains the alignments as part of the model training, without external supervision. The alignments produced by the proposed method are dynamically regularised on the training data, so that latency reduction does not come at the cost of ASR accuracy. SR-MLT is applied as a fine-tuning step to pre-trained Transformer models that use either monotonic chunkwise attention (MoChA) or cumulative attention (CA) for online decoding. ASR experiments on the AIShell-1 and LibriSpeech datasets show that, when applied to a well-trained MoChA or CA baseline model, SR-MLT effectively reduces latency, with relative gains ranging from 11.8% to 39.5%. Furthermore, we demonstrate that at comparable accuracy levels, models trained with SR-MLT achieve lower latency than those supervised with external hard alignments.

Comment: 5 pages, 2 figures, accepted at Interspeech 2022
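To illustrate the general idea of jointly optimising ASR accuracy and latency with alignments drawn from the model itself (rather than from external hard alignments), here is a minimal PyTorch sketch. It is not the authors' implementation; the class name, the expected-boundary formulation, and the latency weight are illustrative assumptions about how such a joint objective could be wired up for a MoChA- or CA-style online decoder.

```python
# Minimal sketch (not the authors' code): fine-tuning with a joint
# ASR + latency objective, where the alignment used for the latency
# term comes from the model's own online attention weights instead
# of an external hard alignment. Names and formulation are assumed.

import torch
import torch.nn as nn


class LatencyRegularisedLoss(nn.Module):
    """ASR cross-entropy plus a latency penalty derived from the
    decoder's own online attention weights (hypothetical sketch)."""

    def __init__(self, latency_weight: float = 0.1):
        super().__init__()
        self.ce = nn.CrossEntropyLoss(ignore_index=-1)
        self.latency_weight = latency_weight

    def expected_boundary(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (batch, n_tokens, n_frames) attention weights of an
        # online decoder (e.g. MoChA or CA). The expected attended
        # frame per token acts as a soft, self-produced alignment.
        frames = torch.arange(attn.size(-1), device=attn.device).float()
        return (attn * frames).sum(dim=-1)  # (batch, n_tokens)

    def forward(self, logits, targets, attn):
        # logits: (batch, n_tokens, vocab), targets: (batch, n_tokens)
        asr_loss = self.ce(logits.transpose(1, 2), targets)
        # Latency term: penalise late expected boundaries. Because the
        # boundaries come from the same attention that drives the ASR
        # loss, pushing them too far forward also degrades asr_loss,
        # which is what balances accuracy against latency here.
        latency_loss = self.expected_boundary(attn).mean()
        return asr_loss + self.latency_weight * latency_loss
```

In this sketch the trade-off is controlled by a single scalar weight; the paper's method instead regularises the alignments dynamically on the training data, so treat the fixed `latency_weight` purely as a placeholder for that mechanism.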

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2304.11985
Document Type:
Working Paper