
End-to-End Multi-Speaker Speech Recognition

Authors:
Jonathan Le Roux
Shinji Watanabe
John R. Hershey
Shane Settle
Takaaki Hori
Source:
ICASSP
Publication Year:
2018
Publisher:
IEEE, 2018.

Abstract

Current advances in deep learning have resulted in a convergence of methods across a wide range of tasks, opening the door to tighter integration of modules that were previously developed and optimized in isolation. Recent ground-breaking works have produced end-to-end deep network methods for both speech separation and end-to-end automatic speech recognition (ASR). Speech separation methods such as deep clustering address the challenging cocktail-party problem of distinguishing multiple simultaneous speech signals, an enabling technology for real-world human-machine interaction (HMI). However, for any HMI task, speech separation still requires ASR to interpret the separated speech; likewise, ASR requires speech separation to operate in unconstrained environments. Although these two components can be trained in isolation and connected after the fact, this paradigm is likely to be sub-optimal, since it relies on artificially mixed data. In this paper, we develop the first fully end-to-end, jointly trained deep learning system for separation and recognition of overlapping speech signals. The joint training framework synergistically adapts separation and recognition to each other. As an additional benefit, it enables training on more realistic data that contains only mixed signals and their transcriptions, and is thus suited to large-scale training on existing transcribed data.
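To make the joint-training idea concrete, below is a minimal PyTorch-style sketch of the deep clustering objective combined with an ASR loss through a weighted interpolation. This is an illustrative sketch under stated assumptions, not the paper's implementation: the tensor shapes, the interpolation weight `lam`, and the `asr_loss` term (e.g., a CTC or attention cross-entropy loss computed on the separated streams) are assumptions introduced for the example.

```python
import torch


def deep_clustering_loss(embeddings, assignments):
    """Permutation-free deep clustering objective (illustrative sketch).

    embeddings:  (B, T*F, D) unit-norm embedding per time-frequency bin
    assignments: (B, T*F, C) one-hot ideal speaker assignment per bin

    Computes ||V V^T - Y Y^T||_F^2 without materializing the large
    (T*F, T*F) affinity matrices, using the identity
    ||V V^T - Y Y^T||^2 = ||V^T V||^2 - 2 ||V^T Y||^2 + ||Y^T Y||^2.
    """
    vtv = torch.bmm(embeddings.transpose(1, 2), embeddings)    # (B, D, D)
    vty = torch.bmm(embeddings.transpose(1, 2), assignments)   # (B, D, C)
    yty = torch.bmm(assignments.transpose(1, 2), assignments)  # (B, C, C)
    return (vtv.pow(2).sum((1, 2))
            - 2 * vty.pow(2).sum((1, 2))
            + yty.pow(2).sum((1, 2))).mean()


def joint_loss(dc_loss, asr_loss, lam=0.5):
    # Interpolate the separation and recognition objectives so that
    # gradients from the ASR branch also shape the separation network.
    # `lam` is a hypothetical tuning knob, not a value from the paper.
    return lam * dc_loss + (1.0 - lam) * asr_loss
```

In a joint setup of this kind, the ASR loss would be computed on the separated signal estimates, with the speaker-to-transcription permutation typically resolved either by the deep clustering assignments or by taking the minimum loss over all permutations.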

Details

Database:
OpenAIRE
Journal:
2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Accession number:
edsair.doi...........2e7b2fbb5c0c6b6bad9ef53ffb34116b
Full Text:
https://doi.org/10.1109/icassp.2018.8461893