Using Synthetic Audio to Improve The Recognition of Out-Of-Vocabulary Words in End-To-End ASR Systems
- Source :
- ICASSP
- Publication Year :
- 2020
Abstract
- Today, many state-of-the-art automatic speech recognition (ASR) systems apply all-neural models that map audio to word sequences, trained end-to-end with a single global optimisation criterion in a fully data-driven fashion. These models deliver high-accuracy ASR for domains and words represented in the training material, but have difficulty recognising words that are rarely or never seen during training, e.g. trending words and new named entities. In this paper, we use a text-to-speech (TTS) engine to provide synthetic audio for out-of-vocabulary (OOV) words. We aim to boost the recognition accuracy of a recurrent neural network transducer (RNN-T) on OOV words by using the extra audio-text pairs, while maintaining the performance on the non-OOV words. Different regularisation techniques are explored, and the best performance is achieved by fine-tuning the RNN-T on both the original training data and the extra synthetic data with elastic weight consolidation (EWC) applied to the encoder. This yields a 57% relative word error rate (WER) reduction on utterances containing OOV words without any degradation on the whole test set.
- To appear in Proc. ICASSP2021, June 06-11, 2021, Toronto, Ontario, Canada
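- The key regularisation step named in the abstract, elastic weight consolidation (EWC) applied to the encoder, amounts to adding a quadratic penalty that anchors the encoder weights to their pre-fine-tuning values while the model is trained on the mix of original and synthetic (TTS) audio-text pairs. The following is a minimal sketch of that penalty, assuming a PyTorch encoder module; the names encoder, ref_params, fisher and the weight lam are illustrative placeholders, not the authors' code.

    import torch

    def ewc_penalty(encoder, ref_params, fisher, lam=1.0):
        # Assumed EWC form: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
        # where theta*_i are the encoder weights of the baseline model and
        # F_i is a diagonal Fisher-information estimate computed on the
        # original training data.
        penalty = torch.zeros(())
        for name, param in encoder.named_parameters():
            penalty = penalty + (fisher[name] * (param - ref_params[name]) ** 2).sum()
        return 0.5 * lam * penalty

    # During fine-tuning, this term would be added to the usual RNN-T loss:
    #   loss = rnnt_loss(batch) + ewc_penalty(encoder, ref_params, fisher, lam)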
- Subjects :
- FOS: Computer and information sciences
Sound (cs.SD)
Computer science
Speech recognition
Word error rate
Synthetic data
Computer Science - Sound
Data-driven
Reduction (complexity)
Recurrent neural network
Audio and Speech Processing (eess.AS)
Test set
FOS: Electrical engineering, electronic engineering, information engineering
Encoder
Word (computer architecture)
Electrical Engineering and Systems Science - Audio and Speech Processing
Details
- Language :
- English
- Database :
- OpenAIRE
- Journal :
- ICASSP
- Accession number :
- edsair.doi.dedup.....e72c9573d7b735caf39a3d67f9510a72