
Toward enriched decoding of Mandarin spontaneous speech.

Authors :
Deng, Yu-Chih
Liao, Yuan-Fu
Wang, Yih-Ru
Chen, Sin-Horng
Source :
Speech Communication. Oct 2023, Vol. 154.
Publication Year :
2023

Abstract

Highlights:
• Enriched decoding of spontaneous speech achieves better recognition performance.
• Part-of-speech features help to reduce the perplexity of the language model.
• The hierarchical prosodic model enriches the recognition output with break-type and prosodic-state information.
• The reduplication-word language model helps to suppress the output of redundant words.

A deep neural network (DNN)-based automatic speech recognition (ASR) method for enriched decoding of Mandarin spontaneous speech is proposed. It enhances a baseline system composed of a factored time delay neural network (TDNN-f) acoustic model (AM), a trigram language model (LM), and a recurrent neural network language model (RNNLM), which is first used to generate a word lattice. It then sequentially incorporates a multi-task Part-of-Speech RNNLM (POS-RNNLM), a hierarchical prosodic model (HPM), and a reduplication-word LM (RLM) into the decoding process by expanding the word lattice and rescoring it. This improves recognition performance and enriches the decoding output with syntactic parameters of POS and punctuation (PM), prosodic tags of word-juncture break types and syllable prosodic states, and an edited recognition text with reduplication words eliminated. Experimental results on the Mandarin Conversational Dialogue Corpus (MCDC) showed that SER, CER, and WER of 13.2 %, 13.9 %, and 19.1 % were achieved when the POS-RNNLM and HPM were incorporated into the baseline system, representing relative SER, CER, and WER reductions of 7.7 %, 7.9 %, and 5.0 % compared with the baseline system. Furthermore, the use of the RLM yielded additional relative SER, CER, and WER reductions of 3 %, 4.6 %, and 4.5 % by eliminating reduplication words. [ABSTRACT FROM AUTHOR]
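
The abstract describes a multi-pass pipeline in which a baseline word lattice is expanded and rescored with additional models. The sketch below is not the authors' code; it only illustrates, under assumed data structures, how per-arc acoustic and language-model scores might be combined with interpolation weights and the best path extracted from a lattice. The names (Arc, Lattice, best_path), the weights, and the toy scores are all hypothetical; the paper itself works with TDNN-f/RNNLM lattices plus POS-RNNLM, HPM, and RLM scores.

# Minimal sketch (not the authors' implementation): lattice rescoring by
# interpolating per-arc scores from several models, then Viterbi search.
from dataclasses import dataclass, field
from math import inf

@dataclass
class Arc:
    next_state: int
    word: str
    am_score: float                                 # acoustic log-score
    lm_scores: dict = field(default_factory=dict)   # model name -> log prob

@dataclass
class Lattice:
    start: int
    finals: set
    arcs: dict   # state -> list[Arc]; states assumed topologically numbered

def best_path(lattice: Lattice, lm_weights: dict) -> list:
    """Return the word sequence of the highest-scoring lattice path,
    where each arc's score is its AM score plus a weighted sum of the
    attached language/prosody model scores."""
    best = {lattice.start: (0.0, [])}
    for state in sorted(lattice.arcs):
        if state not in best:
            continue
        score, words = best[state]
        for arc in lattice.arcs[state]:
            arc_score = arc.am_score + sum(
                lm_weights.get(name, 0.0) * s for name, s in arc.lm_scores.items())
            cand = (score + arc_score, words + [arc.word])
            if arc.next_state not in best or cand[0] > best[arc.next_state][0]:
                best[arc.next_state] = cand
    _, hyp = max((best[s] for s in lattice.finals if s in best),
                 default=(-inf, []))
    return hyp

# Toy usage with two competing first words; weights are illustrative only.
lat = Lattice(start=0, finals={2}, arcs={
    0: [Arc(1, "你好", -2.0, {"trigram": -1.0, "pos_rnnlm": -0.8}),
        Arc(1, "妳好", -2.1, {"trigram": -1.5, "pos_rnnlm": -1.6})],
    1: [Arc(2, "嗎", -0.5, {"trigram": -0.4, "pos_rnnlm": -0.3})],
})
print(best_path(lat, {"trigram": 1.0, "pos_rnnlm": 0.5}))

In this toy example the POS-RNNLM-style weight shifts the choice toward the arc whose combined score is higher, which is the generic mechanism behind lattice rescoring; the paper additionally expands the lattice so that prosodic tags and reduplication edits can be attached to the output.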

Details

Language :
English
ISSN :
0167-6393
Volume :
154
Database :
Academic Search Index
Journal :
Speech Communication
Publication Type :
Academic Journal
Accession number :
172809738
Full Text :
https://doi.org/10.1016/j.specom.2023.102983