
Transformer-based Models of Text Normalization for Speech Applications

Authors :
Ro, Jae Hun
Stahlberg, Felix
Wu, Ke
Kumar, Shankar
Publication Year :
2022

Abstract

Text normalization, or the process of transforming text into a consistent, canonical form, is crucial for speech applications such as text-to-speech synthesis (TTS). In TTS, the system must decide whether to verbalize "1995" as "nineteen ninety five" in "born in 1995" or as "one thousand nine hundred ninety five" in "page 1995". We present an experimental comparison of various Transformer-based sequence-to-sequence (seq2seq) models of text normalization for speech and evaluate them on a variety of datasets of written text aligned to its normalized spoken form. These models include variants of the 2-stage RNN-based tagging/seq2seq architecture introduced by Zhang et al. (2019), where we replace the RNN with a Transformer in one or more stages, as well as vanilla Transformers that output string representations of edit sequences. Of our approaches, using Transformers for sentence context encoding within the 2-stage model proved most effective, with the fine-tuned BERT encoder yielding the best performance.
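To make the edit-sequence idea concrete, here is a minimal sketch of how a predicted edit string could be applied to a written-form sentence to recover its spoken form. The "KEEP"/"SUB" format and the apply_edits helper are illustrative assumptions, not the representation actually used in the paper.

```python
# Minimal sketch (assumed format, not the paper's exact representation):
# a model that "outputs string representations of edit sequences" would
# emit something like `edits` below instead of the full normalized sentence.

def apply_edits(tokens, edits):
    """Apply per-token edit operations to produce the spoken form.

    Each edit is either "KEEP" (copy the written token unchanged) or
    "SUB <replacement words>" (replace the token with its verbalization).
    """
    output = []
    for token, edit in zip(tokens, edits.split(" | ")):
        if edit == "KEEP":
            output.append(token)
        elif edit.startswith("SUB "):
            output.append(edit[len("SUB "):])
    return " ".join(output)


tokens = ["born", "in", "1995"]
edits = "KEEP | KEEP | SUB nineteen ninety five"
print(apply_edits(tokens, edits))
# -> born in nineteen ninety five
```

Because most tokens are copied verbatim ("KEEP"), such an edit-based output is much shorter than the full normalized sentence, which is the usual motivation for this kind of representation.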

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1333747055
Document Type :
Electronic Resource