
Non-autoregressive Transformer-based End-to-end ASR using BERT

Authors:
Yu, Fu-Hao
Chen, Kuan-Yu
Source:
IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 1474-1482, 2022
Publication Year:
2021

Abstract

Transformer-based models have driven significant innovation in fields as varied as speech processing, natural language processing, and computer vision. Building on the Transformer, attention-based end-to-end automatic speech recognition (ASR) models have recently become popular. In particular, non-autoregressive modeling, which offers fast inference with performance comparable to conventional autoregressive methods, is an emerging research topic. In natural language processing, the bidirectional encoder representations from Transformers (BERT) model has received widespread attention, partly because it infers contextualized word representations and delivers strong performance on downstream tasks with only simple fine-tuning. Motivated by this success, we view speech recognition as a downstream task of BERT, so that an ASR system can be obtained simply by fine-tuning. Consequently, to inherit the advantages of non-autoregressive ASR models while also enjoying the benefits of a pre-trained language model such as BERT, we propose a non-autoregressive Transformer-based end-to-end ASR model built on BERT. A series of experiments on the AISHELL-1 dataset demonstrates that the proposed model achieves results competitive with or superior to state-of-the-art ASR systems.
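The abstract's core idea, namely feeding acoustic representations into BERT so that all output tokens are predicted in a single parallel pass, can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the paper's actual architecture: the class name AcousticToBert, the convolutional subsampling, the choice of bert-base-chinese (plausible only because AISHELL-1 is a Mandarin corpus), and the tied output projection are all introduced here, not taken from the source.

# Minimal sketch (not the authors' exact method): treat ASR as a BERT
# "downstream task" by projecting acoustic frames into BERT's input
# space and scoring every output position at once (non-autoregressive).
import torch
import torch.nn as nn
from transformers import BertModel

class AcousticToBert(nn.Module):  # hypothetical class name
    def __init__(self, feat_dim=80, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Simple acoustic encoder: subsample frames 4x and project to
        # BERT's hidden size so frames can stand in for token embeddings.
        self.subsample = nn.Conv1d(feat_dim, hidden, kernel_size=4, stride=4)

    def forward(self, feats):
        # feats: (batch, frames, feat_dim) log-Mel filterbank features
        x = self.subsample(feats.transpose(1, 2)).transpose(1, 2)
        # Pass projected frames to BERT via inputs_embeds; all positions
        # are processed in one forward pass, i.e. no left-to-right loop.
        out = self.bert(inputs_embeds=x).last_hidden_state
        # Reuse BERT's input embedding matrix as the output projection.
        logits = out @ self.bert.embeddings.word_embeddings.weight.T
        return logits  # (batch, positions, vocab)

model = AcousticToBert()
dummy = torch.randn(2, 200, 80)   # two utterances, 200 frames each
print(model(dummy).shape)         # torch.Size([2, 50, 21128])

Because BERT scores every position in one forward pass, decoding cost does not grow with the output length, which is the inference-speed advantage of non-autoregressive modeling that the abstract refers to.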

Details

Database:
arXiv
Journal:
IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 1474-1482, 2022
Publication Type:
Report
Accession Number:
edsarx.2104.04805
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TASLP.2022.3166400