
Earnings-21: A Practical Benchmark for ASR in the Wild

Authors:
Del Rio, Miguel
Delworth, Natalie
Westerman, Ryan
Huang, Michelle
Bhandari, Nishchal
Palakapilly, Joseph
McNamara, Quinten
Dong, Joshua
Zelasko, Piotr
Jette, Miguel
Publication Year: 2021

Abstract

Commonly used speech corpora inadequately challenge academic and commercial ASR systems. In particular, speech corpora lack metadata needed for detailed analysis and WER measurement. In response, we present Earnings-21, a 39-hour corpus of earnings calls containing entity-dense speech from nine different financial sectors. This corpus is intended to benchmark ASR systems in the wild with special attention towards named entity recognition. We benchmark four commercial ASR models, two internal models built with open-source tools, and an open-source LibriSpeech model and discuss their differences in performance on Earnings-21. Using our recently released fstalign tool, we provide a candid analysis of each model's recognition capabilities under different partitions. Our analysis finds that ASR accuracy for certain NER categories is poor, presenting a significant impediment to transcript comprehension and usage. Earnings-21 bridges academic and commercial ASR system evaluation and enables further research on entity modeling and WER on real-world audio.

Comments: Accepted to INTERSPEECH 2021. June 15, 2021: Addressed reviewer comments and updated the results of our internal ESPnet model; the results do not change our conclusions. April 28, 2021: We found and resolved an issue in our experimental evaluation that scored the LibriSpeech model at roughly 20% worse relative WER than its actual WER; the updated results do not affect our conclusions.
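For readers unfamiliar with the metric mentioned in the abstract, the sketch below illustrates a generic token-level WER computation in Python. It is a simplified, self-contained example, not the fstalign tool the authors use (fstalign is an FST-based aligner that also supports partitioned, entity-aware scoring), and the reference/hypothesis sentences are invented for illustration only.

```python
# Minimal WER sketch: Levenshtein distance over word tokens, normalized by
# the reference length. This is an illustrative scorer, not fstalign.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    ref = "revenue grew four percent in the third quarter"
    hyp = "revenue grew for percent in third quarter"
    print(f"WER: {wer(ref, hyp):.2%}")  # one substitution + one deletion over 8 words -> 25.00%
```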

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2104.11348
Document Type: Working Paper
Full Text: https://doi.org/10.21437/Interspeech.2021-1915