
Improving Whisper's Recognition Performance for Under-Represented Language Kazakh Leveraging Unpaired Speech and Text

Authors :
Li, Jinpeng
Pu, Yu
Sun, Qi
Zhang, Wei-Qiang
Publication Year :
2024

Abstract

Whisper and other large-scale automatic speech recognition models have made significant progress in performance. However, their performance on many low-resource languages, such as Kazakh, remains unsatisfactory. It is worth researching how to utilize low-cost data to improve the performance of Whisper on under-represented languages. In this study, we utilized easily accessible unpaired speech and text data and combined the language model GPT with Whisper on Kazakh. We implemented end-of-transcript (EOT) judgment modification and a hallucination penalty to improve speech recognition performance. Further, we employed the average token log probability of the decoding as a criterion to select samples from unlabeled speech data, and used the resulting pseudo-labeled data to fine-tune the model to further improve its performance. Ultimately, we achieved more than 10% absolute WER reduction across multiple experiments, and the whole process has the potential to generalize to other under-represented languages.

Comment: Accepted by INTERSPEECH 2024; minor typo correction
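The selection step described in the abstract — keeping only pseudo-labeled utterances whose decoding confidence is high enough — can be sketched as below. This is an illustrative outline, not the authors' code: the threshold value, the data layout, and the function names are assumptions; in practice the per-token log probabilities would come from Whisper's decoder output for each unlabeled utterance.

```python
import math

def avg_token_logprob(token_logprobs):
    """Average per-token log probability of one decoded transcript.

    An empty transcript gets -inf so it is never selected.
    """
    if not token_logprobs:
        return -math.inf
    return sum(token_logprobs) / len(token_logprobs)

def select_pseudo_labels(decoded, threshold=-0.5):
    """Filter pseudo-labeled samples by decoding confidence.

    `decoded` maps an utterance id to (transcript, per-token log-probs).
    The threshold of -0.5 is a placeholder; the paper does not report
    the exact cutoff used.
    """
    selected = []
    for utt_id, (text, logprobs) in decoded.items():
        if avg_token_logprob(logprobs) >= threshold:
            selected.append((utt_id, text))
    return selected

# Example: one confident decoding is kept, one uncertain decoding is dropped.
decoded = {
    "utt1": ("salem alem", [-0.10, -0.20, -0.15]),   # high confidence
    "utt2": ("noisy guess", [-2.00, -3.00, -2.50]),  # low confidence
}
kept = select_pseudo_labels(decoded)
# kept == [("utt1", "salem alem")]
```

The retained (audio, pseudo-transcript) pairs would then be used as ordinary supervised data to fine-tune the model, as the abstract describes.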

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.05554
Document Type :
Working Paper