
End-to-End Lyrics Recognition with Self-supervised Learning

Authors :
Zhang, Xiangyu
Li, Shuyue Stella
He, Zhanhong
Togneri, Roberto
Garcia, Leibny Paola
Publication Year :
2022

Abstract

Lyrics recognition is an important task in music processing. Although traditional approaches such as the hybrid HMM-TDNN model achieve good performance, studies applying end-to-end models and self-supervised learning (SSL) are limited. In this paper, we first establish an end-to-end baseline for lyrics recognition and then explore the performance of SSL models on the lyrics recognition task. We evaluate a variety of upstream SSL models with different training methods (masked reconstruction, masked prediction, autoregressive reconstruction, and contrastive learning). Our end-to-end self-supervised models, evaluated on the DAMP music dataset, outperform the previous state-of-the-art (SOTA) system by 5.23% on the dev set and 2.4% on the test set, even without a language model trained on a large corpus. Moreover, we investigate the effect of background music on the performance of self-supervised learning models and conclude that the SSL models cannot extract features efficiently in the presence of background music. Finally, we study the out-of-domain generalization ability of the SSL features, considering that these models were not trained on music datasets.

Comment: 4 pages, 2 figures, 3 tables
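
The record does not include code. As a rough illustration of the upstream-SSL plus downstream-decoder setup the abstract describes, the sketch below pairs a frozen wav2vec 2.0 encoder (a contrastive-learning SSL model, loaded here via torchaudio) with a small CTC-style character head for lyric transcription. The specific upstream model, feature dimension, vocabulary size, and head design are illustrative assumptions, not details taken from the paper.

    # Minimal sketch: frozen SSL upstream -> lightweight lyrics-recognition head.
    # Assumptions: wav2vec 2.0 BASE as the upstream, a 32-symbol character
    # vocabulary (including the CTC blank), and the last encoder layer as features.
    import torch
    import torch.nn as nn
    import torchaudio

    bundle = torchaudio.pipelines.WAV2VEC2_BASE        # SSL upstream (contrastive)
    upstream = bundle.get_model().eval()                # used as a frozen extractor

    class CTCLyricsHead(nn.Module):
        """Downstream decoder: SSL frame features -> character log-probabilities."""
        def __init__(self, feat_dim: int, vocab_size: int):
            super().__init__()
            self.proj = nn.Linear(feat_dim, vocab_size)

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            return self.proj(feats).log_softmax(dim=-1)  # (batch, time, vocab)

    head = CTCLyricsHead(feat_dim=768, vocab_size=32)

    waveform = torch.randn(1, bundle.sample_rate * 3)    # stand-in for a 3 s vocal clip
    with torch.no_grad():
        layers, _ = upstream.extract_features(waveform)  # per-layer SSL representations
    logits = head(layers[-1])                            # decode from the last layer
    print(logits.shape)                                  # e.g. torch.Size([1, T, 32])

In practice the head would be trained with a CTC loss on singing data (e.g. DAMP vocals), and different upstream models or layers could be swapped in to compare the SSL training objectives the abstract lists.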

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2209.12702
Document Type :
Working Paper