
Investigating Pre-trained Audio Encoders in the Low-Resource Condition

Authors:
Yang, Hao
Zhao, Jinming
Haffari, Gholamreza
Shareghi, Ehsan
Publication Year:
2023

Abstract

Pre-trained speech encoders have been central to pushing state-of-the-art results across various speech understanding and generation tasks. Nonetheless, the capabilities of these encoders in low-resource settings are yet to be thoroughly explored. To address this, we conduct a comprehensive set of experiments using a representative set of three state-of-the-art encoders (Wav2vec2, WavLM, Whisper) in the low-resource setting across seven speech understanding and generation tasks. We provide various quantitative and qualitative analyses on task performance, convergence speed, and representational properties of the encoders. We observe a connection between the pre-training protocols of these encoders and the way in which they capture information in their internal layers. In particular, we observe that the Whisper encoder exhibits the greatest low-resource capabilities on content-driven tasks in terms of both performance and convergence speed.

Comment: INTERSPEECH 2023
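As context for the layer-wise analysis the abstract describes, the sketch below shows one way to extract per-layer representations from the three encoders via the Hugging Face transformers library. This is not the authors' code: the checkpoint names (base-size variants) and the probing setup are assumptions for illustration only.

import torch
from transformers import AutoFeatureExtractor, AutoModel

# Assumed base-size checkpoints; the paper may use different variants.
CHECKPOINTS = {
    "Wav2vec2": "facebook/wav2vec2-base",
    "WavLM": "microsoft/wavlm-base",
    "Whisper": "openai/whisper-base",
}

def layer_representations(name: str, waveform: torch.Tensor, sr: int = 16000):
    """Return hidden states from every encoder layer for one utterance."""
    extractor = AutoFeatureExtractor.from_pretrained(CHECKPOINTS[name])
    model = AutoModel.from_pretrained(CHECKPOINTS[name]).eval()
    inputs = extractor(waveform.numpy(), sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        if name == "Whisper":
            # Whisper is an encoder-decoder model; probe only its encoder,
            # which consumes log-mel features rather than raw waveforms.
            out = model.encoder(inputs.input_features, output_hidden_states=True)
        else:
            out = model(inputs.input_values, output_hidden_states=True)
    # Tuple of tensors: (embedding output, layer 1, ..., layer N).
    return out.hidden_states

Restricting Whisper to its encoder mirrors the paper's focus on audio encoders; the returned per-layer hidden states could then feed the kind of task-performance and representational analyses the abstract mentions.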

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2305.17733
Document Type:
Working Paper