
Exploring WavLM on Speech Enhancement

Authors:
Song, Hyungchan
Chen, Sanyuan
Chen, Zhuo
Wu, Yu
Yoshioka, Takuya
Tang, Min
Shin, Jong Won
Liu, Shujie
Publication Year: 2022

Abstract

There has been a surge of interest in self-supervised learning approaches to end-to-end speech encoding in recent years, owing to their great success. In particular, WavLM has shown state-of-the-art performance on a variety of speech processing tasks. To better understand the efficacy of self-supervised learning models for speech enhancement, in this work we design and conduct a series of experiments under three resource conditions, combining WavLM with two high-quality speech enhancement systems. We also propose a regression-based WavLM training objective and a noise-mixing data configuration to further boost downstream enhancement performance. Experiments on the DNS challenge dataset and a simulation dataset show that WavLM benefits the speech enhancement task in terms of both speech quality and speech recognition accuracy, especially when fine-tuning resources are scarce. In the high fine-tuning resource condition, only the word error rate is substantially improved.

Comment: Accepted by IEEE SLT 2022
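The abstract names the recipe only at a high level. Below is a minimal PyTorch sketch of the general idea: frozen WavLM features condition a small mask-estimation head that is trained with a regression (L1) loss on paired noisy/clean magnitudes. The checkpoint name (microsoft/wavlm-base-plus, loaded through HuggingFace transformers), the head architecture, and the STFT settings are illustrative assumptions; this is not the paper's actual enhancement systems or its modified pre-training objective.

# Illustrative sketch only: frozen WavLM features -> mask head -> L1 regression.
# Checkpoint, head size, and STFT settings are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import WavLMModel

class WavLMEnhancer(nn.Module):
    def __init__(self, n_fft=512, hop=320):  # hop=320 ~ WavLM's 20 ms frame rate at 16 kHz
        super().__init__()
        self.wavlm = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")
        self.wavlm.requires_grad_(False)  # frozen upstream, as in a low-resource setting
        self.mask_head = nn.Sequential(   # small downstream mask estimator (assumed architecture)
            nn.Linear(self.wavlm.config.hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, n_fft // 2 + 1),
            nn.Sigmoid(),
        )
        self.n_fft, self.hop = n_fft, hop

    def forward(self, noisy_wav):  # noisy_wav: (batch, samples) at 16 kHz, roughly normalized
        feats = self.wavlm(noisy_wav).last_hidden_state   # (B, T', hidden)
        mask = self.mask_head(feats)                      # (B, T', freq)
        spec = torch.stft(noisy_wav, self.n_fft, self.hop,
                          window=torch.hann_window(self.n_fft),
                          return_complex=True)            # (B, freq, T)
        mag = spec.abs().transpose(1, 2)                  # (B, T, freq)
        t = min(mag.size(1), mask.size(1))                # align frame counts
        return mag[:, :t] * mask[:, :t]                   # enhanced magnitude

def regression_loss(model, noisy_wav, clean_wav):
    # L1 regression between enhanced and clean magnitudes, in the spirit of the
    # regression-style objective the abstract mentions (details differ in the paper).
    enhanced = model(noisy_wav)
    clean = torch.stft(clean_wav, model.n_fft, model.hop,
                       window=torch.hann_window(model.n_fft),
                       return_complex=True).abs().transpose(1, 2)
    t = min(enhanced.size(1), clean.size(1))
    return nn.functional.l1_loss(enhanced[:, :t], clean[:, :t])

In practice the upstream would be fine-tuned or partially unfrozen in the higher-resource conditions the abstract describes; freezing WavLM here simply keeps the sketch aligned with the low-resource case, where the paper reports the largest gains.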

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2211.09988
Document Type: Working Paper