
Leveraging Self-Supervised Learning for Speaker Diarization

Authors :
Han, Jiangyu
Landini, Federico
Rohdin, Johan
Silnova, Anna
Diez, Mireia
Burget, Lukas
Publication Year :
2024

Abstract

End-to-end neural diarization has evolved considerably over the past few years, but data scarcity remains a major obstacle to further improvements. Self-supervised learning methods such as WavLM have shown promising performance on several downstream tasks, but their application to speaker diarization is somewhat limited. In this work, we explore using WavLM to alleviate the problem of data scarcity for neural diarization training. We use the same pipeline as Pyannote and improve the local end-to-end neural diarization with WavLM and Conformer. Experiments on the far-field AMI, AISHELL-4, and AliMeeting datasets show that our method substantially outperforms the Pyannote baseline and achieves new state-of-the-art results on AMI and AISHELL-4. In addition, by analyzing system performance under different data-quantity scenarios, we show that WavLM representations are much more robust against data scarcity than filterbank features, enabling less data-hungry training strategies. Furthermore, we find that simulated data, usually used to train end-to-end diarization models, does not help in our experiments when using WavLM. Additionally, we evaluate our model on the recent CHiME-8 NOTSOFAR-1 task, where it achieves better performance than the Pyannote baseline. Our source code is publicly available at https://github.com/BUTSpeechFIT/DiariZen.

Comment: Submitted to ICASSP 2025; new results are updated but the conclusions are exactly the same as in the original version.
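To make the described architecture concrete, below is a minimal, hedged sketch of a local end-to-end diarization block that feeds WavLM representations into a Conformer encoder and predicts per-frame speaker activities. This is not the authors' DiariZen code; the frozen front-end, learned layer-weight averaging, hyperparameters, and the fixed maximum number of speakers are illustrative assumptions, using only standard Hugging Face transformers and torchaudio APIs.

```python
# Illustrative sketch (assumptions noted above), not the DiariZen implementation.
import torch
import torch.nn as nn
from transformers import WavLMModel
from torchaudio.models import Conformer


class WavLMConformerEEND(nn.Module):
    def __init__(self, max_speakers: int = 4,
                 wavlm_name: str = "microsoft/wavlm-base-plus"):
        super().__init__()
        # SSL front-end; kept frozen here purely for illustration.
        self.wavlm = WavLMModel.from_pretrained(wavlm_name, output_hidden_states=True)
        self.wavlm.requires_grad_(False)
        num_layers = self.wavlm.config.num_hidden_layers + 1
        # Learned weights for combining WavLM layer outputs (an assumption).
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        dim = self.wavlm.config.hidden_size
        # Conformer encoder over the SSL features; sizes are placeholders.
        self.encoder = Conformer(
            input_dim=dim, num_heads=4, ffn_dim=1024,
            num_layers=4, depthwise_conv_kernel_size=31,
        )
        # Per-frame speaker-activity head.
        self.head = nn.Linear(dim, max_speakers)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of 16 kHz audio.
        hidden = self.wavlm(waveform).hidden_states          # tuple of (B, T, D)
        stacked = torch.stack(hidden, dim=0)                 # (L, B, T, D)
        weights = torch.softmax(self.layer_weights, dim=0)
        feats = (weights[:, None, None, None] * stacked).sum(dim=0)
        lengths = torch.full((feats.size(0),), feats.size(1), dtype=torch.int64)
        enc, _ = self.encoder(feats, lengths)
        return torch.sigmoid(self.head(enc))                 # (B, T, max_speakers)


if __name__ == "__main__":
    model = WavLMConformerEEND()
    audio = torch.randn(1, 16000 * 5)  # 5 seconds of dummy 16 kHz audio
    print(model(audio).shape)          # per-frame speaker activity probabilities
```

In the Pyannote-style pipeline referenced in the abstract, such a local model would be applied to short windows and its outputs combined by speaker-embedding clustering across windows; that stage is omitted here for brevity.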

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.09408
Document Type :
Working Paper