
Large-scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification

Authors:
Chen, Zhengyang
Chen, Sanyuan
Wu, Yu
Qian, Yao
Wang, Chengyi
Liu, Shujie
Qian, Yanmin
Zeng, Michael
Publication Year:
2021
Publisher:
arXiv, 2021.

Abstract

The speech representations learned from large-scale unlabeled data have shown better generalizability than those from supervised learning and have therefore attracted considerable interest for various downstream tasks. In this paper, we explore the limits of speech representations learned with different self-supervised objectives and datasets for automatic speaker verification (ASV), using a well-recognized state-of-the-art ASV model, ECAPA-TDNN [1], as the downstream model. The representations from all hidden layers of the pre-trained model are first averaged with learnable weights and then fed into the ECAPA-TDNN as input features. Experimental results on the VoxCeleb dataset show that the weighted-average representation is significantly superior to FBank, a conventional handcrafted feature for ASV. Our best single system achieves 0.537%, 0.569%, and 1.180% equal error rate (EER) on the three official VoxCeleb1 trials, respectively. An ensemble system combining three pre-trained models further improves the EERs to 0.479%, 0.536%, and 1.023%. Among the three evaluation trials, our best system outperforms the winning system [2] of the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC2021) on the VoxCeleb1-E trial.

Comment: Accepted by ICASSP 2022
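The layer-combination step described in the abstract can be sketched as follows. This is a minimal, framework-free illustration (not the paper's actual implementation, which would use a deep-learning framework with trainable parameters): scalar weights, one per hidden layer, are softmax-normalized and used to form a weighted sum of the per-frame layer representations before they are passed to the downstream ASV model. The function name and the toy dimensions are illustrative assumptions.

```python
import math

def weighted_average(layer_outputs, weights):
    """Softmax-normalize the per-layer scalar weights (learnable in
    practice), then take a weighted sum of the hidden representations.

    layer_outputs: list of L layers, each a list of T frame vectors.
    weights: list of L scalars, one per hidden layer.
    Returns: T frame vectors, the weighted combination across layers.
    """
    # Softmax over the layer weights so they form a convex combination.
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    alphas = [e / total for e in exps]

    num_frames = len(layer_outputs[0])
    dim = len(layer_outputs[0][0])
    out = [[0.0] * dim for _ in range(num_frames)]
    for alpha, layer in zip(alphas, layer_outputs):
        for t in range(num_frames):
            for d in range(dim):
                out[t][d] += alpha * layer[t][d]
    return out

# Toy example: two layers, two frames, 2-dim features.
# Equal weights reduce to a plain mean over layers.
layers = [
    [[1.0, 2.0], [3.0, 4.0]],   # hidden layer 1
    [[3.0, 4.0], [5.0, 6.0]],   # hidden layer 2
]
avg = weighted_average(layers, [0.0, 0.0])
print(avg)  # [[2.0, 3.0], [4.0, 5.0]]
```

In the paper's setup, the resulting weighted-average features take the place of FBank as the input to ECAPA-TDNN, and the layer weights are trained jointly with the downstream model.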

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....f8012a6d173f472913f8f0498a9a11e8
Full Text:
https://doi.org/10.48550/arxiv.2110.05777