301. Learning deep representations by multilayer bootstrap networks for speaker diarization
- Author
- Li, Meng-Zhen and Zhang, Xiao-Lei
- Subjects
- ComputingMethodologies_PATTERNRECOGNITION, Audio and Speech Processing (eess.AS), FOS: Electrical engineering, electronic engineering, information engineering, Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
- The performance of speaker diarization is strongly affected by its clustering algorithm at the test stage. However, it is known that clustering algorithms are sensitive to random noise and small variations, particularly when the clustering algorithms themselves suffer from weaknesses such as bad local minima and prior assumptions. To deal with this problem, a compact representation of speech segments with small within-class variances and large between-class distances is usually needed. In this paper, we apply an unsupervised deep model, named the multilayer bootstrap network (MBN), to further process the embedding vectors of speech segments to address the above problem. MBN is an unsupervised deep model for nonlinear dimensionality reduction. Unlike traditional neural-network-based deep models, it is a stack of $k$-centroids clustering ensembles, each of which is trained simply by random resampling of data and one-nearest-neighbor optimization. We construct speaker diarization systems by combining MBN with either the i-vector frontend or the x-vector frontend, and evaluate their effectiveness on a simulated NIST diarization dataset, the AMI meeting corpus, and the NIST SRE 2000 CALLHOME database. Experimental results show that the proposed systems are better than or at least comparable to the systems that do not use MBN., Comment: 5 pages, 4 figures, conference
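The abstract describes each MBN layer as an ensemble of $k$-centroids clusterings built by random resampling and one-nearest-neighbor optimization. The following is a minimal sketch of one possible reading of that construction; the parameter names (`k_per_layer`, `n_clusterings`) and the specific layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a multilayer bootstrap network (MBN) layer stack, assuming each
# layer concatenates the one-hot outputs of several k-centroids clusterings
# whose centroids are drawn by random resampling of the layer input.
import numpy as np

def mbn_layer(X, k, n_clusterings, rng):
    """Encode X (n_samples x dim) with an ensemble of k-centroids clusterings."""
    codes = []
    for _ in range(n_clusterings):
        # Random resampling: pick k data points as centroids.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        # One-nearest-neighbor optimization: 1-of-k (one-hot) encoding.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        codes.append(np.eye(k)[d.argmin(axis=1)])
    return np.hstack(codes)  # sparse layer output

def mbn(X, k_per_layer=(64, 32, 16), n_clusterings=20, seed=0):
    """Stack MBN layers; k shrinks with depth (assumed schedule)."""
    rng = np.random.default_rng(seed)
    H = X
    for k in k_per_layer:
        H = mbn_layer(H, k, n_clusterings, rng)
    return H

if __name__ == "__main__":
    # Toy stand-ins for i-vector/x-vector segment embeddings.
    X = np.random.default_rng(1).normal(size=(200, 128))
    print(mbn(X).shape)
```

In this reading, the high-dimensional sparse output of the top layer would then be reduced (e.g. by PCA) and passed to the diarization clustering stage in place of the raw i-vectors or x-vectors.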
- Published
- 2019