
Combining Spatial Clustering with LSTM Speech Models for Multichannel Speech Enhancement

Authors:
Grezes, Felix
Ni, Zhaoheng
Trinh, Viet Anh
Mandel, Michael
Publication Year:
2020

Abstract

Recurrent neural networks using the LSTM architecture can achieve significant single-channel noise reduction. It is not obvious, however, how to apply them to multichannel inputs in a way that generalizes to new microphone configurations. In contrast, spatial clustering techniques can achieve such generalization, but lack a strong signal model. This paper combines the two approaches to attain both the spatial separation performance and generality of multichannel spatial clustering and the signal modeling performance of multiple parallel single-channel LSTM speech enhancers. The system is compared to several baselines on the CHiME-3 dataset in terms of speech quality predicted by the PESQ algorithm and word error rate of a recognizer trained on mismatched conditions, in order to focus on generalization. Our experiments show that by combining the LSTM models with the spatial clustering, we reduce word error rate by 4.6% absolute (17.2% relative) on the development set and 11.2% absolute (25.5% relative) on the test set compared with the spatial clustering system, and reduce it by 10.75% absolute (32.72% relative) on the development set and 6.12% absolute (15.76% relative) on the test set compared with the LSTM model.

Comment: arXiv admin note: text overlap with arXiv:2012.01576, arXiv:2012.02191
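The core idea described in the abstract, fusing a time-frequency mask from multichannel spatial clustering with masks from parallel single-channel LSTM enhancers, can be sketched as below. This is a minimal illustration, not the paper's exact method: the masks here are random placeholders standing in for real spatial-clustering and LSTM outputs, the array sizes are hypothetical, and elementwise multiplication is assumed as one simple fusion rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: F frequency bins, T frames, C microphone channels.
F, T, C = 257, 100, 6

# Placeholder time-frequency masks in [0, 1]:
# - spatial_mask: a single mask from multichannel spatial clustering
# - lstm_masks:   one mask per channel from parallel single-channel LSTM enhancers
spatial_mask = rng.uniform(size=(F, T))
lstm_masks = rng.uniform(size=(C, F, T))

# Average the per-channel LSTM masks into one single-channel-style mask,
# then fuse with the spatial mask (elementwise product is an assumption
# for illustration; the paper's actual combination may differ).
lstm_mask = lstm_masks.mean(axis=0)
combined_mask = spatial_mask * lstm_mask

# Apply the combined mask to a placeholder mixture STFT of a reference channel.
mixture_stft = rng.normal(size=(F, T)) + 1j * rng.normal(size=(F, T))
enhanced_stft = combined_mask * mixture_stft

print(combined_mask.shape)
```

Because both inputs are time-frequency masks of the same shape, this fusion stays agnostic to the microphone configuration, which is the generalization property the abstract emphasizes.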

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2012.03388
Document Type:
Working Paper