
Fast speaker adaptation using extended diagonal linear transformation for deep neural networks.

Authors :
Kim, Donghyun
Kim, Sanghun
Source :
ETRI Journal; Feb 2019, Vol. 41, Issue 1, p109-116, 8p
Publication Year :
2019

Abstract

This paper explores new techniques based on a hidden-layer linear transformation for fast speaker adaptation in deep neural networks (DNNs). Conventional methods that use affine transformations are poorly suited to fast adaptation because they require a relatively large number of adaptation parameters. Methods that employ singular-value decomposition (SVD) are effective at reducing the number of adaptation parameters, but the matrix decomposition is computationally expensive for online services. We propose an extended diagonal linear transformation method that minimizes the number of adaptation parameters without SVD and increases performance on tasks that require smaller amounts of adaptation data. In Korean large-vocabulary continuous speech recognition (LVCSR) tasks, the proposed method shows significant improvements, with error-reduction rates of 8.4% and 17.1% for adaptation with five and 50 conversational sentences, respectively. Compared with SVD-based adaptation methods, it achieves higher recognition performance with fewer parameters.
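As a rough illustration of why a diagonal (per-unit) transform needs far fewer adaptation parameters than a full affine layer, the sketch below compares the two on a single hidden layer. The layer size, variable names, and NumPy setup are illustrative assumptions only; this does not reproduce the paper's extended variant or its training procedure.

```python
import numpy as np

# Hypothetical hidden-layer size, chosen only for illustration.
HIDDEN_DIM = 512

rng = np.random.default_rng(0)

# Placeholder speaker-independent activations from some DNN hidden layer
# (a small batch of frames).
h = rng.standard_normal((4, HIDDEN_DIM))

# Full affine adaptation layer: a dense matrix W plus bias b
# -> roughly HIDDEN_DIM**2 + HIDDEN_DIM parameters per adapted layer.
W_full = np.eye(HIDDEN_DIM)
b_full = np.zeros(HIDDEN_DIM)
h_affine = h @ W_full + b_full

# Diagonal linear adaptation: one scale and one bias per hidden unit
# -> only 2 * HIDDEN_DIM parameters per adapted layer.
scale = np.ones(HIDDEN_DIM)   # identity initialization (no change to activations)
bias = np.zeros(HIDDEN_DIM)
h_diag = h * scale + bias

print("full affine adaptation parameters:", W_full.size + b_full.size)
print("diagonal adaptation parameters:   ", scale.size + bias.size)
```

With identity initialization, both transforms leave the activations unchanged, so adaptation can start from the speaker-independent model and only the small per-unit scale and bias vectors need to be estimated from a few sentences of speaker data.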

Details

Language :
English
ISSN :
12256463
Volume :
41
Issue :
1
Database :
Complementary Index
Journal :
ETRI Journal
Publication Type :
Academic Journal
Accession number :
134664986
Full Text :
https://doi.org/10.4218/etrij.2017-0087