
A distributed optimisation framework combining natural gradient with Hessian-free for discriminative sequence training.

Authors :
Haider, Adnan
Zhang, Chao
Kreyssig, Florian L.
Woodland, Philip C.
Source :
Neural Networks. Nov 2021, Vol. 143, p537-549. 13p.
Publication Year :
2021

Abstract

This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training that can operate efficiently in a distributed manner. It relies on the linear conjugate gradient (CG) algorithm to combine the natural gradient (NG) method with local curvature information from Hessian-free (HF) optimisation. A solution to a numerical issue in CG allows effective parameter updates to be generated with far fewer CG iterations than usually used (e.g. 5-8 instead of 200). This work also presents a novel preconditioning approach to improve the progress made by individual CG iterations for models with shared parameters. Although applicable to other training losses and model structures, NGHF is investigated in this paper for lattice-based discriminative sequence training of hybrid hidden Markov model acoustic models, using standard recurrent neural network, long short-term memory, and time delay neural network models for output probability calculation. Automatic speech recognition experiments are reported on the multi-genre broadcast data set for a range of different acoustic model types. These experiments show that NGHF achieves larger word error rate reductions than standard stochastic gradient descent or Adam, while requiring orders of magnitude fewer parameter updates.

Highlights:
• Large batch optimisation: NGHF combines natural gradient (NG) and Hessian-free (HF).
• Faster convergence, with each update estimated via an improved conjugate gradient.
• Applied NG, HF, and NGHF to discriminative sequence training for speech recognition.
• NG, HF, and NGHF require orders of magnitude fewer parameter updates than Adam and SGD.
• MPE training with NGHF achieves lower word error rates than with NG, HF, Adam, and SGD.

[ABSTRACT FROM AUTHOR]
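To make the core idea concrete, the sketch below shows a generic truncated linear CG solver of the kind HF-style methods use: the curvature matrix is never formed explicitly, only accessed through matrix-vector products, and the loop is capped at a small, fixed number of iterations (e.g. 5-8, as the abstract suggests). This is an illustrative sketch only, not the authors' NGHF implementation; the function and variable names are hypothetical, and the paper's specific numerical fix, NG combination, and preconditioning are not reproduced here.

```python
import numpy as np

def truncated_cg(Av, g, n_iters=8, tol=1e-10):
    """Approximately solve A p = g by linear conjugate gradient.

    Av      -- function computing the curvature-vector product A @ v
               (the matrix-free access pattern used in Hessian-free
               optimisation); A is assumed symmetric positive definite.
    g       -- right-hand side (e.g. a gradient vector).
    n_iters -- small fixed cap on CG iterations (illustrative).
    """
    p = np.zeros_like(g)   # update direction, initialised at zero
    r = g.copy()           # residual g - A p (p starts at 0)
    d = r.copy()           # current search direction
    rs = r @ r
    for _ in range(n_iters):
        Ad = Av(d)
        alpha = rs / (d @ Ad)      # exact line search along d
        p += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if rs_new < tol:           # residual small enough: stop early
            break
        d = r + (rs_new / rs) * d  # conjugate direction update
        rs = rs_new
    return p

# Toy usage: a small SPD "curvature" matrix accessed only via products.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
g = np.array([1.0, 2.0])
p = truncated_cg(lambda v: A @ v, g)
```

For an n-dimensional SPD system, exact CG terminates in at most n iterations; the point of the truncated variant is that a handful of iterations already yields a useful update direction when only approximate solutions are needed per parameter update.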

Details

Language :
English
ISSN :
0893-6080
Volume :
143
Database :
Academic Search Index
Journal :
Neural Networks
Publication Type :
Academic Journal
Accession number :
152773870
Full Text :
https://doi.org/10.1016/j.neunet.2021.05.011