
Correspondence between neuroevolution and gradient descent.

Authors :
Whitelam S
Selin V
Park SW
Tamblyn I
Source :
Nature communications [Nat Commun] 2021 Nov 02; Vol. 12 (1), pp. 6317. Date of Electronic Publication: 2021 Nov 02.
Publication Year :
2021

Abstract

We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise. Averaged over independent realizations of the learning process, neuroevolution is equivalent to gradient descent on the loss function. We use numerical simulation to show that this correspondence can be observed for finite mutations, for shallow and deep neural networks. Our results provide a connection between two families of neural-network training methods that are usually considered to be fundamentally different.
(© 2021. The Author(s).)
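To make the abstract's claim concrete, the following is a minimal sketch (not the authors' code) of the two training schemes it compares, applied to a toy quadratic loss. "Neuroevolution" here means conditioned stochastic mutation: propose a Gaussian perturbation of the weights and keep it only if the loss decreases. The loss function, dimensions, step counts, and the heuristic learning-rate choice lr = sigma**2 / 2 are all illustrative assumptions; the paper derives the exact small-mutation relation analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy quadratic loss L(w) = 0.5 * ||w||^2; its gradient is simply w.
    return 0.5 * np.sum(w ** 2)

def neuroevolution_step(w, sigma):
    # Conditioned stochastic mutation: propose w + eps with
    # eps ~ N(0, sigma^2 I), and accept only if the loss decreases.
    # Averaged over realizations, accepted steps point down the gradient.
    candidate = w + sigma * rng.standard_normal(w.shape)
    return candidate if loss(candidate) < loss(w) else w

def gradient_descent_step(w, lr):
    # Plain gradient descent on the same loss (grad = w).
    return w - lr * w

d, steps, sigma = 10, 2000, 1e-2
# Assumed effective learning rate for comparison purposes only; the
# paper's analysis gives the precise mapping between mutation scale
# and learning rate in the small-mutation limit.
lr = sigma ** 2 / 2

w_ne = np.ones(d)
w_gd = np.ones(d)
for _ in range(steps):
    w_ne = neuroevolution_step(w_ne, sigma)
    w_gd = gradient_descent_step(w_gd, lr)

print(f"loss after neuroevolution:   {loss(w_ne):.4f}")
print(f"loss after gradient descent: {loss(w_gd):.4f}")
```

Under these assumptions, both trajectories descend the loss, with the neuroevolution run fluctuating around the gradient-descent path; averaging many independent neuroevolution runs would smooth those fluctuations, consistent with the correspondence described in the abstract.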

Details

Language :
English
ISSN :
2041-1723
Volume :
12
Issue :
1
Database :
MEDLINE
Journal :
Nature communications
Publication Type :
Academic Journal
Accession number :
34728632
Full Text :
https://doi.org/10.1038/s41467-021-26568-2