LOSSGRAD: automatic learning rate in gradient descent
- Source: Schedae Informaticae, 2018, Volume 27
- Publication Year: 2019
Abstract
- In this paper, we propose LOSSGRAD (locally optimal step-size in gradient descent), a simple, fast, and easy-to-implement algorithm that automatically adapts the step-size in gradient descent during neural network training. Given a function $f$, a point $x$, and the gradient $\nabla_x f$ of $f$ at $x$, we aim to find the step-size $h$ that is (locally) optimal, i.e. satisfies $$ h = \arg\min_{t \geq 0} f(x - t \nabla_x f). $$ Making use of a quadratic approximation, we show that the algorithm satisfies the above condition. We experimentally show that our method is insensitive to the choice of the initial learning rate while achieving results comparable to other methods.
- Comment: TFML 2019
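As a rough illustration of the idea described in the abstract, the sketch below re-estimates the step-size at every iteration via a one-point quadratic interpolation along the negative gradient. This is only one plausible reading of "making use of quadratic approximation"; the function names (`quadratic_step_size`, `gradient_descent_lossgrad_like`) and the fallback growth rule are assumptions of this sketch, not the authors' exact LOSSGRAD procedure.

```python
import numpy as np

def quadratic_step_size(f, x, grad, h_prev, growth=2.0):
    """Estimate a locally optimal step-size along the negative gradient.

    Fits a quadratic q(t) ~= f(x - t*grad) from f(x), the directional
    derivative at t = 0, and one trial evaluation at t = h_prev, then
    returns the minimizer of that quadratic (an assumption of this sketch,
    not necessarily the published algorithm).
    """
    g_norm_sq = float(np.dot(grad, grad))
    f0 = f(x)                        # q(0)
    slope0 = -g_norm_sq              # q'(0) = d/dt f(x - t*grad) at t = 0
    f_trial = f(x - h_prev * grad)   # one probe along the descent ray

    # Quadratic fit q(t) = a*t^2 + b*t + c through the data above.
    c, b = f0, slope0
    a = (f_trial - c - b * h_prev) / (h_prev ** 2)

    if a > 0:
        return -b / (2.0 * a)        # minimizer of the convex quadratic
    # Model is not convex along this ray: grow the step instead.
    return growth * h_prev


def gradient_descent_lossgrad_like(f, grad_f, x0, h0=0.1, steps=100):
    """Gradient descent with the step-size re-estimated at every iteration."""
    x, h = np.asarray(x0, dtype=float), h0
    for _ in range(steps):
        g = grad_f(x)
        h = max(quadratic_step_size(f, x, g, h), 1e-12)  # keep h positive
        x = x - h * g
    return x


if __name__ == "__main__":
    # Toy usage: minimize a simple quadratic; the optimum is x = (1, -2).
    A = np.diag([1.0, 10.0])
    b = np.array([1.0, -2.0])
    f = lambda x: 0.5 * (x - b) @ A @ (x - b)
    grad_f = lambda x: A @ (x - b)
    print(gradient_descent_lossgrad_like(f, grad_f, x0=[5.0, 5.0]))
```

On a quadratic objective the fitted model is exact, so each update lands on the line minimum; the insensitivity to the initial step `h0` in this toy setup mirrors the abstract's claim about the initial learning rate.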
Details
- Database: arXiv
- Journal: Schedae Informaticae, 2018, Volume 27
- Publication Type: Report
- Accession number: edsarx.1902.07656
- Document Type: Working Paper
- Full Text: https://doi.org/10.4467/20838476SI.18.004.10409