Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks.
- Source :
- IEEE Transactions on Cybernetics; Dec2022, Vol. 52 Issue 12, Part 1, p13250-13261, 12p
- Publication Year :
- 2022
Abstract
- This article deals with nonconvex stochastic optimization problems in deep learning. Theoretically grounded learning rates are provided for adaptive-learning-rate optimization algorithms (e.g., Adam and AMSGrad) to approximate the stationary points of such problems. These rates are shown to allow faster convergence than previously reported for these algorithms. In numerical experiments on text and image classification, the algorithms are shown to perform better with constant learning rates than with diminishing learning rates. [ABSTRACT FROM AUTHOR]
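The AMSGrad update discussed in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name, hyperparameter defaults, and the quadratic test problem are assumptions chosen for demonstration. AMSGrad differs from Adam only in keeping a running maximum of the second-moment estimate, and the constant learning rate `alpha` reflects the constant-rate setting the abstract reports as performing better.

```python
import numpy as np

def amsgrad(grad, x0, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Illustrative AMSGrad with a constant learning rate alpha (not the paper's code)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)      # first-moment (mean) estimate
    v = np.zeros_like(x)      # second-moment estimate
    v_hat = np.zeros_like(x)  # running max of v -- the AMSGrad correction to Adam
    for _ in range(steps):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)  # keeps the effective step size non-increasing
        x = x - alpha * m / (np.sqrt(v_hat) + eps)
    return x

# Hypothetical usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x_star = amsgrad(lambda x: 2 * x, x0=[3.0, -2.0])
```

Replacing `v_hat` with a bias-corrected `v` recovers the standard Adam update; the `np.maximum` step is what restores AMSGrad's convergence guarantee.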
Details
- Language :
- English
- ISSN :
- 2168-2267
- Volume :
- 52
- Issue :
- 12, Part 1
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Cybernetics
- Publication Type :
- Academic Journal
- Accession number :
- 160690675
- Full Text :
- https://doi.org/10.1109/TCYB.2021.3107415