Local Quadratic Convergence of Stochastic Gradient Descent with Adaptive Step Size
- Publication Year :
- 2021
Abstract
- Establishing a fast rate of convergence for optimization methods is crucial to their applicability in practice. With the increasing popularity of deep learning over the past decade, stochastic gradient descent and its adaptive variants (e.g., Adagrad, Adam) have become prominent methods of choice for machine learning practitioners. While a large number of works have demonstrated that these first-order optimization methods can achieve sub-linear or linear convergence, we establish local quadratic convergence for stochastic gradient descent with adaptive step size for problems such as matrix inversion.
Comment: ICML 2021 Workshop on Beyond first-order methods in ML systems
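The abstract's setting can be illustrated with a small sketch: gradient descent with an adaptive (Polyak-style) step size on the matrix-inversion objective f(X) = ||AX - I||_F^2, whose minimizer is A^{-1}. This is only a hedged illustration of the general idea, not the paper's algorithm or its stochastic variant; the function name, the initialization, and the step-size rule below are assumptions chosen for the example.

```python
import numpy as np

def adaptive_gd_inverse(A, num_iters=200):
    """Illustrative only: gradient descent with an adaptive Polyak-type
    step size on f(X) = ||A X - I||_F^2 (minimized at X = inv(A)).
    This is NOT the paper's exact method, just a sketch of the setting."""
    n = A.shape[0]
    I = np.eye(n)
    # Classical safe initialization, also used for Newton-Schulz iterations
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(num_iters):
        R = A @ X - I                # residual A X - I
        G = 2 * A.T @ R              # gradient of f at X
        g2 = np.sum(G * G)
        if g2 < 1e-30:               # gradient vanished: converged
            break
        # Polyak step: (f(X) - f*) / ||grad f(X)||^2, with f* = 0 here
        eta = np.sum(R * R) / g2
        X = X - eta * G
    return X

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
X = adaptive_gd_inverse(A)
# X approximates inv(A); the residual ||A X - I|| shrinks with each step
```

The Polyak rule uses the known optimal value f* = 0 to set the step size from the current residual, which is one simple way to make the step size adapt to the local geometry of the objective.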
- Subjects :
- Mathematics - Optimization and Control
- Computer Science - Machine Learning
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2112.14872
- Document Type :
- Working Paper