1. Gradient descent with adaptive stepsize converges (nearly) linearly under fourth-order growth
- Authors
Davis, Damek; Drusvyatskiy, Dmitriy; Jiang, Liwei
- Subjects
Mathematics - Optimization and Control; Computer Science - Machine Learning; 65K05, 65K10, 90C30, 90C06
- Abstract
A prevalent belief among optimization specialists is that linear convergence of gradient descent is contingent on the function growing quadratically away from its minimizers. In this work, we argue that this belief is inaccurate. We show that gradient descent with an adaptive stepsize converges at a local (nearly) linear rate on any smooth function that merely exhibits fourth-order growth away from its minimizer. The adaptive stepsize we propose arises from an intriguing decomposition theorem: any such function admits a smooth manifold around the optimal solution -- which we call the ravine -- so that the function grows at least quadratically away from the ravine and has constant order growth along it. The ravine allows one to interlace many short gradient steps with a single long Polyak gradient step, which together ensure rapid convergence to the minimizer. We illustrate the theory and algorithm on the problems of matrix sensing and factorization and learning a single neuron in the overparameterized regime.
- Comment
58 pages, 5 figures
- Published
2024
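
The interlacing scheme mentioned in the abstract -- many short gradient steps followed by a single long Polyak step -- can be sketched on a toy objective with fourth-order growth. The snippet below is a minimal illustration, not the paper's algorithm: the quartic test function f(x) = ||x||^4 / 4, the fixed short stepsize, the inner/outer iteration counts, and the use of the known optimal value f_min = 0 in the Polyak step are all assumptions made for illustration; the authors' adaptive stepsize, which arises from the ravine decomposition, is not reproduced here.

```python
import numpy as np

def f(x):
    # Toy objective with fourth-order growth at its minimizer x* = 0
    # (assumption for illustration): f(x) = ||x||^4 / 4.
    return 0.25 * np.dot(x, x) ** 2

def grad_f(x):
    # Gradient of the toy objective: grad f(x) = ||x||^2 * x.
    return np.dot(x, x) * x

def interlaced_gd(x0, f_min=0.0, outer_iters=50, inner_iters=10, short_step=1e-2):
    """Hypothetical sketch of the interlacing idea: several short,
    conservative gradient steps, then one long Polyak step
    x <- x - (f(x) - f_min) / ||grad f(x)||^2 * grad f(x).
    All constants here are illustrative choices, not the paper's."""
    x = np.asarray(x0, dtype=float)
    history = [f(x)]
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Short gradient steps with a small fixed stepsize.
            x = x - short_step * grad_f(x)
        g = grad_f(x)
        gnorm2 = np.dot(g, g)
        if gnorm2 > 1e-30:
            # One long Polyak step using the (assumed known) minimum value.
            x = x - (f(x) - f_min) / gnorm2 * g
        history.append(f(x))
    return x, history

if __name__ == "__main__":
    x, hist = interlaced_gd(np.array([1.0, -2.0]))
    # On this quartic toy problem the function values shrink by a roughly
    # constant factor per outer iteration, i.e. (nearly) linear convergence,
    # even though the objective lacks quadratic growth at the minimizer.
    print(hist[:5], hist[-1])
```

On this one-parameter family the Polyak step alone contracts the iterate by a constant factor (here x <- 0.75 x), which is what makes the geometric decrease visible; plain gradient descent with a fixed stepsize on the same quartic would only converge sublinearly.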