
On Global Linear Convergence in Stochastic Nonconvex Optimization for Semidefinite Programming.

Authors :
Zeng, Jinshan
Ma, Ke
Yao, Yuan
Source :
IEEE Transactions on Signal Processing; 8/15/2019, Vol. 67 Issue 16, p4261-4275, 15p
Publication Year :
2019

Abstract

Nonconvex reformulations via low-rank factorization of stochastic convex semidefinite optimization problems have attracted increasing attention due to their empirical efficiency and scalability. Compared with the original convex formulations, the nonconvex ones typically involve far fewer variables, allowing them to scale to scenarios with millions of variables. Despite their empirical success in applications, however, this raises a new question: under what conditions can nonconvex stochastic algorithms find the population minimizer to within the optimal statistical precision? In this paper, we show that the stochastic gradient descent (SGD) method can be adapted to solve the nonconvex reformulation of the original convex problem with global linear convergence when a fixed step size is used, i.e., it converges exponentially fast to the population minimizer within the optimal statistical precision in the restricted strongly convex case. If a diminishing step size is adopted, the adverse effect of gradient variance on the optimization error is eliminated, but the convergence rate drops to sublinear. The core of our treatment relies on a novel second-order descent lemma, which is more general than the best existing result in the literature and improves the analysis of both online and batch algorithms. The theoretical results and the effectiveness of the proposed SGD method are also verified by a series of experiments. [ABSTRACT FROM AUTHOR]
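To illustrate the kind of scheme the abstract describes, below is a minimal sketch of SGD applied to a low-rank (Burer-Monteiro style) factorization X = UU^T of a stochastic semidefinite problem. The squared-loss measurement model, variable names, step-size choices, and all parameters are illustrative assumptions for the sketch, not the paper's actual formulation or algorithm.

```python
import numpy as np

def sgd_low_rank_sdp(A_list, b, r, eta=1e-3, epochs=50, seed=0,
                     diminishing=False):
    """Hypothetical sketch: SGD on the factorized problem
       min_U  E_i [ (<A_i, U U^T> - b_i)^2 ],
    a stand-in for the stochastic convex SDP discussed in the abstract.
    The loss model and step-size rules are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    n = A_list[0].shape[0]
    U = 0.1 * rng.standard_normal((n, r))     # factorized variable, X = U U^T
    m = len(A_list)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(m):          # sample one measurement per step
            t += 1
            # residual = <A_i, U U^T> - b_i
            residual = np.sum(A_list[i] * (U @ U.T)) - b[i]
            # gradient of the single-sample loss w.r.t. U
            grad = 2.0 * residual * (A_list[i] + A_list[i].T) @ U
            # fixed step size (linear convergence regime in the abstract)
            # vs. diminishing step size (sublinear regime)
            step = eta / np.sqrt(t) if diminishing else eta
            U -= step * grad
    return U
```

With a fixed step size `eta`, the iterates in this sketch would converge quickly but only to a neighborhood of the minimizer whose size depends on the gradient variance, matching the trade-off the abstract describes; switching `diminishing=True` trades that residual error for a slower, sublinear rate.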

Details

Language :
English
ISSN :
1053-587X
Volume :
67
Issue :
16
Database :
Complementary Index
Journal :
IEEE Transactions on Signal Processing
Publication Type :
Academic Journal
Accession number :
138231967
Full Text :
https://doi.org/10.1109/TSP.2019.2925609