Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Source :
- SIAM Journal on Optimization. 2020, Vol. 30, Issue 4, p. 3230-3251. 22p.
- Publication Year :
- 2020
Abstract
- We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extragradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method. This is similar to the approach taken by A. Nemirovski (SIAM J. Optim., 15 (2004), pp. 229-251), which analyzes EG as an approximation of the "conceptual mirror prox" method. In this paper, we highlight how the gradients used in OGDA and EG approximate the gradient of the proximal point method. We then exploit this interpretation to show that both algorithms produce iterates that remain within a bounded set. We further show that the primal-dual gap of the averaged iterates generated by both of these algorithms converges at a rate of O(1/k). Our theoretical analysis is of interest as it provides the first convergence rate estimate for OGDA in the general convex-concave setting. Moreover, it provides a simple convergence analysis for the EG algorithm in terms of function value without requiring a compactness assumption.
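For context, the two methods named in the abstract take the following standard form. Writing z = (x, y) and letting F(z) = (\nabla_x f(x, y), -\nabla_y f(x, y)) denote the gradient operator of the min-max objective f, the proximal point method and its two explicit approximations are, for a step size \eta > 0 (the admissible range of \eta for the O(1/k) rate is derived in the paper and not reproduced here):

    z_{k+1}   = z_k - \eta F(z_{k+1})                                         (proximal point, implicit)
    z_{k+1/2} = z_k - \eta F(z_k), \qquad z_{k+1} = z_k - \eta F(z_{k+1/2})   (EG)
    z_{k+1}   = z_k - \eta \big( 2 F(z_k) - F(z_{k-1}) \big)                  (OGDA)

EG approximates the implicit gradient F(z_{k+1}) by the midpoint gradient F(z_{k+1/2}), while OGDA approximates it by the extrapolation 2F(z_k) - F(z_{k-1}); this is the sense in which both are approximate proximal point iterations. Below is a minimal sketch of both iterations, with running averages of the iterates, on a toy bilinear problem f(x, y) = x^T A y; the matrix A, step size, and iteration count are illustrative choices and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    L = np.linalg.norm(A, 2)              # spectral norm of A = Lipschitz constant of F
    eta = 1.0 / (4.0 * L)                 # conservative step size (illustrative choice)

    def F(z):
        # Gradient operator of f(x, y) = x^T A y: descent block in x, ascent block in y.
        x, y = z[:3], z[3:]
        return np.concatenate([A @ y, -A.T @ x])

    z0 = np.ones(6)

    # --- Extragradient (EG) ---
    z, avg = z0.copy(), np.zeros(6)
    for k in range(1, 2001):
        z_mid = z - eta * F(z)            # extrapolation (midpoint) step
        z = z - eta * F(z_mid)            # update using the midpoint gradient
        avg += (z - avg) / k              # running average of the iterates
    print("EG   averaged iterate:", np.round(avg, 4))

    # --- Optimistic gradient descent-ascent (OGDA) ---
    z, F_prev, avg = z0.copy(), F(z0), np.zeros(6)
    for k in range(1, 2001):
        F_cur = F(z)
        z = z - eta * (2 * F_cur - F_prev)  # 2F(z_k) - F(z_{k-1}) mimics the implicit F(z_{k+1})
        F_prev = F_cur
        avg += (z - avg) / k
    print("OGDA averaged iterate:", np.round(avg, 4))

On this toy problem the unique saddle point is the origin, and the printed averages of both methods approach it; the O(1/k) guarantee in the paper is stated precisely for such averaged iterates.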
- Subjects :
- *SADDLE POINT PROBLEMS (Mathematics)
*ALGORITHMS
*MIRROR PROX METHOD
Details
- Language :
- English
- ISSN :
- 1052-6234
- Volume :
- 30
- Issue :
- 4
- Database :
- Academic Search Index
- Journal :
- SIAM Journal on Optimization
- Publication Type :
- Academic Journal
- Accession Number :
- 148458027
- Full Text :
- https://doi.org/10.1137/19M127375X