1. Optimal Primal-Dual Algorithm with Last Iterate Convergence Guarantees for Stochastic Convex Optimization Problems
- Author
- Boob, Digvijay and Khalafi, Mohammad
- Subjects
- Mathematics - Optimization and Control
- Abstract
This paper proposes a novel first-order algorithm for composite nonsmooth and stochastic convex optimization problems with function constraints. Most works in the literature provide convergence-rate guarantees for the average iterate. There is growing interest in guarantees for the last iterate because of its favorable structural properties, such as sparsity or privacy, and its good performance in practice. We provide the first method that attains the best-known convergence rates on the last iterate for stochastic composite nonsmooth convex function-constrained optimization problems. Our novel, easy-to-implement algorithm is based on the augmented Lagrangian technique and uses a new linearized approximation of the constraint functions, which gives it its name: the Augmented Constraint Extrapolation (Aug-ConEx) method. We show that Aug-ConEx achieves an $\mathcal{O}(1/\sqrt{K})$ convergence rate in the nonsmooth stochastic setting without any strong convexity assumption, and $\mathcal{O}(1/K)$ for the same problem with a strongly convex objective function. While optimal for nonsmooth and stochastic problems, the Aug-ConEx method also accelerates convergence in terms of the Lipschitz smoothness constants, to $\mathcal{O}(1/K)$ and $\mathcal{O}(1/K^2)$ in the aforementioned cases, respectively. To the best of our knowledge, this is the first method to obtain such differentiated convergence-rate guarantees on the last iterate for a composite nonsmooth stochastic setting without additional $\log{K}$ factors. We validate the efficiency of our algorithm by comparing it with a state-of-the-art algorithm through numerical experiments.
- Comment
- 26 pages, 3 figures
- Published
- 2024
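To give a feel for the class of methods the abstract describes, the sketch below runs a generic stochastic primal-dual loop with an augmented Lagrangian and a linearized (extrapolated) constraint on a toy problem. This is *not* the paper's Aug-ConEx method; the problem data (`c`, `a`, `b`), step sizes, and penalty are hypothetical choices for illustration only.

```python
import numpy as np

# Toy problem: minimize ||x - c||^2 subject to a.x - b <= 0.
# Generic primal-dual sketch with constraint extrapolation (hypothetical
# parameters; not the Aug-ConEx algorithm from the paper).

rng = np.random.default_rng(0)
c = np.array([2.0, 2.0])      # unconstrained minimizer
a = np.array([1.0, 1.0])      # constraint normal
b = 1.0                       # feasible set: x1 + x2 <= 1

def g(x):                     # scalar constraint value
    return a @ x - b

x_prev = x = np.zeros(2)
y = 0.0                       # dual multiplier (nonnegative)
rho = 1.0                     # augmented-Lagrangian penalty weight
eta, tau = 0.05, 0.05         # primal / dual step sizes

for k in range(2000):
    # constraint extrapolation: linear prediction from the last two iterates
    g_tilde = g(x) + (g(x) - g(x_prev))
    y = max(0.0, y + tau * g_tilde)            # projected dual ascent
    # stochastic gradient of the augmented Lagrangian in x
    noise = 0.01 * rng.standard_normal(2)
    grad = 2 * (x - c) + noise + (y + rho * max(0.0, g(x))) * a
    x_prev, x = x, x - eta * grad

# The LAST iterate x should approach the projection of c onto {a.x <= b},
# which for this data is (0.5, 0.5).
print(x, g(x))
```

The point of the sketch is the role of the extrapolated constraint value `g_tilde`: the dual step reacts to a prediction of the next constraint violation rather than the current one, which is the kind of constraint-linearization idea the abstract alludes to, and the loop reports the last iterate rather than an average.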