Convergence results of a nested decentralized gradient method for non-strongly convex problems
- Publication Year :
- 2021
Abstract
- We are concerned with the convergence of the NEAR-DGD$^+$ (Nested Exact Alternating Recursion Distributed Gradient Descent) method, introduced to solve distributed optimization problems. Under the assumptions that the local objective functions are strongly convex and their gradients are Lipschitz continuous, linear convergence was established in \cite{BBKW - Near DGD}. In this paper, we investigate the convergence properties of NEAR-DGD$^+$ in the absence of strong convexity. More precisely, we establish convergence results in the following two cases: (1) when only convexity is assumed on the objective function, and (2) when the objective function is a composition of a strongly convex function and a rank-deficient matrix, which falls into the class of convex and quasi-strongly convex functions. Numerical results are provided to support the convergence results.
Comment: 28 pages. Typos were fixed and new convergence results were added for the case when the communication number is constant. To appear in J. Optim. Theory Appl.
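The record contains no code, but as a rough illustration of the recursion the abstract describes, the following is a minimal sketch of a nested decentralized gradient step: each agent takes a local gradient step, then the network performs $t(k)$ consensus (communication) rounds at iteration $k$, with $t(k)=k$ assumed for NEAR-DGD$^+$. The quadratic test problem, step size, and fully connected mixing matrix below are hypothetical and for illustration only, not the paper's experimental setup.

```python
import numpy as np

def near_dgd_plus(grads, W, x0, alpha, iters):
    """Sketch of the NEAR-DGD^+ recursion (assumed form):
        y_k     = x_k - alpha * grad f(x_k)      (local computation step)
        x_{k+1} = W^k y_k                         (k nested consensus rounds)
    grads[i](x) returns the gradient of agent i's local objective f_i;
    W is a doubly stochastic mixing matrix; x0 has one row per agent."""
    x = x0.copy()
    n = len(grads)
    for k in range(1, iters + 1):
        # local gradient step at every agent
        y = x - alpha * np.vstack([grads[i](x[i]) for i in range(n)])
        # nested communication: k rounds of averaging at iteration k
        x = np.linalg.matrix_power(W, k) @ y
    # average of the agents' iterates as the consensus estimate
    return x.mean(axis=0)

if __name__ == "__main__":
    # Illustrative convex (least-squares) local objectives f_i(x) = 0.5||A_i x - b_i||^2
    rng = np.random.default_rng(0)
    n, d = 4, 3
    A = rng.normal(size=(n, d, d))
    b = rng.normal(size=(n, d))
    grads = [lambda x, Ai=A[i], bi=b[i]: Ai.T @ (Ai @ x - bi) for i in range(n)]
    W = np.full((n, n), 1.0 / n)   # fully connected averaging matrix (illustrative)
    x0 = np.zeros((n, d))
    print(near_dgd_plus(grads, W, x0, alpha=0.05, iters=30))
```

In this sketch the growing exponent $t(k)=k$ is what distinguishes NEAR-DGD$^+$ from a fixed communication budget; the constant-communication case mentioned in the comment would correspond to replacing `k` with a fixed `t` in the `matrix_power` call.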
- Subjects :
- Mathematics - Optimization and Control
- MSC classes: 90C25, 68Q25
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2108.02129
- Document Type :
- Working Paper