1. Convergence result for the gradient-push algorithm and its application to boost up the Push-DIGing algorithm
- Authors
Choi, Hyogi; Choi, Woocheol; Kim, Gwangil
- Subjects
Mathematics - Optimization and Control; Electrical Engineering and Systems Science - Systems and Control
- Abstract
The gradient-push algorithm is a fundamental algorithm for the distributed optimization problem \begin{equation} \min_{x \in \mathbb{R}^d} f(x) = \sum_{j=1}^n f_j (x), \end{equation} where each local cost $f_i$ is known only to agent $a_i$ for $1 \leq i \leq n$ and the agents are connected by a directed graph. In this paper, we obtain convergence results for the gradient-push algorithm with a constant stepsize whose range is sharp in terms of the order of the smoothness constant $L>0$. Precisely, under either of the two settings: 1) each local cost $f_i$ is strongly convex and $L$-smooth, 2) each local cost $f_i$ is convex quadratic and $L$-smooth while the aggregate cost $f$ is strongly convex, we show that the gradient-push algorithm with stepsize $\alpha>0$ converges to an $O(\alpha)$-neighborhood of the minimizer of $f$ for any $\alpha \in (0, c/L]$, where $c>0$ is a constant independent of $L$. As a benefit of this result, we suggest a hybrid algorithm that runs the gradient-push algorithm with a relatively large stepsize $\alpha>0$ for a number of iterations and then switches to the Push-DIGing algorithm. Numerical tests verify that the hybrid algorithm significantly enhances the performance of the Push-DIGing algorithm. The convergence results for the gradient-push algorithm are also supported by numerical tests.
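The gradient-push iteration the abstract refers to can be sketched as follows. This is a minimal illustration of the standard push-sum gradient form (due to Nedic and Olshevsky), not the paper's own code: the local costs, the directed graph, and the stepsize below are all illustrative assumptions, chosen so that each local cost is strongly convex and $1$-smooth.

```python
import numpy as np

n, d = 4, 2                       # number of agents, problem dimension
rng = np.random.default_rng(0)

# Assumed local costs f_i(x) = 0.5 * ||x - b_i||^2 (strongly convex,
# 1-smooth), so the minimizer of f = sum_i f_i is the average of the b_i.
B = rng.normal(size=(n, d)) + 3.0
x_star = B.mean(axis=0)

# Column-stochastic mixing matrix for a directed ring: agent j sends half
# of its mass to itself and half to its successor.
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5
# Extra directed edge from agent 0 to agent 2 makes the graph unbalanced,
# so the push-sum weights y below do real work.
A[:, 0] = 0.0
A[0, 0] = A[1, 0] = A[2, 0] = 1.0 / 3.0

alpha = 0.02                      # constant stepsize (here L = 1)
x = np.zeros((n, d))              # local iterates
y = np.ones(n)                    # push-sum weights
z = x.copy()                      # de-biased estimates

for _ in range(5000):
    w = A @ x                     # mix the numerators along out-edges
    y = A @ y                     # mix the push-sum weights
    z = w / y[:, None]            # correct the directed-graph imbalance
    x = w - alpha * (z - B)       # gradient step: grad f_i(z_i) = z_i - b_i

# With a constant stepsize, each z_i settles in an O(alpha)-neighborhood
# of the global minimizer x_star rather than converging to it exactly.
```

The residual neighborhood is exactly why the hybrid scheme in the abstract is attractive: a large constant stepsize drives the iterates quickly into a neighborhood of the minimizer, after which a switch to Push-DIGing can resolve the remaining error.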
- Published
- 2024