Convergence Rates of Distributed Gradient Methods Under Random Quantization: A Stochastic Approximation Approach.

Authors :
Doan, Thinh T.
Maguluri, Siva Theja
Romberg, Justin
Source :
IEEE Transactions on Automatic Control. Oct2021, Vol. 66 Issue 10, p4469-4484. 16p.
Publication Year :
2021

Abstract

We study gradient methods for solving distributed convex optimization problems over a network when the communication bandwidth between the nodes is limited, so information exchanged between the nodes must be quantized. This imperfect communication poses a fundamental challenge and, if not properly accounted for, may prevent these algorithms from converging. Our first contribution in this article is a modified consensus-based gradient method for solving such problems using random (dithered) quantization. This algorithm can be interpreted as a distributed variant of a two-time-scale stochastic approximation algorithm. We then study its convergence and derive upper bounds on the rates of convergence as a function of the bandwidth available between the nodes and the underlying network topology, for both convex and strongly convex objective functions. Our results complement the existing literature, where such convergence guarantees and explicit formulas for the convergence rates are missing. Finally, we provide numerical simulations comparing the convergence properties of distributed gradient methods with and without quantization for solving well-known regression problems over networks, for both quadratic and absolute loss functions. [ABSTRACT FROM AUTHOR]
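The abstract's key communication model, random (dithered) quantization, can be illustrated with a minimal sketch. The standard construction (not the paper's specific algorithm) adds uniform noise of one quantization step's width before rounding, which makes the quantizer unbiased in expectation; the function name and grid parameters below are illustrative assumptions:

```python
import numpy as np

def dithered_quantize(x, delta, rng):
    """Quantize x to the grid {k * delta} with a random dither.

    Uniform dither in [-delta/2, delta/2) makes the quantizer
    unbiased: E[Q(x)] = x, with bounded variance. This is the
    property that lets consensus-based gradient methods tolerate
    coarse, bandwidth-limited communication.
    """
    dither = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((x + dither) / delta)

rng = np.random.default_rng(0)
x, delta = 0.3, 1.0

# Each output lands on the coarse grid {..., -1.0, 0.0, 1.0, ...},
# but averaging many independent quantizations recovers x.
samples = dithered_quantize(np.full(100_000, x), delta, rng)
print(samples.mean())  # close to 0.3, even though delta = 1.0
```

For x = 0.3 and delta = 1.0, each quantized value is 0.0 with probability 0.7 and 1.0 with probability 0.3, so the empirical mean concentrates around 0.3; this unbiasedness is what a stochastic approximation analysis of the quantized iterates relies on.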

Details

Language :
English
ISSN :
0018-9286
Volume :
66
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Automatic Control
Publication Type :
Periodical
Accession number :
153732252
Full Text :
https://doi.org/10.1109/TAC.2020.3031018