Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning

Authors :
Chakrabarti, Kushal
Gupta, Nirupam
Chopra, Nikhil
Publication Year :
2020

Abstract

This paper considers the multi-agent distributed linear least-squares problem. The system comprises multiple agents, each with a locally observed set of data points, and a common server with which the agents can interact. The agents' goal is to compute a linear model that best fits the collective data points observed by all the agents. In this server-based distributed setting, the server cannot access the data points held by the agents. The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem. In the IPG algorithm, the server and the agents perform numerous iterative computations, each of which relies on the entire batch of data points observed by the agents to update the current estimate of the solution. Here, we extend the idea of iterative pre-conditioning to the stochastic setting, where the server updates the estimate and the iterative pre-conditioning matrix based on a single randomly selected data point at every iteration. We show that our proposed Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method converges linearly in expectation to a neighborhood of the solution. Importantly, we empirically show that the proposed IPSG method's convergence rate compares favorably to prominent stochastic algorithms for solving the linear least-squares problem in server-based networks.

Comment: Changes in the replacement: an application to the distributed state estimation problem has been added in Appendix B. Related articles: arXiv:2003.07180v2 [math.OC] and arXiv:2008.02856v1 [math.OC]
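
The abstract describes the IPSG mechanism only at a high level, so the following Python sketch is an illustrative reconstruction rather than the authors' exact algorithm. It shows the general idea stated above: at each iteration the server draws one random data point (a_i, b_i), uses it to nudge a pre-conditioning matrix K toward the inverse of a regularized Hessian, and takes a pre-conditioned stochastic gradient step. The step sizes alpha and delta, the regularizer beta, and the specific K-update rule below are assumptions made for illustration.

import numpy as np

def ipsg(A, b, alpha=1e-3, delta=1e-2, beta=1.0, iters=50_000, seed=0):
    """Illustrative IPSG sketch for min_x ||A x - b||^2.
    The K-update is an assumed variant that drives K toward the inverse
    of (E[a_i a_i^T] + beta I); the paper's exact rule may differ."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)   # current estimate of the solution
    K = np.eye(n)     # iterative pre-conditioning matrix
    for _ in range(iters):
        i = rng.integers(m)          # one randomly selected data point
        a_i, b_i = A[i], b[i]
        # Stochastic gradient of the least-squares cost at x.
        g = a_i * (a_i @ x - b_i)
        # Stochastic fixed-point step for the pre-conditioner, using the
        # rank-one sample a_i a_i^T in place of the full Hessian A^T A.
        K = K - alpha * ((np.outer(a_i, a_i) + beta * np.eye(n)) @ K - np.eye(n))
        # Pre-conditioned stochastic gradient-descent update of the estimate.
        x = x - delta * (K @ g)
    return x

# Usage on a small synthetic, noiseless problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
print(np.linalg.norm(ipsg(A, b) - x_true))

In this noiseless example the stochastic gradient vanishes at the solution, so the error shrinks geometrically, consistent with the linear convergence in expectation claimed in the abstract; with noisy observations the iterates would instead settle in a neighborhood of the solution.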

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2011.07595
Document Type :
Working Paper