
LocalNewton: Reducing Communication Bottleneck for Distributed Learning

Authors:
Gupta, Vipul
Ghosh, Avishek
Derezinski, Michal
Khanna, Rajiv
Ramchandran, Kannan
Mahoney, Michael
Publication Year:
2021

Abstract

To address the communication bottleneck problem in distributed optimization within a master-worker framework, we propose LocalNewton, a distributed second-order algorithm with local averaging. In LocalNewton, the worker machines update their model in every iteration by finding a suitable second-order descent direction using only the data and model stored in their own local memory. The workers run multiple such iterations locally and communicate their models to the master node only once every few (say L) iterations. LocalNewton is highly practical since it requires only one hyperparameter, the number L of local iterations. We use novel matrix concentration-based techniques to obtain theoretical guarantees for LocalNewton, and we validate them with detailed empirical evaluation. To enhance practicality, we devise an adaptive scheme for choosing L, and we show that it reduces the number of local iterations between two model synchronizations as training proceeds, successively refining the model quality at the master. Via extensive experiments using several real-world datasets with AWS Lambda workers and an AWS EC2 master, we show that LocalNewton requires fewer than 60% of the communication rounds (between master and workers) and less than 40% of the end-to-end running time, compared to state-of-the-art algorithms, to reach the same training loss.

Comment: To be published in Uncertainty in Artificial Intelligence (UAI) 2021
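The abstract's core loop (each worker takes L local second-order steps from the current master model, then the master averages) can be sketched in a few lines. The sketch below is a single-machine simulation only, assuming a regularized least-squares objective per worker; the function names (`local_newton_step`, `run_localnewton`), the regularization, and the fixed L are illustrative assumptions, not the paper's actual algorithm or its adaptive scheme for L.

```python
import numpy as np

def local_newton_step(w, X, y, reg=1e-3):
    """One Newton step on a worker's local regularized least-squares loss.

    Illustrative stand-in for the paper's second-order descent direction.
    """
    n = X.shape[0]
    grad = X.T @ (X @ w - y) / n + reg * w
    hess = X.T @ X / n + reg * np.eye(X.shape[1])
    return w - np.linalg.solve(hess, grad)  # Newton direction: H^{-1} g

def run_localnewton(shards, d, L=4, rounds=5):
    """Simulate master-worker rounds: L local iterations, then one averaging."""
    w = np.zeros(d)                      # model held at the master
    for _ in range(rounds):
        local_models = []
        for X, y in shards:              # each worker starts from the master model
            wk = w.copy()
            for _ in range(L):           # L local iterations, no communication
                wk = local_newton_step(wk, X, y)
            local_models.append(wk)
        w = np.mean(local_models, axis=0)  # master averages once per L iterations
    return w
```

Communication cost in this sketch is one model exchange per `rounds` outer loop rather than per gradient step, which is the saving the abstract quantifies; the paper's adaptive scheme would additionally shrink L as training proceeds.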

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2105.07320
Document Type:
Working Paper