
DLM: Decentralized Linearized Alternating Direction Method of Multipliers.

Authors :
Ling, Qing
Shi, Wei
Wu, Gang
Ribeiro, Alejandro
Source :
IEEE Transactions on Signal Processing. Aug 2015, Vol. 63, Issue 15, p4051-4064. 14p.
Publication Year :
2015

Abstract

This paper develops the Decentralized Linearized Alternating Direction Method of Multipliers (DLM), which minimizes a sum of local cost functions in a multiagent network. The algorithm mimics the operation of the decentralized alternating direction method of multipliers (DADMM), except that it linearizes the optimization objective at each iteration. As a result, each iteration performs a step whose cost is akin to the much lower cost of the gradient descent step used in the distributed gradient method (DGM), rather than a full local minimization. The algorithm is proven to converge to the optimal solution when the local cost functions have Lipschitz continuous gradients, and its rate of convergence is shown to be linear when the local cost functions are further assumed to be strongly convex. Numerical experiments on least squares and logistic regression problems show that the number of iterations needed to achieve an equivalent optimality gap is similar for DLM and ADMM, and both are much smaller than that of DGM. In that sense, DLM combines the rapid convergence of ADMM with the low computational burden of DGM. [ABSTRACT FROM PUBLISHER]
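To illustrate the linearization idea described in the abstract, the equations below follow the standard linearized-ADMM template rather than the paper's exact recursion; the symbols (local iterate x_i^k, dual variable \lambda_i^k, penalty c, proximal coefficient \rho, neighborhood \mathcal{N}_i) are notation assumed here for the sketch, not copied from the record. In a DADMM-style update, agent i solves a local subproblem of roughly the form

x_i^{k+1} = \arg\min_{x} \; f_i(x) + (\lambda_i^k)^\top x + \frac{c}{2} \sum_{j \in \mathcal{N}_i} \| x - x_j^k \|^2 ,

which requires a full minimization per iteration. A DLM-style step instead replaces f_i(x) by its linearization around the current iterate plus a proximal term,

f_i(x) \;\approx\; f_i(x_i^k) + \nabla f_i(x_i^k)^\top (x - x_i^k) + \frac{\rho}{2} \| x - x_i^k \|^2 ,

so the subproblem becomes quadratic and admits a closed-form solution using a single gradient evaluation, giving each iteration a per-agent cost comparable to a DGM gradient step.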

Details

Language :
English
ISSN :
1053-587X
Volume :
63
Issue :
15
Database :
Academic Search Index
Journal :
IEEE Transactions on Signal Processing
Publication Type :
Academic Journal
Accession number :
103304398
Full Text :
https://doi.org/10.1109/TSP.2015.2436358