1. Incremental gradient-free method for nonsmooth distributed optimization
- Authors
- Guoquan Li, Changzhi Wu, Kwang Hyo Jung, Xiangyu Wang, Jueyou Li, Jae-Myung Lee, and Zhiyou Wu
- Subjects
Mathematical optimization, Convex optimization, Subgradient method, Operations research, Control and Optimization, Applied Mathematics, Cyclic order, Convergence
- Abstract
In this paper we consider the minimization of a sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. We then propose incremental gradient-free methods that select the component functions in either cyclic or randomized order. We provide convergence and iteration-complexity analyses of the proposed methods under suitable stepsize rules. To illustrate the proposed methods, extensive numerical results on a distributed $l_1$-regression problem are presented. Compared with existing incremental subgradient-based methods, our methods require only evaluations of function values rather than subgradients, which may be preferable in practice.
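The abstract's core idea can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it minimizes the $l_1$-regression objective $f(x)=\sum_i |a_i^\top x - b_i|$ by picking one component per iteration (cyclic or random order) and replacing its subgradient with Nesterov-style Gaussian smoothing, a two-point estimate built from function values only. The function name, stepsize rule, and smoothing parameter `mu` are illustrative assumptions.

```python
import numpy as np

def incremental_gf_l1(A, b, iters=20000, mu=1e-4, order="random", seed=0):
    """Illustrative incremental gradient-free method (not the paper's exact scheme).

    Minimizes f(x) = sum_i |a_i @ x - b_i| using, at each step, only two
    function values of a single component f_i: a two-point Gaussian-smoothing
    estimate of its (sub)gradient.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(iters):
        # Component selection: cyclic order or uniformly random order.
        i = k % m if order == "cyclic" else rng.integers(m)

        def comp(z):  # value of the i-th component only; no subgradient used
            return abs(A[i] @ z - b[i])

        u = rng.standard_normal(n)                 # random smoothing direction
        g = (comp(x + mu * u) - comp(x)) / mu * u  # gradient-free estimate
        x -= g / np.sqrt(k + 1)                    # diminishing stepsize (assumed)
    return x

# Usage: noiseless l1-regression, so the true parameter is a minimizer.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x_hat = incremental_gf_l1(A, b)
```

The estimate `(f_i(x + mu*u) - f_i(x)) / mu * u` is an unbiased gradient of the Gaussian-smoothed surrogate of `f_i`, which is why only function evaluations are needed; the diminishing stepsize controls the variance this introduces.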
- Published
- 2017
- Full Text
- View/download PDF