1. Regularized diffusion adaptation via conjugate smoothing
- Author
- Ali H. Sayed, Stefan Vlaski, and Lieven Vandenberghe
- Subjects
- Signal Processing (eess.SP), Distributed, Parallel, and Cluster Computing (cs.DC), Multiagent Systems (cs.MA), Machine Learning (stat.ML), Optimization and Control (math.OC), multi-objective optimization, Pareto optimization, distributed optimization, regularized diffusion, proximal diffusion, diffusion strategy, proximal operator, adaptive networks, nonsmooth regularizer, smoothing methods, least-mean squares, linear matrix inequalities, consensus, convergence
- Abstract
The purpose of this article is to develop and study a decentralized strategy for Pareto optimization of an aggregate cost consisting of regularized risks. Each risk is modeled as the expectation of some loss function with unknown probability distribution, while the regularizers are assumed deterministic but are not required to be differentiable or even continuous. The individual regularized cost functions are distributed across a strongly connected network of agents, and the Pareto-optimal solution is sought by appealing to a multiagent diffusion strategy. To this end, the regularizers are smoothed by means of infimal convolution, and it is shown that the Pareto solution of the approximate smooth problem can be made arbitrarily close to the solution of the original nonsmooth problem. Performance bounds are established under conditions that are weaker than those previously assumed in the literature and, hence, applicable to a broader class of adaptation and learning problems.
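The two ingredients the abstract describes can be sketched briefly: smoothing a nonsmooth regularizer by infimal convolution with a quadratic (the Moreau envelope, whose gradient for the ℓ1 norm is a clipped sign function), and a diffusion strategy in which each agent takes a stochastic gradient step on its local smoothed cost and then averages with its neighbors. The following is a minimal illustration on a synthetic least-mean-squares problem; the 3-agent combination matrix `A`, the step size, the smoothing parameter `mu`, and the data model are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_smoothed_l1(w, lam, mu):
    # Gradient of the Moreau envelope (infimal convolution with a quadratic)
    # of lam*||w||_1: equals (w - soft_threshold(w, lam*mu)) / mu, i.e. a
    # clipped version of the subgradient lam*sign(w).
    return np.clip(w / mu, -lam, lam)

# Hypothetical 3-agent network with a doubly stochastic combination matrix.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

N, M = 3, 4                                # agents, parameter dimension
w_true = np.array([1.0, -2.0, 0.0, 0.5])   # common model to be estimated
w = np.zeros((N, M))                       # each agent's current estimate
lam, mu, step = 0.1, 0.1, 0.02

for _ in range(2000):
    # Adapt: stochastic gradient step on the local smoothed, regularized risk.
    psi = np.empty_like(w)
    for k in range(N):
        h = rng.standard_normal(M)                      # regression vector
        d = h @ w_true + 0.05 * rng.standard_normal()   # noisy measurement
        grad = -(d - h @ w[k]) * h + grad_smoothed_l1(w[k], lam, mu)
        psi[k] = w[k] - step * grad
    # Combine: each agent averages the intermediate iterates of its neighbors.
    w = A @ psi
```

As `mu` shrinks, the smoothed regularizer approaches the nonsmooth ℓ1 penalty, which mirrors the abstract's claim that the smoothed Pareto solution can be driven arbitrarily close to the original one.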
- Published
- 2021