
Error Bound and Isocost Imply Linear Convergence of DCA-Based Algorithms to D-Stationarity.

Authors :
Tao, Min
Li, Jiang-Ning
Source :
Journal of Optimization Theory & Applications; Apr 2023, Vol. 197, Issue 1, p205-232, 28p
Publication Year :
2023

Abstract

We consider a class of structured nonsmooth difference-of-convex minimization problems, in which the objective is the difference of two convex, possibly nonsmooth functions, with the second expressed as the maximum of finitely many smooth convex functions. We propose two extrapolated proximal difference-of-convex-based algorithms to accelerate convergence to a weak/standard d-stationary point of the structured nonsmooth problem, and prove linear convergence of these algorithms under the assumptions of a piecewise error bound and a piecewise isocost condition. As a by-product, we refine the linear convergence analysis of sDCA and ε-DCA in a recent work of Dong and Tao (J Optim Theory Appl 189: 190–220, 2021) by removing the assumption of local linear regularity of the intersection of certain stationary sets and dominance regions. We also discuss sufficient conditions that guarantee these assumptions and illustrate that several sparse learning models satisfy all of them. Finally, we conduct some elementary numerical simulations on sparse recovery to verify the theoretical results empirically. [ABSTRACT FROM AUTHOR]
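To make the abstract's setting concrete, the following is a minimal NumPy sketch of one common extrapolated proximal DCA iteration (in the style of pDCA-e) applied to a truncated-L1 sparse recovery model. All names, the DC decomposition, and the extrapolation schedule here are illustrative assumptions for exposition; this is not the paper's exact algorithm.

```python
import numpy as np

def prox_dca_extrapolated(A, b, lam=0.1, k=3, iters=300):
    """Illustrative sketch: extrapolated proximal DCA for the DC model
        min_x 0.5*||Ax - b||^2 + lam*||x||_1 - lam*||x||_(k),
    where ||x||_(k) is the sum of the k largest |x_i| (a standard DC
    decomposition used in sparse learning). Hypothetical implementation,
    not the authors' method.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part's gradient
    x = np.zeros(n)
    x_prev = x.copy()
    for t in range(iters):
        beta = t / (t + 3.0)             # extrapolation weight (FISTA-like schedule)
        y = x + beta * (x - x_prev)      # extrapolated point
        # subgradient of h(x) = lam*||x||_(k): sign pattern on the k largest |x_i|
        xi = np.zeros(n)
        idx = np.argsort(-np.abs(x))[:k]
        xi[idx] = lam * np.sign(x[idx])
        # gradient step on the smooth part, shifted by the linearized concave part
        grad = A.T @ (A @ y - b)
        z = y - (grad - xi) / L
        # proximal step for (lam/L)*||.||_1: soft-thresholding
        x_prev = x
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

The subgradient `xi` linearizes the concave term at the current iterate, so each step majorizes the DC objective; the extrapolation point `y` is the acceleration device whose convergence the paper analyzes under error-bound and isocost conditions.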

Details

Language :
English
ISSN :
00223239
Volume :
197
Issue :
1
Database :
Complementary Index
Journal :
Journal of Optimization Theory & Applications
Publication Type :
Academic Journal
Accession number :
162969939
Full Text :
https://doi.org/10.1007/s10957-023-02171-x