
Newton method for ℓ0-regularized optimization

Authors :
Naihua Xiu
Lili Pan
Shenglong Zhou
Source :
Numerical Algorithms. 88:1541-1570
Publication Year :
2021
Publisher :
Springer Science and Business Media LLC, 2021.

Abstract

As a tractable approach, regularization is frequently adopted in sparse optimization. This gives rise to regularized optimization, which minimizes the ℓ0 norm, or one of its continuous surrogates, to characterize sparsity. Among these models, ranging from continuous surrogates to the discrete ℓ0 norm itself, the ℓ0-regularized problem is the most challenging. There is an impressive body of work on numerical algorithms for overcoming this challenge. However, most of the developed methods guarantee only that either a (sub)sequence converges to a stationary point, from the deterministic optimization perspective, or that the distance between each iterate and any given sparse reference point is bounded by an error bound in a probabilistic sense. In this paper, we develop a Newton-type method for ℓ0-regularized optimization and prove that the generated sequence converges to a stationary point globally and quadratically under standard assumptions, theoretically explaining why the method performs so well in practice.
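To make the problem class concrete, the following is a minimal illustrative sketch, not the authors' algorithm: for the least-squares instance min_x ½‖Ax − b‖² + λ‖x‖₀, a common Newton-type pattern alternates a hard-thresholding step that selects a working support with a Newton (here, restricted normal-equations) step on that support. All names, the step size, and the thresholding rule below are assumptions for illustration.

```python
import numpy as np

def newton_l0_sketch(A, b, lam, max_iter=50, tol=1e-8):
    """Illustrative Newton-type iteration for
    min_x 0.5*||A x - b||^2 + lam*||x||_0  (not the paper's exact scheme)."""
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0  # gradient step used inside the thresholding operator (assumed)
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        u = x - step * grad               # forward (gradient) point
        # Hard thresholding: keep entries with u_i^2 > 2*lam*step,
        # the standard proximal rule for the l0 penalty.
        support = np.flatnonzero(u**2 > 2 * lam * step)
        if support.size == 0:
            return np.zeros(n)
        # "Newton" step: solve the restricted subproblem exactly on the support.
        As = A[:, support]
        x_new = np.zeros(n)
        x_new[support] = np.linalg.solve(As.T @ As, As.T @ b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

On a well-posed sparse recovery instance (A with full column rank, noiseless b), the support typically settles after a few iterations and the restricted solve then reproduces the underlying sparse signal; the paper's contribution is proving global and quadratic convergence for a genuine Newton iteration on the general ℓ0-regularized model, which this least-squares sketch does not capture.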

Details

ISSN :
1572-9265 and 1017-1398
Volume :
88
Database :
OpenAIRE
Journal :
Numerical Algorithms
Accession number :
edsair.doi.dedup.....0113ed43a7af29226fe9b96295fe84a8