
High-dimensional linear regression via implicit regularization.

Authors:
Zhao, Peng
Yang, Yun
He, Qiao-Chu
Source:
Biometrika, Dec. 2022, Vol. 109, Issue 4, p1033-1046. 14p.
Publication Year:
2022

Abstract

Many statistical estimators for high-dimensional linear regression are $M$-estimators, formed by minimizing a data-dependent squared loss plus a regularizer. This work considers a new class of estimators implicitly defined through a discretized gradient dynamical system under overparameterization. We show that, under suitable restricted isometry conditions, overparameterization leads to implicit regularization: if we directly apply gradient descent to the residual sum of squares with sufficiently small initial values then, under a proper early stopping rule, the iterates converge to a nearly sparse, rate-optimal solution that improves over explicitly regularized approaches. In particular, the resulting estimator does not suffer from the extra bias introduced by explicit penalties, and can achieve the parametric root-$n$ rate when the signal-to-noise ratio is sufficiently high. We also perform simulations comparing our method with explicitly regularized high-dimensional linear regression. Our results illustrate the advantages of implicit regularization via gradient descent after overparameterization for sparse vector estimation.
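The abstract describes the method only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea in Python/NumPy: gradient descent on the residual sum of squares under a Hadamard-type overparameterization beta = u*u - v*v, started from small initial values, with a hold-out early-stopping rule. The specific parameterization, step size, initialization scale, and stopping rule here are illustrative assumptions, not necessarily the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse regression problem: n samples, p >> n features,
# s-sparse true coefficient vector (all choices illustrative).
n, p, s = 100, 500, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[:s] = 5.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Overparameterize beta as u*u - v*v and run gradient descent on the
# residual sum of squares from a small initialization alpha.
alpha, eta, max_iter = 1e-3, 0.02, 5000
u = np.full(p, alpha)
v = np.full(p, alpha)

# Crude early stopping on a hold-out set (an assumed stand-in for the
# paper's early stopping rule).
val_idx = rng.choice(n, n // 5, replace=False)
train = np.setdiff1d(np.arange(n), val_idx)
best_val, best_beta = np.inf, None

for t in range(max_iter):
    beta = u * u - v * v
    r = X[train] @ beta - y[train]   # training residuals
    g = X[train].T @ r               # gradient of 0.5*RSS w.r.t. beta
    u -= eta * 2.0 * g * u           # chain rule through u*u
    v += eta * 2.0 * g * v           # chain rule through -v*v

    val_loss = np.mean((X[val_idx] @ beta - y[val_idx]) ** 2)
    if val_loss < best_val:
        best_val, best_beta = val_loss, beta.copy()

print("estimation error:", np.linalg.norm(best_beta - beta_true))

Because coordinates with small u, v move multiplicatively slowly, the noise coordinates stay near zero while the signal coordinates grow quickly, which is one informal way to see how small initialization plus early stopping mimics a sparsity penalty without its bias.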

Details

Language:
English
ISSN:
0006-3444
Volume:
109
Issue:
4
Database:
Academic Search Index
Journal:
Biometrika
Publication Type:
Academic Journal
Accession Number:
160485594
Full Text:
https://doi.org/10.1093/biomet/asac010