On the Influence of Ill-conditioned Regression Matrix on Hyper-parameter Estimators for Kernel-based Regularization Methods
- Authors
- Yue Ju, Tianshi Chen, Lennart Ljung, and Biqiang Mu
- Subjects
- Estimator, Scale factor, Matrix (mathematics), Rate of convergence, Kernel (statistics), Linear regression, Convergence (routing), Condition number, Applied mathematics, Mathematics
- Abstract
In this paper, we study the influence of an ill-conditioned regression matrix on two hyper-parameter estimation methods for the kernel-based regularization method: the empirical Bayes (EB) method and Stein's unbiased risk estimator (SURE). First, we consider the convergence rate of the cost functions of EB and SURE. We find that they have the same convergence rate, but that the influence of the ill-conditioned regression matrix on the scale factor differs: for the upper bounds, the scale factor for SURE contains one more factor of cond(ΦᵀΦ) than that of EB, where Φ is the regression matrix and cond(·) denotes the condition number of a matrix. This finding indicates that when Φ is ill-conditioned, i.e., cond(ΦᵀΦ) is large, the cost function of SURE converges more slowly than that of EB. Then we consider the convergence rate of the optimal hyper-parameters of EB and SURE. We find that they are both asymptotically normally distributed and have the same convergence rate, but that, again, the influence of the ill-conditioned regression matrix on the scale factor differs. In particular, for the ridge regression case, we show that the optimal hyper-parameter of SURE converges more slowly than that of EB by a factor of 1/n², as cond(ΦᵀΦ) goes to ∞, where n is the FIR model order.
- Published
- 2020
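
Since the record above carries only the abstract, the following is a minimal numerical sketch (not the paper's code) of the two cost functions it compares. It assumes the standard setting y = Φθ + v with Gaussian noise of known variance σ² and the ridge (identity) kernel P(η) = ηI, so that EB minimizes the negative log marginal likelihood of y and SURE minimizes an unbiased estimate of the prediction error. The function names, the input filtering used to make Φ ill-conditioned, and the grid search are all illustrative assumptions.

```python
import numpy as np

def eb_cost(eta, Phi, y, sigma2):
    """EB (negative log marginal likelihood) cost for the ridge kernel P = eta*I:
    with Q = eta*Phi@Phi.T + sigma2*I, the cost is y'Q^{-1}y + log det Q."""
    N = Phi.shape[0]
    Q = eta * Phi @ Phi.T + sigma2 * np.eye(N)
    _, logdet = np.linalg.slogdet(Q)
    return y @ np.linalg.solve(Q, y) + logdet

def sure_cost(eta, Phi, y, sigma2):
    """SURE cost for the same ridge kernel, up to a constant independent of eta:
    with hat matrix H = Phi (Phi'Phi + (sigma2/eta) I)^{-1} Phi',
    the cost is ||y - H y||^2 + 2*sigma2*tr(H)."""
    n = Phi.shape[1]
    A = Phi.T @ Phi + (sigma2 / eta) * np.eye(n)
    H = Phi @ np.linalg.solve(A, Phi.T)
    resid = y - H @ y
    return resid @ resid + 2.0 * sigma2 * np.trace(H)

# --- illustrative FIR simulation with an ill-conditioned regression matrix ---
rng = np.random.default_rng(0)
N, n = 200, 20                                  # data length, FIR model order
u = rng.standard_normal(N + n)
for k in range(1, len(u)):                      # low-pass filter the input so the
    u[k] += 0.95 * u[k - 1]                     # columns of Phi are nearly collinear
Phi = np.column_stack([u[n - 1 - i : n - 1 - i + N] for i in range(n)])
theta0 = np.exp(-0.3 * np.arange(n))            # true impulse response (assumed)
sigma2 = 0.1
y = Phi @ theta0 + np.sqrt(sigma2) * rng.standard_normal(N)

print("cond(Phi' Phi) =", np.linalg.cond(Phi.T @ Phi))
etas = np.logspace(-4, 2, 61)                   # crude grid search over eta
eta_eb = etas[np.argmin([eb_cost(e, Phi, y, sigma2) for e in etas])]
eta_sure = etas[np.argmin([sure_cost(e, Phi, y, sigma2) for e in etas])]
print("EB eta* =", eta_eb, "  SURE eta* =", eta_sure)
```

With this kind of ill-conditioned Φ, one way to probe the abstract's claim empirically is to repeat the simulation over many noise realizations (or sharpen the input filter to drive cond(ΦᵀΦ) up) and compare how strongly the two minimizers fluctuate around their limits.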