
Early stopping in Boosting

Authors :
Chang, Yuan-Chin Ivan
Huang, Yufen
Huang, Yu-Pai
Source :
Computational Statistics & Data Analysis. Oct2010, Vol. 54 Issue 10, p2203-2213. 11p.
Publication Year :
2010

Abstract

It is well known that boosting-like algorithms, such as AdaBoost and many of its modifications, may over-fit the training data when the number of boosting iterations becomes large. How to stop a boosting algorithm at an appropriate iteration has therefore been a longstanding problem over the past decade. Earlier studies applied model selection criteria to estimate the stopping iteration for Boosting, but those approaches still require computing all boosting iterations under consideration for the training data. Thus, the main purpose of this paper is to study an early stopping rule for Boosting during the training stage, seeking a very substantial computational saving. The proposed method is based on change point detection applied to the values of model selection criteria during the training stage. The method is also extended to two-class classification problems, which are very common in medical and bioinformatics applications. A simulation study and a real data example are provided to illustrate these approaches, and comparisons are made with LogitBoost. [Copyright © Elsevier]
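To make the idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of monitoring a model selection criterion during boosting and stopping early. It uses componentwise-linear L2 boosting on synthetic data and an AIC-style criterion with a crude degrees-of-freedom proxy; instead of the paper's change point detection, it uses a simpler stand-in rule that stops once the criterion has failed to improve for a fixed number of consecutive iterations. All function names, the penalty form, and the stopping rule here are illustrative assumptions.

```python
import numpy as np

def l2_boost_early_stop(X, y, nu=0.1, max_iter=500, patience=10):
    """Componentwise-linear L2 boosting with a criterion-based early stop.

    At each iteration, an AIC-style criterion is evaluated; training stops
    once the criterion has not improved for `patience` consecutive
    iterations (a simplified stand-in for the paper's change point rule).
    Returns the coefficient vector and the iteration with the best criterion.
    """
    n, p = X.shape
    resid = y.astype(float).copy()
    coef = np.zeros(p)
    best_crit, best_iter, stall = np.inf, 0, 0
    norms = (X ** 2).sum(axis=0)  # per-covariate squared norms
    for m in range(1, max_iter + 1):
        # Pick the single covariate that best fits the current residuals.
        scores = X.T @ resid
        j = int(np.argmax(scores ** 2 / norms))
        beta = scores[j] / norms[j]
        # Shrunken update of the selected coefficient and the residuals.
        coef[j] += nu * beta
        resid -= nu * beta * X[:, j]
        # AIC-style criterion; df is crudely proxied by m * nu (assumption).
        rss = float(resid @ resid)
        crit = n * np.log(rss / n) + 2.0 * m * nu
        if crit < best_crit - 1e-8:
            best_crit, best_iter, stall = crit, m, 0
        else:
            stall += 1
            if stall >= patience:
                break  # criterion has stabilized: stop training early
    return coef, best_iter
```

Because the criterion is checked during training, boosting halts as soon as it stabilizes, rather than running all candidate iterations and selecting the minimizer afterwards — the computational saving the abstract refers to.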

Details

Language :
English
ISSN :
01679473
Volume :
54
Issue :
10
Database :
Academic Search Index
Journal :
Computational Statistics & Data Analysis
Publication Type :
Periodical
Accession number :
51130916
Full Text :
https://doi.org/10.1016/j.csda.2010.03.024