
Efficient Leave-One-Out Cross-Validation-based Regularized Extreme Learning Machine.

Authors :
Shao, Zhifei
Er, Meng Joo
Source :
Neurocomputing. Jun2016, Vol. 194, p260-270. 11p.
Publication Year :
2016

Abstract

It is well known that Leave-One-Out Cross-Validation (LOO-CV) is a highly reliable procedure for model selection. Unfortunately, it is an extremely tedious method and has rarely been deployed in practical applications. In this paper, a highly efficient LOO-CV formula is developed and integrated with the popular Regularized Extreme Learning Machine (RELM). The main contribution of this paper is the proposed algorithm, termed Efficient LOO-CV-based RELM (ELOO-RELM), which can effectively and efficiently update the LOO-CV error for every regularization parameter and automatically select the optimal model with limited user intervention. Rigorous analysis of computational complexity shows that the ELOO-RELM, including the tuning process, achieves efficiency similar to that of the original RELM with a pre-defined parameter, with both scaling linearly in the size of the training data. An early termination criterion is also introduced to further speed up the learning process. Experimental studies on benchmark datasets show that the ELOO-RELM achieves generalization performance comparable to Support Vector Machines (SVMs) with significantly higher learning efficiency. More importantly, compared to the trial-and-error tuning procedure employed by the original RELM, the ELOO-RELM provides more reliable results by virtue of incorporating the LOO-CV procedure. [ABSTRACT FROM AUTHOR]
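The abstract does not reproduce the paper's formula, but the classical shortcut it builds on is the PRESS statistic: for a ridge-regularized linear solve (such as the output layer of an RELM), the exact LOO-CV residuals can be obtained from a single fit via the diagonal of the hat matrix, rather than by refitting the model once per sample. The sketch below illustrates this standard shortcut on a plain ridge regression; the function and variable names (`loo_cv_mse`, `select_lambda`, the candidate lambda grid) are illustrative assumptions, not taken from the paper, and for an ELM the matrix `X` would be the hidden-layer output matrix.

```python
import numpy as np

def loo_cv_mse(X, y, lam):
    """Exact LOO-CV mean squared error of ridge regression from one fit.

    Uses the PRESS identity: the i-th leave-one-out residual equals
    (y_i - yhat_i) / (1 - H_ii), where H = X (X^T X + lam I)^{-1} X^T
    is the hat matrix of the regularized least-squares solution.
    """
    n, d = X.shape
    A = X.T @ X + lam * np.eye(d)          # regularized normal matrix
    H = X @ np.linalg.solve(A, X.T)        # hat (smoother) matrix
    residuals = y - H @ y                  # ordinary training residuals
    loo_residuals = residuals / (1.0 - np.diag(H))
    return float(np.mean(loo_residuals ** 2))

def select_lambda(X, y, lambdas):
    """Pick the regularization parameter minimizing the LOO-CV error."""
    errors = [loo_cv_mse(X, y, lam) for lam in lambdas]
    return lambdas[int(np.argmin(errors))], errors

# Synthetic demonstration data (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)
lambdas = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
best_lam, errs = select_lambda(X, y, lambdas)
```

Because every candidate lambda is scored with one linear solve instead of n refits, a sweep over a parameter grid stays cheap, which is the kind of efficiency the paper's tuning procedure targets.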

Details

Language :
English
ISSN :
0925-2312
Volume :
194
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
114874390
Full Text :
https://doi.org/10.1016/j.neucom.2016.02.058