
Feature learning for stacked ELM via low-rank matrix factorization

Authors :
Tinghui Ouyang
Source :
Neurocomputing. 448:82-93
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

The extreme-learning-machine-based auto-encoder (ELM-AE) is regarded as a useful architecture with fast learning speed and general approximation ability, and stacked ELMs are used to build efficient and effective deep learning networks. However, because features learned by conventional ELM-AEs suffer from weak nonlinear representation ability and from random factors in the feature projection, this paper proposes an improved ELM-AE architecture that uses low-rank matrix factorization to learn optimal low-dimensional features. Two advantages over conventional ELM-AEs are obtained. First, the dimensionality of the hidden layer in the ELM-AE can be set arbitrarily; for example, a higher-dimensional hidden layer can reduce the effect of randomness in feature learning and strengthen the representation ability of the features. Second, the nonlinear ability of the features is enhanced, since they are learned directly from the nonlinear outputs of the hidden layer. Finally, comparison experiments on numerical and image datasets verify the superior performance of the proposed ELM-AE.
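To make the idea in the abstract concrete, the following is a minimal sketch of an ELM-AE-style feature extractor that applies a low-rank factorization (a truncated SVD is assumed here as the factorization step) to the nonlinear hidden-layer outputs. This is an illustrative interpretation, not the authors' implementation; the function name, hidden-layer size, and choice of sigmoid activation are assumptions.

# Illustrative sketch only: ELM-style random hidden layer followed by a
# low-rank factorization (truncated SVD) of the nonlinear hidden outputs.
import numpy as np

def elm_ae_lowrank_features(X, n_hidden=500, n_features=20, seed=0):
    """Map data X (n_samples x n_inputs) to n_features low-dimensional features.

    Hypothetical pipeline: random input weights -> sigmoid hidden layer ->
    truncated SVD of the hidden-layer output matrix H.
    """
    rng = np.random.default_rng(seed)
    n_inputs = X.shape[1]

    # Randomly generated input weights and biases, as in a standard ELM;
    # the hidden dimension n_hidden can be chosen freely (e.g., larger than
    # the input dimension) to reduce the effect of randomness.
    W = rng.standard_normal((n_inputs, n_hidden))
    b = rng.standard_normal(n_hidden)

    # Nonlinear hidden-layer outputs.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

    # Low-rank factorization of H: keep the top n_features singular directions
    # and use the corresponding projections as the learned features.
    U, s, _ = np.linalg.svd(H - H.mean(axis=0), full_matrices=False)
    return U[:, :n_features] * s[:n_features]

if __name__ == "__main__":
    X = np.random.rand(100, 30)            # toy data
    F = elm_ae_lowrank_features(X)
    print(F.shape)                          # (100, 20)

Because the factorization is applied to the nonlinear hidden outputs rather than to a linear reconstruction, the resulting features inherit the hidden layer's nonlinearity, which is the property the abstract emphasizes.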

Details

ISSN :
0925-2312
Volume :
448
Database :
OpenAIRE
Journal :
Neurocomputing
Accession number :
edsair.doi...........77219f48676f10df32bc4e79f88bcfeb