
Extreme Learning Machine With Affine Transformation Inputs in an Activation Function.

Authors :
Cao J
Zhang K
Yong H
Lai X
Chen B
Lin Z
Source :
IEEE Transactions on Neural Networks and Learning Systems [IEEE Trans Neural Netw Learn Syst] 2019 Jul; Vol. 30 (7), pp. 2093-2107. Date of Electronic Publication: 2018 Nov 13.
Publication Year :
2019

Abstract

The extreme learning machine (ELM) has attracted much attention over the past decade due to its fast learning speed and convincing generalization performance. However, a practical issue remains to be addressed when applying the ELM: because the hidden node parameters are randomly generated without tuning, the hidden node outputs can be nonuniformly distributed, giving rise to poor generalization performance. To address this deficiency, a novel activation function with an affine transformation (AT) on its input is introduced into the ELM, yielding an improved algorithm referred to in this paper as the AT-ELM. The scaling and translation parameters of the AT activation function are computed from the maximum entropy principle so that the hidden layer outputs approximately follow a uniform distribution. Applying the AT-ELM to nonlinear function regression shows that it is robust to range scaling of the network inputs. Experiments on nonlinear function regression, real-world data set classification, and benchmark image recognition demonstrate better performance for the AT-ELM than for the original ELM, the regularized ELM, and the kernel ELM. Recognition results on benchmark image data sets also show that the AT-ELM generally outperforms several other state-of-the-art algorithms.
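The abstract gives only the high-level recipe, so the following is a minimal sketch of the idea, not the authors' implementation. It assumes a sigmoid activation and substitutes a simple per-node min-max affine rescaling of the pre-activations for the paper's maximum-entropy computation of the scaling and translation parameters; all names (at_elm_train, affine_params, and so on) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def affine_params(Z, lo=-4.0, hi=4.0):
    """Hypothetical stand-in for the paper's maximum-entropy step:
    map each hidden node's pre-activations linearly into [lo, hi],
    so the sigmoid outputs spread roughly uniformly over (0, 1)."""
    zmin = Z.min(axis=0, keepdims=True)
    zmax = Z.max(axis=0, keepdims=True)
    scale = (hi - lo) / np.maximum(zmax - zmin, 1e-12)
    shift = lo - scale * zmin
    return scale, shift

def at_elm_train(X, T, n_hidden=100, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, untuned input weights
    b = rng.standard_normal(n_hidden)                # random, untuned biases
    Z = X @ W + b                                    # hidden node pre-activations
    scale, shift = affine_params(Z)                  # affine transform of the activation input
    H = sigmoid(scale * Z + shift)                   # hidden layer outputs
    # Output weights by regularized least squares, as in the regularized ELM.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, scale, shift, beta

def at_elm_predict(model, X):
    W, b, scale, shift, beta = model
    return sigmoid(scale * (X @ W + b) + shift) @ beta
```

A min-max map into a fixed interval only approximates uniformity of the hidden outputs; the paper instead derives the scaling and translation parameters from the maximum entropy principle, under which the uniform distribution on a bounded interval is the entropy maximizer.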

Details

Language :
English
ISSN :
2162-2388
Volume :
30
Issue :
7
Database :
MEDLINE
Journal :
IEEE Transactions on Neural Networks and Learning Systems
Publication Type :
Academic Journal
Accession number :
30442621
Full Text :
https://doi.org/10.1109/TNNLS.2018.2877468