Extreme Learning Machine With Affine Transformation Inputs in an Activation Function.
- Source :
- IEEE Transactions on Neural Networks & Learning Systems; Jul2019, Vol. 30 Issue 7, p2093-2107, 15p
- Publication Year :
- 2019
-
Abstract
- The extreme learning machine (ELM) has attracted much attention over the past decade due to its fast learning speed and convincing generalization performance. However, a practical issue remains to be addressed when applying the ELM: the randomly generated hidden node parameters, which are not tuned, can lead to nonuniformly distributed hidden node outputs and thus to poor generalization performance. To address this deficiency, a novel activation function with an affine transformation (AT) on its input is introduced into the ELM, leading to an improved ELM algorithm referred to as the AT-ELM in this paper. The scaling and translation parameters of the AT activation function are computed based on the maximum entropy principle so that the hidden layer outputs approximately obey a uniform distribution. Application of the AT-ELM algorithm to nonlinear function regression shows its robustness to range scaling of the network inputs. Experiments on nonlinear function regression, real-world data set classification, and benchmark image recognition demonstrate better performance for the AT-ELM compared with the original ELM, the regularized ELM, and the kernel ELM. Recognition results on benchmark image data sets also reveal that the AT-ELM generally outperforms several other state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
Details
- Language :
- English
- ISSN :
- 2162-237X
- Volume :
- 30
- Issue :
- 7
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Neural Networks & Learning Systems
- Publication Type :
- Periodical
- Accession number :
- 137117533
- Full Text :
- https://doi.org/10.1109/TNNLS.2018.2877468