
Efficient incremental construction of RBF networks using quasi-gradient method.

Authors :
Reiner, Philip
Wilamowski, Bogdan M.
Source :
Neurocomputing. Feb 2015, Part B, Vol. 150, p. 349-356. 8p.
Publication Year :
2015

Abstract

Artificial neural networks have been found to be very efficient universal approximators. Single-layer feedforward networks (SLFNs) are the most popular and are easy to train. The neurons in these networks can use either sigmoidal functions or radial basis functions (RBFs) as activation functions, and both have been shown to work efficiently. Sigmoidal networks are already well described in the literature, so this paper focuses on constructing an SLFN architecture from RBF neurons. Many algorithms exist for constructing or training networks to solve function approximation problems. This paper proposes a modification of the Incremental Extreme Learning Machine (I-ELM) family of algorithms that eliminates randomness in the learning process with respect to the center positions and widths of the RBF neurons. During error calculation, the input with the highest error magnitude is saved and then used as the center of the next incrementally added neuron; the radius of the new neuron is then chosen iteratively using the Nelder–Mead simplex method. This preserves the universal approximation properties of I-ELM while greatly reducing the size of the trained RBF networks. [ABSTRACT FROM AUTHOR]
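A minimal sketch of the error-driven incremental construction described in the abstract, not the authors' code: it assumes Gaussian RBF units, a one-dimensional least-squares update for each new output weight, and SciPy's Nelder-Mead routine for tuning the width. The function and parameter names (gaussian_rbf, train_incremental_rbf, max_neurons, tol) are illustrative.

import numpy as np
from scipy.optimize import minimize

def gaussian_rbf(X, center, width):
    """Gaussian RBF activation for every row of X."""
    d2 = np.sum((X - center) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_incremental_rbf(X, y, max_neurons=50, tol=1e-4):
    """Grow an RBF network one neuron at a time (sketch of the paper's idea).

    Each new neuron is centered on the training input with the largest
    current residual error; its width is then tuned with the Nelder-Mead
    simplex method so the neuron best explains that residual.
    """
    centers, widths, weights = [], [], []
    residual = y.astype(float).copy()
    for _ in range(max_neurons):
        # Deterministic center choice: the input with the highest |error|.
        idx = np.argmax(np.abs(residual))
        center = X[idx]

        # Residual sum of squares after fitting one neuron of this width;
        # the width is parameterized on a log scale to keep it positive.
        def neg_fit(log_w):
            phi = gaussian_rbf(X, center, np.exp(log_w[0]))
            beta = phi @ residual / (phi @ phi + 1e-12)  # 1-D least squares
            return np.sum((residual - beta * phi) ** 2)

        res = minimize(neg_fit, x0=[0.0], method="Nelder-Mead")
        width = np.exp(res.x[0])

        # Fix the new neuron's output weight and update the residual.
        phi = gaussian_rbf(X, center, width)
        beta = phi @ residual / (phi @ phi + 1e-12)
        centers.append(center)
        widths.append(width)
        weights.append(beta)
        residual -= beta * phi
        if np.sqrt(np.mean(residual ** 2)) < tol:
            break
    return np.array(centers), np.array(widths), np.array(weights)

Because both the center choice and the width search are deterministic given the data, each added neuron targets the worst-approximated region, which is why far fewer neurons are needed than with the random hidden nodes of standard I-ELM.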

Details

Language :
English
ISSN :
0925-2312
Volume :
150
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession Number :
99737139
Full Text :
https://doi.org/10.1016/j.neucom.2014.05.082