
Convergence and objective functions of noise-injected multilayer perceptrons with hidden multipliers.

Authors:
Wang, Xiangyu
Wang, Jian
Zhang, Kai
Lin, Feng
Chang, Qin
Source:
Neurocomputing. Sep 2021, Vol. 452, p796-812. 17p.
Publication Year:
2021

Abstract

Artificial neural networks (ANNs) are known to be sensitive to the initial setting of parameters and to the network architecture, such as the number of hidden nodes in a multilayer perceptron (MLP). In this paper, we focus on a network structure that can help find the proper number of hidden nodes in an MLP. In this structure, the so-called Multilayer Perceptron with Hidden Multipliers (MLPHM), each hidden node is associated with a tunable "gate" multiplier. With a specific regularization term, each gate tends to open or close completely by the end of training, finally yielding a pruned network. To study the fault tolerance and to improve the generalization of MLPHM, a noise-injected training scheme is proposed, with both multiplicative and additive noise taken into consideration. The objective functions and convergence theorems of the noise-injected training algorithms are derived, and the latter are verified by simulations. Applications to several UCI datasets demonstrate that the proposed algorithms have efficient pruning ability and superior generalization ability. [ABSTRACT FROM AUTHOR]
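The gate-multiplier mechanism described in the abstract can be sketched in code. The following is a minimal illustration only, not the authors' formulation: the specific gate regularizer g^2(1-g)^2, the placement of the injected noise, the noise magnitudes, and all names below are assumptions made for clarity.

```python
# Minimal sketch of an MLP with hidden "gate" multipliers and noise
# injection during training. Assumptions (not taken from the paper):
# the gate penalty g^2 (1 - g)^2, the noise placement and magnitudes,
# and every identifier here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, g, W2, b2, train=False, mult_sigma=0.1, add_sigma=0.1):
    """Forward pass; each hidden node's output is scaled by its gate g_j.

    During training, multiplicative and additive noise are injected into
    the gated hidden outputs (one plausible reading of "noise-injected
    training"; the paper may place the noise elsewhere).
    """
    h = np.tanh(x @ W1 + b1)                                      # hidden activations
    z = g * h                                                     # apply hidden multipliers (gates)
    if train:
        z = z * (1 + mult_sigma * rng.standard_normal(z.shape))  # multiplicative noise
        z = z + add_sigma * rng.standard_normal(z.shape)         # additive noise
    return z @ W2 + b2

def gate_regularizer(g, lam=1e-2):
    """Illustrative penalty that vanishes only at g = 0 or g = 1,
    so each gate tends to fully open or fully close during training."""
    return lam * np.sum(g**2 * (1 - g)**2)

# Tiny usage example: 4 inputs, 8 hidden nodes, 1 output.
n_in, n_hid, n_out = 4, 8, 1
W1 = rng.standard_normal((n_in, n_hid)) * 0.5
b1 = np.zeros(n_hid)
g  = np.full(n_hid, 0.5)                 # gates start half-open
W2 = rng.standard_normal((n_hid, n_out)) * 0.5
b2 = np.zeros(n_out)

x = rng.standard_normal((3, n_in))       # batch of 3 samples
y_noisy = forward(x, W1, b1, g, W2, b2, train=True)
print("output:", y_noisy.ravel())
print("gate penalty:", gate_regularizer(g))
```

After training, hidden nodes whose gates have closed (g_j near 0) can be removed, which is how a pruned network of the appropriate size is obtained.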

Details

Language:
English
ISSN:
0925-2312
Volume:
452
Database:
Academic Search Index
Journal:
Neurocomputing
Publication Type:
Academic Journal
Accession Number:
150770459
Full Text:
https://doi.org/10.1016/j.neucom.2020.03.119