
SRNN: Self-regularized neural network.

Authors :
Xu, Chunyan
Yang, Jian
Gao, Junbin
Lai, Hanjiang
Yan, Shuicheng
Source :
Neurocomputing. Jan 2018, Vol. 273, p260-270. 11p.
Publication Year :
2018

Abstract

In this work, we aim to boost the discriminative capability of a deep neural network by alleviating over-fitting. Previous works typically learn a neural network by optimizing one or more objective functions with existing regularization methods (such as dropout, weight decay, stochastic pooling, and data augmentation). We argue that these approaches struggle to further improve the classification performance of a neural network because they do not exploit the network's own learned knowledge. In this paper, we introduce a self-regularized strategy for learning a neural network, named the Self-Regularized Neural Network (SRNN). The intuition behind SRNN is that the sample-wise soft targets of a neural network have the potential to drag the network out of its local optimum. More specifically, an initial neural network is first pre-trained by optimizing one or more objective functions with ground-truth labels. We then gradually mine sample-wise soft targets, which reveal the correlation/similarity among classes as predicted by the network itself. The network parameters are further updated to fit these sample-wise soft targets. This self-regularization learning procedure minimizes an objective function that integrates the network's sample-wise soft targets with the ground-truth labels of the training samples. Three characteristics of SRNN can be summarized as: (1) gradually mining the knowledge learned by a single neural network, then correcting and enhancing it to produce the sample-wise soft targets; (2) regularly optimizing the network parameters with these sample-wise soft targets; (3) boosting the discriminative capability of the neural network through the self-regularization strategy.
Extensive experiments on four public datasets, i.e., CIFAR-10, CIFAR-100, Caltech101, and MIT, demonstrate the effectiveness of the proposed SRNN for image classification.
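To make the described procedure concrete, the sketch below illustrates the two core ingredients the abstract mentions: mining soft targets from the network's own predictions and a combined objective over soft targets and ground-truth labels. This is a minimal NumPy illustration, not the paper's implementation; the temperature `T` and the mixing weight `alpha` are hypothetical parameters introduced here for exposition.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mine_soft_targets(logits, T=2.0):
    # Sample-wise soft targets from the network's own logits; a higher
    # temperature T spreads probability mass across correlated classes.
    # (T is an assumed knob, not taken from the paper.)
    return softmax(logits / T)

def srnn_loss(logits, hard_labels, soft_targets, alpha=0.5):
    # Combined objective: cross-entropy against ground-truth labels plus
    # cross-entropy against the mined sample-wise soft targets.
    # alpha is a hypothetical weight balancing the two terms.
    p = softmax(logits)
    n = logits.shape[0]
    hard_ce = -np.log(p[np.arange(n), hard_labels] + 1e-12).mean()
    soft_ce = -(soft_targets * np.log(p + 1e-12)).sum(axis=1).mean()
    return (1 - alpha) * hard_ce + alpha * soft_ce
```

In a full training loop, one would pre-train with `alpha=0` (ground-truth labels only), periodically recompute soft targets from the current network's predictions, and then continue optimizing the combined loss.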

Details

Language :
English
ISSN :
0925-2312
Volume :
273
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
126009666
Full Text :
https://doi.org/10.1016/j.neucom.2017.07.051