
Explicit guiding auto-encoders for learning meaningful representation.

Authors:
Sun, Yanan
Mao, Hua
Sang, Yongsheng
Yi, Zhang
Source:
Neural Computing & Applications. Mar 2017, Vol. 28 Issue 3, p429-436. 8p.
Publication Year:
2017

Abstract

The auto-encoder model plays a crucial role in the success of deep learning. During the pre-training phase, auto-encoders learn a representation that helps improve the performance of the entire neural network during the fine-tuning phase. However, the learned representation is not always meaningful, and the network does not necessarily achieve higher performance with such a representation, because auto-encoders are trained in an unsupervised manner without knowledge of the specific task targeted in the fine-tuning phase. In this paper, we propose a novel approach to training auto-encoders that adds an explicit guiding term to the traditional reconstruction cost function, encouraging the auto-encoder to learn meaningful features. Specifically, the guiding term is the classification error with respect to the representation learned by the auto-encoder; a representation is meaningful if a network taking it as input achieves a low classification error on the classification task. Our experiments show that the additional explicit guiding term helps the auto-encoder anticipate the prospective target in advance, driving learning toward a minimum with better generalization on the particular supervised task. Over a range of image classification benchmarks, we achieve results equal or superior to baseline auto-encoders with the same configuration. [ABSTRACT FROM AUTHOR]
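To make the objective in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it combines a reconstruction cost with a classification (guiding) term on the learned representation. PyTorch, the layer sizes, the sigmoid activations, the mean-squared-error reconstruction loss, and the weighting factor alpha are all assumptions introduced here for illustration.

# Minimal sketch of a guided auto-encoder objective: traditional
# reconstruction cost plus an explicit guiding (classification) term
# computed on the learned representation. All hyperparameters below are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedAutoEncoder(nn.Module):
    def __init__(self, in_dim=784, hid_dim=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())
        # A classifier head on the hidden code supplies the guiding signal.
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)                      # learned representation
        return self.decoder(h), self.classifier(h)

def guided_loss(model, x, y, alpha=0.5):
    # Total cost = reconstruction error + alpha * classification error.
    x_hat, logits = model(x)
    recon = F.mse_loss(x_hat, x)                 # traditional AE objective
    guide = F.cross_entropy(logits, y)           # explicit guiding term
    return recon + alpha * guide

# Illustrative pre-training step on one mini-batch (x: inputs, y: labels).
model = GuidedAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
opt.zero_grad()
loss = guided_loss(model, x, y)
loss.backward()
opt.step()

Under this reading, the guided pre-training yields encoder weights that would then initialize the network fine-tuned on the supervised task, following the usual pre-training/fine-tuning pipeline the abstract describes.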

Details

Language:
English
ISSN:
0941-0643
Volume:
28
Issue:
3
Database:
Academic Search Index
Journal:
Neural Computing & Applications
Publication Type:
Academic Journal
Accession number:
121237311
Full Text:
https://doi.org/10.1007/s00521-015-2082-x