
Why Deep Learning Works: A Manifold Disentanglement Perspective.

Authors:
Brahma, Pratik Prabhanjan
Wu, Dapeng
She, Yiyuan
Source:
IEEE Transactions on Neural Networks & Learning Systems. Oct 2016, Vol. 27 Issue 10, p1997-2008. 12p.
Publication Year:
2016

Abstract

Deep hierarchical representations of data have been found to provide more informative features for several machine learning applications. In addition, multilayer neural networks tend, somewhat surprisingly, to achieve better performance when they are subjected to unsupervised pretraining. The rapid rise of deep learning has motivated researchers to identify the factors that contribute to its success. One possible explanation is the flattening of manifold-shaped data in the higher layers of neural networks. However, it is not clear how to measure the flattening of such manifold-shaped data, nor how much flattening a deep neural network can achieve. This paper provides, for the first time, quantitative evidence to validate the flattening hypothesis. To this end, we propose several quantities for measuring manifold entanglement under certain assumptions and conduct experiments with both synthetic and real-world data. Our experimental results validate the proposition and lead to new insights into deep learning. [ABSTRACT FROM AUTHOR]
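The abstract does not spell out which entanglement measures the paper proposes. As a purely illustrative sketch of what "flattening" a manifold means, and not the authors' actual quantities, the snippet below uses a hypothetical PCA-residual proxy: the fraction of variance that a low-dimensional linear subspace fails to capture, which shrinks toward zero as a curved manifold is unrolled.

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA

def pca_residual(points, intrinsic_dim=2):
    # Fraction of variance NOT explained by the best linear subspace of
    # dimension intrinsic_dim: near 0 for flat (linear) data, larger for
    # curved/entangled data. A stand-in measure, not the paper's own.
    pca = PCA(n_components=intrinsic_dim).fit(points)
    return 1.0 - float(pca.explained_variance_ratio_.sum())

# Curved 2-D manifold embedded in 3-D (the classic Swiss roll).
X, t = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# Hand-made "flattened" version, unrolled via the known roll angle t and the
# height coordinate; it stands in for what higher layers of a deep network
# are hypothesized to do to the representation.
X_flat = np.column_stack([t, X[:, 1]])

print("residual (curved):   ", round(pca_residual(X), 3))       # noticeably > 0
print("residual (flattened):", round(pca_residual(X_flat), 3))  # close to 0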

Details

Language:
English
ISSN:
2162-237X
Volume:
27
Issue:
10
Database:
Academic Search Index
Journal:
IEEE Transactions on Neural Networks & Learning Systems
Publication Type:
Periodical
Accession Number:
118249114
Full Text:
https://doi.org/10.1109/TNNLS.2015.2496947