
Adaptive sparse dropout: Learning the certainty and uncertainty in deep neural networks.

Authors :
Chen, Yuanyuan
Yi, Zhang
Source :
Neurocomputing. Aug 2021, Vol. 450, p354-361. 8p.
Publication Year :
2021

Abstract

Dropout is an important training method for deep neural networks because it helps avoid over-fitting. Traditional dropout and many of its extensions omit some of the neurons' activation values according to probabilities. These methods compute each neuron's activation probability from a hand-designed formula, without providing a plausible justification for that formula. This paper proposes an adaptive sparse dropout (AS-Dropout) method for neural network training. The algorithm maps the activation values of the neurons in a layer into the approximately linear range of a sigmoid function, determines the ratio of active neurons through a probability calculation, and drops most of the neurons according to these probabilities. Because the probabilities are computed deterministically from the neurons' activation values, while the active neurons are sampled stochastically from those probabilities, AS-Dropout learns both the certainty and the uncertainty in deep neural networks. Moreover, since only a small number of neurons remain active, AS-Dropout increases the sparsity of the network. We applied AS-Dropout to different neural network structures. When evaluated on the MNIST, COIL-100, and Caltech-101 datasets, the experimental results demonstrate that, overall, AS-Dropout substantially outperforms traditional dropout and several improved dropout methods. [ABSTRACT FROM AUTHOR]
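The abstract outlines the mechanism only at a high level; the paper's exact formulas are not reproduced here. The sketch below is a hypothetical reading of the described steps, in plain Python with NumPy: standardization is assumed as the mapping into the sigmoid's near-linear range, and the rescaling that enforces the target fraction of active neurons (active_ratio) is likewise an assumption, not the authors' formula.

    import numpy as np

    def as_dropout_sketch(activations, active_ratio=0.1, rng=None):
        # Hypothetical AS-Dropout-style layer; the exact calculation in the
        # paper may differ from every step below.
        rng = np.random.default_rng() if rng is None else rng
        a = np.asarray(activations, dtype=float)

        # 1. Map activations toward the sigmoid's near-linear region
        #    (assumption: standardization serves as this mapping).
        z = (a - a.mean()) / (a.std() + 1e-8)

        # 2. Per-neuron keep-probabilities from the activation values:
        #    higher activation -> more likely to stay active (the
        #    "certainty" learned from the data).
        p = 1.0 / (1.0 + np.exp(-z))

        # 3. Rescale so that on average only active_ratio of the neurons
        #    survive, then sample a sparse mask (the "uncertainty").
        keep_prob = np.clip(p * (active_ratio * a.size / p.sum()), 0.0, 1.0)
        mask = rng.random(a.shape) < keep_prob
        return a * mask

    # Example: a layer of 1000 activations, roughly 100 kept active.
    out = as_dropout_sketch(np.random.randn(1000), active_ratio=0.1)

The key property this sketch preserves from the abstract is that the mask is both data-dependent (probabilities follow the activations) and stochastic (neurons are sampled, not thresholded), which is what distinguishes AS-Dropout from a fixed-rate dropout mask.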

Details

Language :
English
ISSN :
0925-2312
Volume :
450
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
150696801
Full Text :
https://doi.org/10.1016/j.neucom.2021.04.047