
Sparseout: Controlling Sparsity in Deep Networks

Authors :
Khan, Najeeb
Stavness, Ian
Publication Year :
2019

Abstract

Dropout is commonly used to help reduce overfitting in deep neural networks. Sparsity is a potentially important property of neural networks, but it is not explicitly controlled by Dropout-based regularization. In this work, we propose Sparseout, a simple and efficient variant of Dropout that can be used to control the sparsity of the activations in a neural network. We theoretically prove that Sparseout is equivalent to an $L_q$ penalty on the features of a generalized linear model and that Dropout is a special case of Sparseout for neural networks. We empirically demonstrate that Sparseout is computationally inexpensive and is able to control the desired level of sparsity in the activations. We evaluated Sparseout on image classification and language modelling tasks to examine the effect of sparsity on these tasks. We found that sparsity of the activations is favorable for language modelling performance, while image classification benefits from denser activations. Sparseout provides a way to investigate sparsity in state-of-the-art deep learning models. Source code for Sparseout can be found at https://github.com/najeebkhan/sparseout.
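
The abstract does not reproduce the Sparseout operation itself; for intuition only, the minimal PyTorch sketch below shows an explicit $L_q$ activation penalty of the kind the abstract says Sparseout is theoretically equivalent to for a generalized linear model. The function name lq_activation_penalty, the weight parameter, and the hidden_activations helper mentioned in the usage comment are illustrative assumptions, not part of the paper or its repository.

import torch

def lq_activation_penalty(activations: torch.Tensor,
                          q: float = 1.0,
                          weight: float = 1e-4) -> torch.Tensor:
    # Illustrative sketch: weight * sum(|a|^q), the kind of penalty the
    # abstract states Sparseout emulates. This is NOT the stochastic
    # Sparseout operation defined in the paper and its repository.
    # Smaller q (e.g. q = 1, as in lasso-style penalties) promotes
    # sparser activations; q = 2 gives a ridge-like (L2) penalty,
    # consistent with Dropout being a special case of Sparseout.
    return weight * activations.abs().pow(q).sum()

# Hypothetical usage inside a training step (hidden_activations is an
# assumed helper exposing a layer's activation values):
#   feats = hidden_activations(model, x)
#   loss = task_loss + lq_activation_penalty(feats, q=1.0)

For the exact stochastic formulation used by Sparseout and its relationship to Dropout, see the paper (DOI below) and the linked source code.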

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1904.08050
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/978-3-030-18305-9_24