
Inference, Learning and Attention Mechanisms that Exploit and Preserve Sparsity in Convolutional Networks

Authors :
Hackel, Timo
Usvyatsov, Mikhail
Galliani, Silvano
Wegner, Jan D.
Schindler, Konrad
Publication Year :
2018

Abstract

While CNNs naturally lend themselves to densely sampled data, and sophisticated implementations are available, they lack the ability to efficiently process sparse data. In this work we introduce a suite of tools that exploit sparsity in both the feature maps and the filter weights, and thereby allow for significantly lower memory footprints and computation times than the conventional dense framework when processing data with a high degree of sparsity. Our scheme provides (i) an efficient GPU implementation of a convolution layer based on direct, sparse convolution; (ii) a filter step within the convolution layer, which we call attention, that prevents fill-in, i.e., the tendency of convolution to rapidly decrease sparsity, and guarantees an upper bound on the computational resources; and (iii) an adaptation of the back-propagation algorithm, which makes it possible to combine our approach with standard learning frameworks, while still exploiting sparsity in the data and the model.

Comment: Updated to IJCV version
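
The abstract describes the approach only at a high level. Below is a minimal NumPy sketch of the underlying idea: a direct sparse convolution that touches only nonzero inputs and nonzero filter weights, followed by a pruning ("attention") step that keeps the output sparse and bounds the work of the next layer. This is an illustrative assumption, not the paper's GPU implementation; the function name, data layout, and the magnitude-based top-k selection are our own choices, and the paper's attention step need not be a fixed magnitude rule.

    import numpy as np

    def sparse_conv2d_topk(values, coords, shape, kernel, k):
        """Direct sparse 2D convolution followed by top-k pruning (illustrative sketch).

        values : (N,)   nonzero input values
        coords : (N, 2) their (row, col) positions
        shape  : (H, W) size of the dense feature map
        kernel : (kh, kw) dense filter; zero weights are skipped
        k      : number of output activations to keep (bounds fill-in)
        """
        H, W = shape
        kh, kw = kernel.shape
        out = {}
        # Scatter every nonzero input through every nonzero filter weight;
        # work scales with (nonzero inputs) x (nonzero weights), not H * W.
        for v, (r, c) in zip(values, coords):
            for dr in range(kh):
                for dc in range(kw):
                    w = kernel[dr, dc]
                    if w == 0.0:
                        continue
                    rr, cc = r + dr - kh // 2, c + dc - kw // 2
                    if 0 <= rr < H and 0 <= cc < W:
                        out[(rr, cc)] = out.get((rr, cc), 0.0) + v * w
        # "Attention" step (assumed magnitude rule): retain only the k
        # strongest responses so the output stays sparse for the next layer.
        kept = sorted(out.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
        new_coords = np.array([p for p, _ in kept], dtype=int)
        new_values = np.array([v for _, v in kept])
        return new_values, new_coords

    # Example: a 5x5 map with three nonzeros and a 3x3 averaging filter.
    vals = np.array([1.0, 2.0, -1.5])
    crds = np.array([[1, 1], [3, 3], [2, 4]])
    kern = np.ones((3, 3)) / 9.0
    v_out, c_out = sparse_conv2d_topk(vals, crds, (5, 5), kern, k=5)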

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1801.10585
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/s11263-020-01302-5