
SparseNN: A Performance-Efficient Accelerator for Large-Scale Sparse Neural Networks.

Authors :
Lu, Yuntao
Wang, Chao
Gong, Lei
Zhou, Xuehai
Source :
International Journal of Parallel Programming; Aug2018, Vol. 46 Issue 4, p648-659, 12p
Publication Year :
2018

Abstract

Neural networks are widely used as a powerful representation in various research domains, such as computer vision, natural language processing, and artificial intelligence. To improve application accuracy, the growing numbers of neurons and synapses make neural networks both computationally and memory intensive, and therefore difficult to deploy on resource-limited platforms. Sparse methods can remove redundant neurons and synapses, but conventional accelerators cannot benefit from the resulting sparsity. In this paper, we propose an efficient accelerating method for sparse neural networks, which compresses synapse weights and processes the compressed structure on an FPGA accelerator. Our method achieves compression ratios of 40% and 20% for synapse weights in convolutional and fully-connected layers, respectively. The experimental results demonstrate that our accelerating method boosts an FPGA accelerator to a 3× speedup over a conventional one. [ABSTRACT FROM AUTHOR]
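The abstract describes compressing synapse weights so that a hardware accelerator only stores and processes non-zero values. The paper's exact compressed format is not given here, so the following is a minimal illustrative sketch using a standard CSR-style (compressed sparse row) layout for a pruned fully-connected layer; the function names and format are assumptions, not the authors' implementation.

```python
def compress_csr(weights):
    """Compress a dense weight matrix (list of rows) into CSR-style arrays.

    Illustrative sketch only: stores just the non-zero (surviving) synapse
    weights, their column indices, and per-row offsets into those arrays.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in weights:
        for j, w in enumerate(row):
            if w != 0.0:          # keep only non-pruned synapses
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))  # running count marks end of this row
    return values, col_idx, row_ptr


def sparse_fc_forward(values, col_idx, row_ptr, x):
    """Fully-connected forward pass that touches only stored non-zeros,
    skipping all the multiply-accumulates a dense accelerator would waste
    on zero weights."""
    y = []
    for i in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

A dense 2-of-6 weight matrix like `[[0, 2], [3, 0], [0, 0]]` compresses to `values=[2, 3]`, and the forward pass reproduces the dense result while performing only two multiplications, which is the kind of redundancy elimination the accelerator exploits.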

Details

Language :
English
ISSN :
08857458
Volume :
46
Issue :
4
Database :
Complementary Index
Journal :
International Journal of Parallel Programming
Publication Type :
Academic Journal
Accession number :
131277872
Full Text :
https://doi.org/10.1007/s10766-017-0528-8