Recursive Binary Neural Network Training Model for Efficient Usage of On-Chip Memory.
- Source :
- IEEE Transactions on Circuits & Systems. Part I: Regular Papers. Jul 2019, Vol. 66, Issue 7, p2593-2605. 13p.
- Publication Year :
- 2019
Abstract
- We present a novel deep learning model for a neural network that reduces both computation and data storage overhead. To do so, the proposed model combines binary-weight neural network (BNN) training, a storage reuse technique, and an incremental training scheme. The storage requirement can be tuned to meet the desired classification accuracy, allowing more parameters to be kept in on-chip memory and thereby reducing off-chip data storage accesses. Our experiments show a 4–6× reduction in weight storage footprint when training binary deep neural network models. On an FPGA platform, the reduced number of off-chip accesses enables our model to train a neural network with 14× shorter latency than the conventional BNN training method. [ABSTRACT FROM AUTHOR]
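- For context, the sketch below illustrates the generic binary-weight training idea the abstract builds on (BinaryConnect-style weight binarization with a straight-through gradient), not the paper's recursive model or its storage-reuse scheme; the toy task, variable names such as latent_w, and all hyperparameters are illustrative assumptions.

  import numpy as np

  # Minimal sketch: a single linear layer trained on a toy regression task.
  # Real-valued "latent" weights are kept for updates, while the forward
  # pass uses their binarized (+1/-1) copy, as in generic BNN training.
  # This is NOT the paper's recursive storage-reuse method.

  rng = np.random.default_rng(0)

  def binarize(w):
      """Map real-valued weights to {-1, +1} via the sign function."""
      return np.where(w >= 0, 1.0, -1.0)

  # Toy data: targets generated by a hidden +/-1 weight vector (assumed).
  true_w = binarize(rng.standard_normal(8))
  X = rng.standard_normal((256, 8))
  y = X @ true_w

  latent_w = rng.standard_normal(8) * 0.1  # full-precision latent weights
  lr = 0.01

  for step in range(200):
      wb = binarize(latent_w)        # binary weights used in the forward pass
      pred = X @ wb
      err = pred - y                 # mean-squared-error residuals
      grad = X.T @ err / len(X)      # gradient w.r.t. the binary weights
      latent_w -= lr * grad          # straight-through: apply it to latent weights
      latent_w = np.clip(latent_w, -1, 1)  # keep latent weights bounded

  print("recovered signs match:", np.array_equal(binarize(latent_w), true_w))

- Only the 1-bit signs (plus any scaling factors) need to persist, which is the source of the weight-storage savings the abstract quantifies; the latent full-precision weights exist only during training updates.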
- Subjects :
- *DEEP learning
*ARTIFICIAL neural networks
*CLOUD storage
*MEMORY
Details
- Language :
- English
- ISSN :
- 1549-8328
- Volume :
- 66
- Issue :
- 7
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Circuits & Systems. Part I: Regular Papers
- Publication Type :
- Periodical
- Accession number :
- 137116445
- Full Text :
- https://doi.org/10.1109/TCSI.2019.2895216