Connection Pruning for Deep Spiking Neural Networks with On-Chip Learning
- Publication Year :
- 2020
Abstract
- Long training time hinders deep, large-scale Spiking Neural Networks (SNNs) with on-chip learning capability from being realized on embedded systems hardware. Our work proposes a novel connection pruning approach that can be applied during on-chip Spike Timing Dependent Plasticity (STDP)-based learning to optimize both the learning time and the network connectivity of the deep SNN. We applied our approach to a deep SNN with Time To First Spike (TTFS) coding and achieved a 2.1x speed-up and 64% energy savings in on-chip learning, as well as a 92.83% reduction in network connectivity, without incurring any accuracy loss. Moreover, the connectivity reduction results in a 2.83x speed-up and 78.24% energy savings in inference. Evaluation of our proposed approach on a Field Programmable Gate Array (FPGA) platform revealed that only a 0.56% power overhead was needed to implement the pruning algorithm.
Comment
- 8 pages, 9 figures. This paper has been accepted for publication in the International Conference on Neuromorphic Systems (ICONS) 2021.
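The record above does not describe the paper's actual pruning criterion. Purely as a rough illustration, the sketch below shows one common way connection pruning can be interleaved with pair-based STDP learning: synapses whose weights fall below a threshold are masked out and excluded from further updates. All names, parameter values, and the magnitude-based criterion (A_PLUS, A_MINUS, TAU, PRUNE_THRESHOLD, stdp_update, prune) are assumptions for illustration, not the authors' method.

```python
import numpy as np

# Hypothetical parameters; the paper's actual values and criterion are not given here.
A_PLUS, A_MINUS = 0.01, 0.012   # STDP potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant (ms)
PRUNE_THRESHOLD = 0.05          # weights below this magnitude get pruned

def stdp_update(w, mask, dt):
    """Apply a pair-based STDP update to active (unpruned) connections.

    w    : weight matrix, shape (pre, post)
    mask : boolean matrix, False where a connection has been pruned
    dt   : t_post - t_pre spike-time differences, same shape as w (ms)
    """
    dw = np.where(dt >= 0,
                  A_PLUS * np.exp(-dt / TAU),    # pre fires before post: potentiate
                  -A_MINUS * np.exp(dt / TAU))   # post fires before pre: depress
    w += dw * mask                               # pruned synapses stay frozen at zero
    return np.clip(w, 0.0, 1.0)

def prune(w, mask):
    """Permanently remove connections whose weight fell below the threshold."""
    mask &= np.abs(w) >= PRUNE_THRESHOLD
    w *= mask
    return w, mask

# Toy usage: 4 pre-neurons fully connected to 3 post-neurons.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=(4, 3))
mask = np.ones_like(w, dtype=bool)
dt = rng.uniform(-40.0, 40.0, size=(4, 3))      # spike-time differences (ms)
w = stdp_update(w, mask, dt)
w, mask = prune(w, mask)
```

In a scheme like this, pruned synapses no longer participate in weight updates or spike propagation, which is the kind of connectivity reduction the abstract credits for the reported speed-up and energy savings during both learning and inference.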
Details
- Database :
- OAIster
- Publication Type :
- Electronic Resource
- Accession number :
- edsoai.on1228437429
- Document Type :
- Electronic Resource
- Full Text :
- https://doi.org/10.1145/3477145.3477157