
Exploring the Sparsity-Quantization Interplay on a Novel Hybrid SNN Event-Driven Architecture

Authors:
Aliyev, Ilkin
Lopez, Jesus
Adegbija, Tosiron
Source:
Design, Automation and Test in Europe Conference 2025
Publication Year:
2024

Abstract

Spiking Neural Networks (SNNs) offer potential advantages in energy efficiency but currently trail Artificial Neural Networks (ANNs) in versatility, largely due to challenges in efficient input encoding. Recent work shows that direct coding achieves superior accuracy with fewer timesteps than traditional rate coding. However, there is a lack of specialized hardware to fully exploit the potential of direct-coded SNNs, especially their mix of dense and sparse layers. This work proposes the first hybrid inference architecture for direct-coded SNNs. The proposed hardware architecture comprises a dense core that efficiently processes the input layer and sparse cores optimized for event-driven spiking convolutions. Furthermore, for the first time, we investigate and quantify the effect of quantization on sparsity. Our experiments on two variations of the VGG9 network, implemented on a Xilinx Virtex UltraScale+ FPGA (field-programmable gate array), reveal two novel findings. First, quantization increases network sparsity by up to 15.2% with minimal accuracy loss; combined with quantization's inherent low-power benefits, this yields a 3.4x energy improvement over the full-precision version. Second, direct coding outperforms rate coding, achieving 10% higher accuracy while consuming 26.4x less energy per image. Overall, our accelerator achieves 51x higher throughput and consumes half the power of prior work. Our accelerator code is available at: https://github.com/githubofaliyev/SNN-DSE/tree/DATE25
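To make the rate-versus-direct coding distinction concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation (their code is in the linked repository). The function names, the toy leaky integrate-and-fire step, and all parameter values are assumptions for illustration; the sketch only shows why direct coding feeds analog values to a dense first layer while later layers emit the sparse spike events that the sparse cores can exploit.

```python
import torch

def rate_code(img, timesteps):
    """Rate coding (illustrative): pixel intensity in [0, 1] is the Bernoulli
    firing probability at each timestep, so every layer, including the first,
    sees binary spikes."""
    probs = img.clamp(0, 1).unsqueeze(0).expand(timesteps, *img.shape)
    return torch.bernoulli(probs)

def direct_code(img, timesteps):
    """Direct coding (illustrative): the raw analog frame is repeated at every
    timestep and consumed by a dense first layer; spiking, and hence sparsity,
    only begins at that layer's output."""
    return img.unsqueeze(0).expand(timesteps, *img.shape)

def lif_layer(x_t, v, weight, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire step (an assumption, not the paper's
    hardware model): integrate weighted input, spike where the membrane
    potential crosses threshold, then hard-reset the spiking neurons."""
    v = leak * v + x_t @ weight
    spikes = (v >= threshold).float()
    v = v * (1.0 - spikes)  # reset neurons that fired
    return spikes, v

def spike_sparsity(spikes):
    """Fraction of zero entries -- the quantity an event-driven sparse core
    exploits by skipping work on absent spikes."""
    return 1.0 - spikes.mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    img = torch.rand(784)              # flattened toy "image"
    T = 4                              # few timesteps, as direct coding allows
    w = torch.randn(784, 128) * 0.05   # toy first-layer weights
    v = torch.zeros(128)
    dense_in = direct_code(img, T)     # analog input for the dense core
    for t in range(T):
        out, v = lif_layer(dense_in[t], v, w)
        print(f"t={t}: output sparsity = {spike_sparsity(out):.2f}")
```

In this framing, the paper's sparsity-quantization interplay can be read off the toy model: quantizing weights or membrane potentials pushes more updates below the firing threshold, so fewer spikes are emitted and the sparse cores skip more work, which is the effect the abstract quantifies at up to 15.2% added sparsity.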

Details

Database:
arXiv
Journal:
Design, Automation and Test in Europe Conference 2025
Publication Type:
Report
Accession Number:
edsarx.2411.15409
Document Type:
Working Paper