JBNN: A Hardware Design for Binarized Neural Networks Using Single-Flux-Quantum Circuits.
- Author: Fu, Rongliang; Huang, Junying; Wu, Haibin; Ye, Xiaochun; Fan, Dongrui; Ho, Tsung-Yi
- Subjects: INDUSTRIAL capacity; JOSEPHSON junctions; COLUMNS; SUPERCONDUCTIVITY; HARDWARE; SUPERCONDUCTING circuits
- Abstract
As a high-performance application of low-temperature superconductivity, superconducting single-flux-quantum (SFQ) circuits offer high speed and low power consumption and have recently received extensive attention, especially for neural network inference acceleration. Despite these promising advantages, they are still limited by storage capacity and manufacturing reliability, which makes them ill-suited to feedback loops and very large-scale circuits. The Binarized Neural Network (BNN), with minimal memory requirements and no reliance on multiplication, is undoubtedly an attractive candidate for implementing inference hardware with SFQ circuits. This work presents the first SFQ-based BNN inference accelerator, JBNN, together with a new representation for binarizing weights and activation variables. Because every SFQ gate is essentially a pipeline stage, conventional accumulator designs are unsuitable for SFQ circuits; an SFQ-based accumulative parallel counter is therefore built from SFQ logic cells (T1, OR, and AND) to realize the accumulation, where the data size is reduced to a quarter after passing through the XNOR column and the AU layer, substantially reducing the hardware cost. Our evaluation shows that the proposed design outperforms a cryogenic CMOS-based BNN accelerator running at 77 K by 70.92 times while maintaining 97.89% accuracy on the MNIST benchmark dataset. Excluding the cooling cost, the power efficiency improves by up to 929.18 times. [ABSTRACT FROM AUTHOR]
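The abstract relies on the standard BNN trick of replacing multiplications by XNOR operations followed by a population count, which JBNN realizes in hardware with its accumulative parallel counter. The sketch below illustrates only that XNOR-popcount principle in software, not the SFQ circuit or the paper's specific bit-level representation; the function names and array sizes are illustrative assumptions.

```python
import numpy as np

def binarize(x):
    """Map real values to {+1, -1} by sign (zero treated as +1), as in a typical BNN."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot(w, a):
    """Binarized dot product without multiplication.

    With w, a in {+1, -1}, each product w_i * a_i is +1 when the two signs
    agree and -1 otherwise, i.e. an XNOR on the bit encoding (+1 -> 1, -1 -> 0).
    The dot product is then 2 * popcount(XNOR) - n, so only bitwise logic and
    counting are needed (the counting is what the SFQ accumulative parallel
    counter performs in hardware).
    """
    wb = w > 0                    # encode +1 as True, -1 as False
    ab = a > 0
    xnor = ~(wb ^ ab)             # True where the signs agree
    popcount = np.count_nonzero(xnor)
    return 2 * popcount - len(w)

# Sanity check against the ordinary dot product on random binarized vectors.
rng = np.random.default_rng(0)
w = binarize(rng.standard_normal(64))
a = binarize(rng.standard_normal(64))
assert bnn_dot(w, a) == int(np.dot(w, a))
```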
- Published: 2022