
An Efficient Implementation of Convolutional Neural Network With CLIP-Q Quantization on FPGA.

Authors :
Cheng, Wei
Lin, Ing-Chao
Shih, Yun-Yang
Source :
IEEE Transactions on Circuits and Systems I: Regular Papers. Oct. 2022, Vol. 69, Issue 10, p. 4093-4102. 10 p.
Publication Year :
2022

Abstract

Convolutional neural networks (CNNs) have recently achieved tremendous success in the computer vision domain. The pursuit of better model accuracy drives up the model size, storage requirements, and computational complexity of CNNs. Compression Learning by In-Parallel Pruning-Quantization (CLIP-Q) was therefore proposed to greatly reduce weight storage requirements by using a few quantized segments to represent all the weights in a CNN layer. Among various quantization strategies, CLIP-Q is well suited to hardware accelerators because it reduces model size significantly while maintaining full-precision model accuracy. However, the original CLIP-Q approach does not consider hardware characteristics, and it is not straightforward to map onto a CNN hardware accelerator. In this work, we propose a software-hardware co-design platform that includes a modified CLIP-Q algorithm and a hardware accelerator consisting of 5×5 reconfigurable convolutional arrays with input- and output-channel parallelization. Additionally, the proposed CNN accelerator maintains the same accuracy as a full-precision CNN on the CIFAR-10 and CIFAR-100 datasets. [ABSTRACT FROM AUTHOR]
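The core idea the abstract refers to, representing all the weights of a layer with a few shared quantized values (segments), can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' CLIP-Q algorithm: it omits the in-parallel retraining that CLIP-Q performs, and the pruning ratio, segment count, and use of the segment mean as the shared value are hypothetical choices made only for demonstration.

```python
import numpy as np

def quantize_layer_weights(weights, num_segments=4, prune_ratio=0.5):
    """Illustrative segment-based pruning + quantization (not the paper's exact method).

    The smallest-magnitude weights are pruned to zero; the surviving weights are
    partitioned into `num_segments` equal-population segments by value, and every
    weight in a segment is replaced by the segment mean. The layer then only needs
    a tiny codebook of shared values plus a per-weight segment index.
    """
    flat = weights.ravel()
    quantized = np.zeros_like(flat)

    # Prune: zero out the smallest-magnitude weights (hypothetical ratio).
    keep = np.argsort(np.abs(flat))[int(prune_ratio * flat.size):]
    kept_vals = flat[keep]

    # Partition surviving weights into equal-population segments by value.
    order = np.argsort(kept_vals)
    for seg in np.array_split(order, num_segments):
        idx = keep[seg]
        quantized[idx] = flat[idx].mean()  # one shared value per segment

    return quantized.reshape(weights.shape)

# Example: a 3x3 kernel's weights collapse to zero plus a few shared values.
w = np.random.randn(3, 3).astype(np.float32)
print(np.unique(quantize_layer_weights(w, num_segments=2, prune_ratio=0.3)))
```

Under this sketch, storage drops because each weight is encoded by a short segment index instead of a full-precision value, which is also what makes the scheme attractive for an FPGA accelerator.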

Details

Language :
English
ISSN :
1549-8328
Volume :
69
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits and Systems I: Regular Papers
Publication Type :
Periodical
Accession number :
160688649
Full Text :
https://doi.org/10.1109/TCSI.2022.3193031