
24-GOPS 4.5-mm² Digital Cellular Neural Network for Rapid Visual Attention in an Object-Recognition SoC.

Authors :
Lee, Seungjin
Kim, Minsu
Kim, Kwanho
Kim, Joo-Young
Yoo, Hoi-Jun
Source :
IEEE Transactions on Neural Networks. 01/01/2011, Vol. 22 Issue 1, p64-73. 10p.
Publication Year :
2011

Abstract

This paper presents the Visual Attention Engine (VAE), a digital cellular neural network (CNN) that executes the visual attention (VA) algorithm to speed up object recognition. The proposed time-multiplexed processing element (TMPE) CNN topology achieves high performance and small area by integrating 4800 (80 × 60) cells and 120 PEs. Pipelined operation of the PEs and single-cycle global shift capability of the cells result in a high PE utilization ratio of 93%. The cells are implemented with 6T static random access memory-based register files and dynamic shift registers to achieve a small area of 4.5 mm². The bus connections between PEs and cells are optimized to minimize power consumption. The VAE is integrated within an object-recognition system-on-chip (SoC) fabricated in a 0.13-μm complementary metal-oxide-semiconductor process. It achieves 24 GOPS peak performance and 22 GOPS sustained performance at 200 MHz, enabling one CNN iteration on an 80 × 60 pixel image to be completed in just 4.3 μs. With VA enabled by the VAE, the workload of the object-recognition SoC is significantly reduced, resulting in an 83% higher frame rate while consuming 45% less energy per frame without degrading recognition accuracy. [ABSTRACT FROM AUTHOR]
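The time-multiplexed scheme described in the abstract (120 PEs servicing 4800 cells, i.e., 40 cells per PE per iteration) can be sketched as a behavioral model. This is only an illustrative sketch, not the paper's hardware: the discrete-time update rule, the `A`/`B` template convention, and the raster scheduling order are assumptions based on the standard cellular neural network formulation, and the template values below are placeholders.

```python
import numpy as np

H, W = 60, 80        # 80 x 60 cell array from the paper
N_PE = 120           # processing elements; 4800 cells / 120 PEs = 40 cells per PE

def f(x):
    # Standard CNN piecewise-linear output function.
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_iteration(x, u, A, B, z):
    """One discrete-time CNN iteration over the whole cell array.

    Cells are visited in groups of N_PE to mimic time-multiplexed PEs:
    on each pass the 120 PEs update 120 cells, so 40 passes cover all
    4800 cells. Functionally this equals a full-array update; only the
    schedule reflects the time-multiplexing idea.
    """
    y = f(x)
    yp = np.pad(y, 1)            # zero-padded neighbor outputs
    up = np.pad(u, 1)            # zero-padded inputs
    x_new = np.empty_like(x)
    cells = [(r, c) for r in range(H) for c in range(W)]
    for start in range(0, H * W, N_PE):          # 40 multiplexed passes
        for r, c in cells[start:start + N_PE]:   # one cell per PE
            nbr_y = yp[r:r + 3, c:c + 3]         # 3x3 neighborhood
            nbr_u = up[r:r + 3, c:c + 3]
            x_new[r, c] = np.sum(A * nbr_y) + np.sum(B * nbr_u) + z
    return x_new
```

With placeholder templates (`A = 0`, `B` passing the center input through, `z = 0`), one iteration simply copies the input image into the cell states, which makes the multiplexed schedule easy to check against the expected full-array result.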

Details

Language :
English
ISSN :
1045-9227
Volume :
22
Issue :
1
Database :
Academic Search Index
Journal :
IEEE Transactions on Neural Networks
Publication Type :
Academic Journal
Accession number :
57254354
Full Text :
https://doi.org/10.1109/TNN.2010.2085443