
The Hardware and Algorithm Co-Design for Energy-Efficient DNN Processor on Edge/Mobile Devices

Authors :
Hoi-Jun Yoo
Jinsu Lee
Sanghoon Kang
Dongjoo Shin
Donghyeon Han
Jinmook Lee
Source :
IEEE Transactions on Circuits and Systems I: Regular Papers. 67:3458-3470
Publication Year :
2020
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2020.

Abstract

Deep neural networks (DNNs) have been widely studied due to their high performance and applicability to tasks such as image classification, detection, segmentation, translation, and action recognition. Thanks to this broad applicability and high performance, DNNs are adopted on a wide range of AI platforms, from cloud servers to edge/mobile devices. However, high-performance DNNs require a large amount of computation and memory access, which makes it challenging to run DNN workloads on edge/mobile devices. Several approaches, spanning both algorithms and hardware, have been proposed to address these problems: algorithms designed to be hardware-friendly enable much more efficient execution of high-performance AI. This article provides an overview of recent hardware and algorithm co-design schemes that enable efficient processing of DNNs. Specifically, it covers algorithm optimization methods for the DNN structure, neurons, synapses, and data types, and it introduces optimization methods for hardware architectures, including the PE array, data-path control, and the microarchitecture of the PE. It also presents examples of ASICs in which the DNN algorithm and hardware are co-designed.
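
The data-type optimizations surveyed here typically reduce weights and activations from 32-bit floating point to low-bit integers to cut computation and memory traffic. As an illustration only (not taken from the paper), the following is a minimal sketch of symmetric post-training int8 weight quantization in NumPy; the function names and the 64x64 test tensor are arbitrary choices for the example.

import numpy as np

def quantize_int8(weights):
    # Symmetric linear quantization: map the largest magnitude to 127.
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float32 tensor from the int8 values.
    return q.astype(np.float32) * scale

# Usage: quantize random weights and report the reconstruction error.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
err = np.mean(np.abs(dequantize(q, s) - w))
print(f"mean absolute quantization error: {err:.6f}")

Storing q (int8) instead of w (float32) reduces weight memory by roughly 4x, at the cost of the small reconstruction error printed above; hardware co-designed for such data types can then use narrow integer multipliers in the PE array.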

Details

ISSN :
1558-0806 and 1549-8328
Volume :
67
Database :
OpenAIRE
Journal :
IEEE Transactions on Circuits and Systems I: Regular Papers
Accession number :
edsair.doi...........d2c3535c2bf8ef447d2a4e1c2f509ee2
Full Text :
https://doi.org/10.1109/tcsi.2020.3021397