
DyNNamic: Dynamically Reshaping, High Data-Reuse Accelerator for Compact DNNs

Authors :
Hanson, Edward
Li, Shiyu
Qian, Xuehai
Li, Hai Helen
Chen, Yiran
Source :
IEEE Transactions on Computers; 2023, Vol. 72, Issue 3, pp. 880-892, 13p
Publication Year :
2023

Abstract

Convolutional layers dominate the computation and energy costs of Deep Neural Network (DNN) inference. Recent algorithmic works attempt to reduce these bottlenecks via compact DNN structures and model compression. Likewise, state-of-the-art accelerator designs leverage spatiotemporal characteristics of convolutional layers to reduce data movement overhead and improve throughput. Although both are independently effective at reducing latency and energy costs, combining these approaches does not guarantee cumulative improvements due to inefficient mapping. This inefficiency can be attributed to (1) inflexibility of underlying hardware and (2) inherent reduction of data-reuse opportunities of compact DNN structures. To address these issues, we propose a dynamically reshaping, high data-reuse PE array accelerator, namely DyNNamic. DyNNamic leverages kernel-wise filter decomposition to partition the convolution operation into two compact stages: Shared Kernels Convolution (SKC) and Weighted Accumulation (WA). Because both stages have vastly different dimensions, DyNNamic reshapes its PE array to effectively map the algorithm to the architecture. The architecture then exploits data-reuse opportunities created by the SKC stage, further reducing data movement with negligible overhead. We evaluate our approach on various representative networks and compare against state-of-the-art accelerators. 
On average, DyNNamic outperforms DianNao by 8.4× and 12.3× in terms of inference energy and latency, respectively.
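The kernel-wise decomposition described above rests on the linearity of convolution: if every filter is a weighted combination of a small set of shared basis kernels, the input can be convolved with each basis kernel once (the SKC stage), and each filter's output map recovered as a scalar-weighted sum of those partial maps (the WA stage). The NumPy sketch below is an illustration of that identity only, not the paper's implementation; all names and sizes (`basis`, `coef`, a 4-kernel basis, 6 filters) are assumptions chosen for the example.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel map with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))         # input feature map (assumed size)
basis = rng.standard_normal((4, 3, 3))  # K = 4 shared basis kernels
coef = rng.standard_normal((6, 4))      # F = 6 filters, each a mix of the basis

# Direct form: materialize each filter, then convolve F times.
filters = np.einsum('fk,khw->fhw', coef, basis)
direct = np.stack([conv2d(x, f) for f in filters])

# Two-stage form: SKC convolves the input with each basis kernel once (K convs);
# WA then mixes the K partial maps per filter with scalar weights.
skc = np.stack([conv2d(x, b) for b in basis])    # (K, H', W')
two_stage = np.einsum('fk,khw->fhw', coef, skc)  # (F, H', W')

assert np.allclose(direct, two_stage)
```

With F filters and a K-kernel basis (K < F), the two-stage form performs K convolutions instead of F, trading them for cheap weighted accumulations — the data-reuse opportunity DyNNamic's reshaping PE array targets.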

Details

Language :
English
ISSN :
0018-9340 and 1557-9956
Volume :
72
Issue :
3
Database :
Supplemental Index
Journal :
IEEE Transactions on Computers
Publication Type :
Periodical
Accession number :
ejs62299095
Full Text :
https://doi.org/10.1109/TC.2022.3184272