
ParaML: A Polyvalent Multicore Accelerator for Machine Learning.

Authors :
Zhou, Shengyuan
Guo, Qi
Du, Zidong
Liu, Daofu
Chen, Tianshi
Li, Ling
Liu, Shaoli
Zhou, Jinhong
Temam, Olivier
Feng, Xiaobing
Zhou, Xuehai
Chen, Yunji
Source :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems; Sep 2020, Vol. 39, Issue 9, p1764-1777, 14p
Publication Year :
2020

Abstract

In recent years, machine learning (ML) techniques have proven to be powerful tools in various emerging applications. Traditionally, ML techniques are processed on general-purpose CPUs and GPUs, but the energy efficiency of these platforms is limited by their excessive support for flexibility. Hardware accelerators are an efficient alternative to CPUs/GPUs, yet they remain limited because each typically accommodates only a single ML technique (or family of techniques). Different problems, however, may require different ML techniques, so such accelerators may achieve poor learning accuracy or even be ineffective. In this paper, we present ParaML, a polyvalent accelerator architecture integrated with multiple processing cores, which accommodates ten representative ML techniques: k-means, k-nearest neighbors (k-NN), naive Bayes (NB), support vector machine (SVM), linear regression (LR), classification tree (CT), deep neural network (DNN), learning vector quantization (LVQ), Parzen window (PW), and principal component analysis (PCA). Benefiting from our thorough analysis of the computational primitives and locality properties of different ML techniques, the single-core ParaML can perform up to 1056 GOP/s (e.g., additions and multiplications) in an area of 3.51 mm² while consuming only 596 mW, as estimated with ICC and PrimeTime PX on the post-synthesis netlist, respectively. Compared with the NVIDIA K20M GPU (28-nm process), the single-core ParaML (65-nm process) is 1.21× faster and reduces energy by 137.93×. We also compare the single-core ParaML with other accelerators. Compared with PRINS, the single-core ParaML achieves 72.09× and 2.57× energy benefits for k-NN and k-means, respectively, and speeds up each k-NN query by 44.76×. Compared with EIE, the single-core ParaML achieves a 5.02× speedup and a 4.97× energy benefit with 11.62× less area when evaluated on a dense DNN. Compared with the TPU, the single-core ParaML achieves 2.45× better power efficiency (5647 Gop/W versus 2300 Gop/W) with 321.36× less area. Compared with the single-core version, the 8-core ParaML further improves the speedup by up to 3.98× with an area of 13.44 mm² and a power of 2036 mW. [ABSTRACT FROM AUTHOR]
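As a quick sanity check on the power-efficiency comparison (using only the two throughput-per-watt figures quoted in the abstract; the ratio itself is not an additional claim from the paper), the reported 2.45× advantage over the TPU follows directly from

\[ \frac{5647~\text{Gop/W}}{2300~\text{Gop/W}} \approx 2.455, \]

which the authors report as 2.45×.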

Details

Language :
English
ISSN :
0278-0070
Volume :
39
Issue :
9
Database :
Complementary Index
Journal :
IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems
Publication Type :
Academic Journal
Accession number :
145287425
Full Text :
https://doi.org/10.1109/TCAD.2019.2927523