
End-to-end 100-TOPS/W Inference With Analog In-Memory Computing: Are We There Yet?

Authors :
Ottavi, Gianmarco
Karunaratne, Geethan
Conti, Francesco
Boybat, Irem
Benini, Luca
Rossi, Davide
Source :
2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Publication Year :
2021

Abstract

In-Memory Acceleration (IMA) promises major efficiency improvements in deep neural network (DNN) inference, but challenges remain in the integration of IMA within a digital system. We propose a heterogeneous architecture coupling 8 RISC-V cores with an IMA in a shared-memory cluster, analyzing the benefits and trade-offs of in-memory computing on the realistic use case of a MobileNetV2 bottleneck layer. We explore several IMA integration strategies, analyzing performance, area, and energy efficiency. We show that while pointwise layers achieve significant speed-ups over the software implementation, on depthwise layers the inability to efficiently map parameters onto the accelerator leads to a significant trade-off between throughput and area. We propose a hybrid solution where pointwise convolutions are executed on the IMA and depthwise convolutions on the cluster cores, achieving a speed-up of 3x over SW execution while saving 50% of area compared to an all-IMA solution with similar performance.
Comment: 4 pages, 6 figures, conference
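
To illustrate the layer split described in the abstract, the sketch below models a MobileNetV2 bottleneck block in plain NumPy. It is not the paper's implementation: the function names, tensor shapes, and channel sizes are illustrative assumptions. It only shows why the pointwise (1x1) stages reduce to a single stationary-weight matrix product (the shape an analog crossbar holds), whereas the 3x3 depthwise stage has one small per-channel filter and no large shared matrix, which is the motivation for keeping it on the cluster cores in the hybrid mapping.

```python
# Illustrative sketch only -- not the paper's implementation.
# Pointwise (1x1) convolutions collapse to one matrix product ("IMA-friendly");
# the depthwise 3x3 stage is computed per channel in software ("core-friendly").
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution as a plain matrix product.

    x: (H, W, C_in) activations
    w: (C_in, C_out) weights -- the matrix an IMA crossbar would hold
       stationary, one column per output channel (assumed mapping).
    """
    h, wd, c_in = x.shape
    return (x.reshape(-1, c_in) @ w).reshape(h, wd, -1)

def depthwise_conv3x3(x, w):
    """3x3 depthwise convolution (stride 1, zero padding).

    x: (H, W, C) activations; w: (3, 3, C), one 3x3 filter per channel.
    Each channel has its own tiny filter, so there is no large shared
    weight matrix to pin into a crossbar.
    """
    h, wd, c = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(xp[i:i+3, j:j+3] * w, axis=(0, 1))
    return out

def bottleneck(x, w_expand, w_dw, w_project):
    """MobileNetV2 bottleneck: expand (1x1) -> depthwise 3x3 -> project (1x1)."""
    y = np.maximum(pointwise_conv(x, w_expand), 0)   # pointwise stage ("IMA")
    y = np.maximum(depthwise_conv3x3(y, w_dw), 0)    # depthwise stage ("cores")
    return pointwise_conv(y, w_project)              # pointwise stage ("IMA")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((16, 16, 24))            # toy spatial/channel sizes
    y = bottleneck(x,
                   rng.standard_normal((24, 144)),   # expansion factor 6 (toy)
                   rng.standard_normal((3, 3, 144)),
                   rng.standard_normal((144, 24)))
    print(y.shape)  # (16, 16, 24)
```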

Details

Database :
arXiv
Journal :
2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Publication Type :
Report
Accession number :
edsarx.2109.01404
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/AICAS51828.2021.9458409