
Memory Access Optimization for On-Chip Transfer Learning.

Authors :
Hussain, Muhammad Awais
Tsai, Tsung-Han
Source :
IEEE Transactions on Circuits & Systems. Part I: Regular Papers. Apr2021, Vol. 68 Issue 4, p1507-1519. 13p.
Publication Year :
2021

Abstract

Training a Deep Neural Network (DNN) at the edge faces the challenge of high energy consumption, driven by the large number of memory accesses required for gradient calculations. It is therefore necessary to minimize data fetches when training a DNN model on the edge. In this paper, a novel technique is proposed to reduce memory accesses during the training of fully connected layers in transfer learning. By analyzing the memory access patterns of the backpropagation phase in fully connected layers, the memory accesses can be optimized. We introduce a new weight-update method that defines a delta term for every node of the output and fully connected layers. The delta term reduces memory accesses for parameters that must be fetched repeatedly during the training of fully connected layers. The proposed technique shows 0.13x-13.93x energy savings for the training of fully connected layers of well-known DNN architectures on multiple processor architectures. The proposed technique can be used to perform on-chip transfer learning with reduced energy consumption and memory accesses. [ABSTRACT FROM AUTHOR]
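The abstract's central idea is that, in backpropagation through a fully connected layer, a per-node delta term can be computed once and then reused by every weight update that touches that node, so repeatedly needed values are fetched from memory only once. The following is a rough NumPy sketch of that delta-based backward pass, not the paper's hardware-level dataflow; the function and parameter names (fc_backward_delta, lr, the sigmoid activation) are illustrative assumptions:

import numpy as np

def sigmoid_grad(a):
    # Derivative of the sigmoid expressed via its output a = sigmoid(z).
    return a * (1.0 - a)

def fc_backward_delta(x, W, a, grad_a, lr=0.01):
    # Illustrative sketch only, not the authors' implementation.
    #   x      : layer input,          shape (n_in,)
    #   W      : weight matrix,        shape (n_in, n_out)
    #   a      : sigmoid activations,  shape (n_out,)
    #   grad_a : loss gradient dL/da,  shape (n_out,)
    # Returns the gradient with respect to x.

    # Compute the delta term once per output node. After this point,
    # neither grad_a nor a needs to be fetched again during the update.
    delta = grad_a * sigmoid_grad(a)          # shape (n_out,)

    # The gradient for the previous layer reuses the cached deltas
    # (computed before W is modified): dL/dx[i] = sum_j W[i, j] * delta[j]
    grad_x = W @ delta

    # Each weight gradient reuses the delta of its output node instead of
    # re-reading per-weight terms: dL/dW[i, j] = x[i] * delta[j]
    W -= lr * np.outer(x, delta)

    return grad_x

In such a scheme, delta would stay resident in registers or an on-chip buffer, so both the weight updates and the backward gradient read it without additional off-chip fetches, which is the access pattern the paper's energy savings target.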

Details

Language :
English
ISSN :
1549-8328
Volume :
68
Issue :
4
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems. Part I: Regular Papers
Publication Type :
Periodical
Accession number :
149121983
Full Text :
https://doi.org/10.1109/TCSI.2021.3055281