Accelerating On-Chip Training with Ferroelectric-Based Hybrid Precision Synapse
- Authors
- Yandong Luo, Panni Wang, and Shimeng Yu
- Subjects
- Dynamic random access memory, static random access memory, synapses, field-effect transistors, energy consumption
- Abstract
In this article, we propose a hardware accelerator design using a ferroelectric field-effect transistor (FeFET)-based hybrid precision synapse (HPS) for deep neural network (DNN) on-chip training. The drain-erase scheme for FeFET programming is incorporated into both the FeFET HPS design and the FeFET buffer design. By using drain erase, high-density FeFET buffers can be integrated on-chip to store the intermediate input-output activations and gradients, which reduces energy-consuming off-chip DRAM accesses. Architectural evaluation results show that the energy efficiency could be improved by 1.2x ~ 2.1x and 3.9x ~ 6.0x compared to the other HPS-based designs and the emerging non-volatile memory baselines, respectively. The chip area is reduced by 19% ~ 36% compared with designs using an SRAM on-chip buffer, even though the capacity of the FeFET buffer is increased. In addition, by utilizing the drain-erase scheme for FeFET programming, the chip area is reduced by 11% ~ 28.5% compared with designs using the body-erase scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2021