Benchmark Non-volatile and Volatile Memory Based Hybrid Precision Synapses for In-situ Deep Neural Network Training
- Source : ASP-DAC
- Publication Year : 2020
- Publisher : IEEE
Abstract
- Compute-in-memory (CIM) with emerging non-volatile memories (eNVMs) is time- and energy-efficient for deep neural network (DNN) inference. However, in-situ DNN training with eNVMs remains challenging due to asymmetric weight update behavior, high programming latency, and high energy consumption. To overcome these challenges, a hybrid precision synapse combining eNVMs with a capacitor has been proposed: it leverages the symmetric, fast weight update of the volatile capacitor as well as the non-volatility and large dynamic range of the eNVMs. In this paper, an in-situ DNN training architecture with hybrid precision synapses is proposed and benchmarked with the modified NeuroSim simulator. First, all circuit modules required for in-situ training with hybrid precision synapses are designed. Then, the impact of the weight transfer interval and limited capacitor retention time on training accuracy is investigated by incorporating hardware properties into a TensorFlow simulation. Finally, a system-level benchmark is conducted for the hybrid precision synapse against a baseline design based solely on eNVMs.
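The abstract's mechanism (fast symmetric updates in a volatile capacitor, periodic quantized transfer into eNVM, and retention-limited capacitor charge) can be illustrated with a minimal behavioral sketch. This is not the paper's NeuroSim or TensorFlow code; all function names, the exponential retention model, and parameters such as `tau`, `levels`, and `w_max` are illustrative assumptions.

```python
import numpy as np

def train_step_hybrid(w_envm, w_cap, grad, lr=0.01):
    """Apply a gradient step to the capacitor part of the synapse.

    The capacitor supports fast, symmetric updates, so the raw step
    accumulates there; the eNVM part stays fixed between transfers.
    (Parameter values are illustrative, not from the paper.)
    """
    return w_envm, w_cap - lr * grad

def leak(w_cap, steps, tau=1000.0):
    """Model limited capacitor retention as exponential decay toward
    zero over `steps` training steps; `tau` is a hypothetical time
    constant in units of steps."""
    return w_cap * np.exp(-steps / tau)

def transfer(w_envm, w_cap, levels=32, w_max=1.0):
    """Fold the capacitor charge into the eNVM at a transfer interval.

    The eNVM offers a large dynamic range but coarse conductance
    levels, so the merged weight is clipped and rounded to the nearest
    level; the capacitor is reset after the transfer.
    """
    step = 2 * w_max / (levels - 1)
    merged = np.clip(w_envm + w_cap, -w_max, w_max)
    return np.round(merged / step) * step, np.zeros_like(w_cap)
```

A longer transfer interval lets more capacitor charge leak away before it is committed, while a shorter one incurs more frequent (slow, energy-hungry) eNVM programming; the effective weight at any time is `w_envm + w_cap`.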
- Subjects :
- Artificial neural network
- Computer science
- Inference
- Energy consumption
- Computer hardware & architecture
- Capacitor
- Computer engineering
- Weight transfer
- Electrical engineering, electronic engineering, information engineering
- Latency (engineering)
- Efficient energy use
- Volatile memory
Details
- Database : OpenAIRE
- Journal : 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)
- Accession number : edsair.doi...........88743101500fbcb971dee4b979d667a4