Mapping high-performance RNNs to in-memory neuromorphic chips
- Publication Year :
- 2019
Abstract
- The increasing need for compact, low-power computing solutions for machine learning applications has triggered significant interest in energy-efficient neuromorphic systems. However, most of these architectures rely on spiking neural networks, which typically achieve lower accuracy than their non-spiking counterparts. In this paper, we propose a new adaptive spiking neuron model that can be abstracted as a low-pass filter. This abstraction enables faster and better training of spiking networks with back-propagation, without simulating spikes. We show that this model dramatically improves the inference performance of a recurrent neural network and validate it on three complex spatio-temporal learning tasks: a temporal addition task, a temporal copying task, and a spoken-phrase recognition task. We estimate at least 500x higher energy efficiency for our models on compatible neuromorphic chips compared to the Cortex-M4, a popular embedded microprocessor.
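The core idea above — treating an adaptive spiking neuron as a low-pass filter so the network can be trained with standard back-propagation — can be illustrated with a minimal sketch. This is not the paper's implementation; the leaky-integration update, the parameter names (`tau`, `dt`), and the single-state formulation are illustrative assumptions, shown here only to convey what "abstracted as a low-pass filter" means in practice:

```python
import numpy as np

def low_pass_neuron(x, tau=10.0, dt=1.0):
    """Hypothetical sketch: a neuron whose output is a low-pass-filtered
    (leaky-integrated) version of its input signal, instead of a discrete
    spike train. Because this mapping is smooth and differentiable, a
    network built from such units can be trained with back-propagation
    without simulating spikes."""
    alpha = np.exp(-dt / tau)        # per-step decay factor of the filter
    y = np.empty_like(x, dtype=float)
    state = 0.0                      # filter state (membrane-like variable)
    for t, xt in enumerate(x):
        state = alpha * state + (1.0 - alpha) * xt  # leaky integration
        y[t] = state
    return y

# A constant input drives the output smoothly toward the input value
# (the filter has unit DC gain), rather than producing spike events.
out = low_pass_neuron(np.ones(200))
```

In a trained network, the filtered activation would stand in for the neuron's spike rate during learning, with spiking behavior recovered only when the model is deployed on neuromorphic hardware.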
- Subjects :
- Signal Processing (eess.SP)
- FOS: Computer and information sciences
- Emerging Technologies (cs.ET)
- Quantitative Biology::Neurons and Cognition
- FOS: Electrical engineering, electronic engineering, information engineering
- Computer Science - Neural and Evolutionary Computing
- Computer Science - Emerging Technologies
- Neural and Evolutionary Computing (cs.NE)
- Electrical Engineering and Systems Science - Signal Processing
Details
- Language :
- English
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....1201bafcafdf84675e6e3c05332b5e1b