103 results for "spike timing dependent plasticity"
Search Results
2. Consciousness driven Spike Timing Dependent Plasticity
- Author
-
Yadav, Sushant, Chaudhary, Santosh, Kumar, Rajesh, and Nkomozepi, Pilani
- Published
- 2025
- Full Text
- View/download PDF
3. A spiking binary neuron — detector of causal links
- Author
-
Kiselev, Mikhail V., Larionov, Denis Aleksandrovich, and Urusov, Andrey M.
- Subjects
spiking neural network, binary neuron, spike timing dependent plasticity, dopamine-modulated plasticity, anti-hebbian plasticity, reinforcement learning, neuromorphic hardware, Physics, QC1-999
- Abstract
Purpose. Causal relationship recognition is a fundamental operation in neural networks aimed at learning behavior, action planning, and inferring external world dynamics. This operation is particularly crucial for reinforcement learning (RL). In the context of spiking neural networks (SNNs), events are represented as spikes emitted by network neurons or input nodes. Detecting causal relationships within these events is essential for effective RL implementation. Methods. This research paper presents a novel approach to realize causal relationship recognition using a simple spiking binary neuron. The proposed method leverages specially designed synaptic plasticity rules, which are both straightforward and efficient. Notably, our approach accounts for the temporal aspects of detected causal links and accommodates the representation of spiking signals as single spikes or tight spike sequences (bursts), as observed in biological brains. Furthermore, this study places a strong emphasis on the hardware-friendliness of the proposed models, ensuring their efficient implementation on modern and future neuroprocessors. Results. Compared with precise machine learning techniques, such as decision tree algorithms and convolutional neural networks, our neuron demonstrates satisfactory accuracy despite its simplicity. Conclusion. We introduce a multi-neuron structure capable of operating in more complex environments with enhanced accuracy, making it a promising candidate for the advancement of RL applications in SNNs.
- Published
- 2024
- Full Text
- View/download PDF
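The entry above casts causal-link detection as a synaptic plasticity problem. As a generic illustration only (a minimal pair-based STDP rule with made-up constants, not the rule proposed in the paper), the Python sketch below strengthens a synapse whose input spikes reliably precede the neuron's output spikes and weakens it otherwise:

```python
import numpy as np

def pair_based_stdp(pre_spikes, post_spikes, dt=1.0, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w0=0.5):
    """Replay two binary spike trains through a toy pair-based STDP rule."""
    w = w0
    x_pre, x_post = 0.0, 0.0              # exponentially decaying spike traces
    for pre, post in zip(pre_spikes, post_spikes):
        x_pre -= dt / tau_plus * x_pre    # trace decay
        x_post -= dt / tau_minus * x_post
        if pre:
            x_pre += 1.0
            w -= a_minus * x_post         # post-before-pre pairing -> depression
        if post:
            x_post += 1.0
            w += a_plus * x_pre           # pre-before-post pairing -> potentiation
        w = min(max(w, 0.0), 1.0)         # keep the weight bounded
    return w

rng = np.random.default_rng(0)
post = (rng.random(10_000) < 0.02).astype(int)   # postsynaptic spikes, 1 ms bins
cause = np.roll(post, -5)                        # an input that fires 5 ms before each post spike
noise = (rng.random(10_000) < 0.02).astype(int)  # an uncorrelated input
print(pair_based_stdp(cause, post))              # drifts up: consistently causal pairing
print(pair_based_stdp(noise, post))              # stays near or below the starting value
```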
4. Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware.
- Author
-
Vallejo-Mancero, Bernardo, Madrenas, Jordi, and Zapata, Mireya
- Subjects
ARTIFICIAL neural networks, PROCESS capability, DATABASES, PARALLEL processing, NEUROPLASTICITY
- Abstract
Recent advancements in neuromorphic computing have led to the development of hardware architectures inspired by Spiking Neural Networks (SNNs) to emulate the efficiency and parallel processing capabilities of the human brain. This work focuses on testing the HEENS architecture, specifically designed for high parallel processing and biological realism in SNN emulation, implemented on a ZYNQ family FPGA. The study applies this architecture to the classification of digits using the well-known MNIST database. The image resolutions were adjusted to match HEENS' processing capacity. Results were compared with existing work, demonstrating HEENS' performance comparable to other solutions. This study highlights the importance of balancing accuracy and efficiency in the execution of applications. HEENS offers a flexible solution for SNN emulation, allowing for the implementation of programmable neural and synaptic models. It encourages the exploration of novel algorithms and network architectures, providing an alternative for real-time processing with efficient energy consumption.
- Published
- 2024
- Full Text
- View/download PDF
5. Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware
- Author
-
Bernardo Vallejo-Mancero, Jordi Madrenas, and Mireya Zapata
- Subjects
HEENS, neuromorphic hardware, spiking neural network, LIF model, Spike Timing Dependent Plasticity, MNIST dataset, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Recent advancements in neuromorphic computing have led to the development of hardware architectures inspired by Spiking Neural Networks (SNNs) to emulate the efficiency and parallel processing capabilities of the human brain. This work focuses on testing the HEENS architecture, specifically designed for high parallel processing and biological realism in SNN emulation, implemented on a ZYNQ family FPGA. The study applies this architecture to the classification of digits using the well-known MNIST database. The image resolutions were adjusted to match HEENS' processing capacity. Results were compared with existing work, demonstrating HEENS' performance comparable to other solutions. This study highlights the importance of balancing accuracy and efficiency in the execution of applications. HEENS offers a flexible solution for SNN emulation, allowing for the implementation of programmable neural and synaptic models. It encourages the exploration of novel algorithms and network architectures, providing an alternative for real-time processing with efficient energy consumption.
- Published
- 2024
- Full Text
- View/download PDF
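Entries 4 and 5 describe the same work: LIF neurons with STDP synapses emulated on the HEENS architecture for MNIST classification. For orientation only, here is a generic discrete-time leaky integrate-and-fire update in Python; the parameters and units are illustrative and unrelated to the HEENS implementation:

```python
import numpy as np

def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """One Euler step of a leaky integrate-and-fire neuron (vectorized over neurons)."""
    v = v + dt / tau_m * (-(v - v_rest) + r_m * i_syn)   # leak toward rest plus input drive
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)                     # reset the neurons that fired
    return v, spiked

# Drive three neurons with constant currents of increasing strength for 100 ms.
v = np.zeros(3)
currents = np.array([0.8, 1.2, 2.0])
spike_counts = np.zeros(3, dtype=int)
for _ in range(100):
    v, spiked = lif_step(v, currents)
    spike_counts += spiked
print(spike_counts)   # sub-threshold input never fires; stronger input fires more often
```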
6. Theoretical investigations into principles of topographic map formation and applications
- Author
-
Gale, Nicholas, Eglen, Stephen, and Franze, Kristian
- Subjects
Chemotaxis, Data science, Dynamical Systems, EphA3, GPU acceleration, Neural Development, Retinotopy, Spike timing dependent plasticity, Superior Colliculus, Travelling Salesman Problem
- Abstract
Topographic maps are ubiquitous brain structures that are fundamental to sensory and higher order systems and are composed of connections between two regions obeying the relationship: physically neighbouring cells in a pre-synaptic region connect to physically neighbouring cells in the post-synaptic region. The developmental principles driving topographic map formation are usually studied within the context of genetic perturbations coupled to high resolution measurements and for these the mouse retinotopic map from retina to superior colliculus has emerged as a useful experimental context. Modelling coupled with genetic perturbation experiments has revealed three key developmental mechanisms: neural activity, chemotaxis, and competition. Some principal challenges in modelling this development include explaining the role of the spatio-temporal structure of patterned neural activity, determining the relative interaction between developmental components, and developing models that are sufficiently computationally efficient that statistical methodologies can be applied to them. Neural activity is a well measured component of retinotopic development and several independent measurement techniques have recorded the existence of spatiotemporally patterned waves at key critical points during development. Existing modelling methodologies reduce this rich spatiotemporal context into a distance dependent correlation function and have subsequently had challenges making quantitative predictions about the effect of manipulating these activity patterns. A neural field theory approach is used to develop mathematical theory which can incorporate these spatiotemporal structures. Bayesian MCMC regression analysis is performed on biological measurements to assess the accuracy of the model and make predictions about the time-scale on which activity operates. This time scale is tuned to the length of an average wave pattern suggesting the system is integrating all information in these waves. The interaction between chemotaxis and neural activity has historically been thought of as linearly independent. A recent study which perturbs both developmental mechanisms simultaneously has suggested that these two are highly stochastic and regular development depends on a critical fine-tuned balance between the two: the heterozygous phenotype was observed to present as both a wild-type and homozygote for different specimens. This hypothesis is tested against the data-set used to generate it. Recreating the entire experimental pipeline in silico with the most parsimonious existing model is able to account for the data without the need to appeal to stochasticity in the mechanisms. A statistical analysis demonstrates that the heterozygous state does not significantly overlap with the heterozygotes and that the stochasticity is likely due to the measurement technique. The existing models are computationally demanding; at least O(n³) in the number of retinal cells instantiated by the model. This computational demand renders these classes of models incapable of performing statistical regression and means that their parameter spaces are largely unexplored. A modelling framework which integrates the core operating mechanisms of the model is developed and when implemented on modern GPU computational architectures is able to achieve a near-linear time complexity scaling. This model is demonstrated to capture the explanatory power of existing modelling methodologies.
Finally, the role of competition is explored in a dimensional reduction framework: the Elastic Net. The Elastic Net has been used both as a heuristic optimiser (validated on the NP-complete Travelling Salesman Problem) and to explain the development of cortical feature maps. The addition of competition is demonstrated to act as a counter-measure to the retinotopic distorting components of the Elastic Net as a cortical map generator. Further analysis demonstrates that competition substantially improves heuristic performance on the Travelling Salesman Problem making it competitive against state of the art solvers when performance is normalised by solution times. The heuristic converges on a length scaling law that is discussed in the context of the wire-minimisation problem.
- Published
- 2022
- Full Text
- View/download PDF
7. Toward Learning in Neuromorphic Circuits Based on Quantum Phase Slip Junctions
- Author
-
Cheng, Ran, Goteti, Uday S, Walker, Harrison, Krause, Keith M, Oeding, Luke, and Hamilton, Michael C
- Subjects
Biological Psychology, Biomedical and Clinical Sciences, Neurosciences, Psychology, Affordable and Clean Energy, quantum phase slip junction, Josephson junction, neuromorphic computing, spike timing dependent plasticity, unsupervised learning, coupled synapse networks, Cognitive Sciences, Biological psychology
- Abstract
We explore the use of superconducting quantum phase slip junctions (QPSJs), an electromagnetic dual to Josephson Junctions (JJs), in neuromorphic circuits. These small circuits could serve as the building blocks of neuromorphic circuits for machine learning applications because they exhibit desirable properties such as inherent ultra-low energy per operation, high speed, dense integration, negligible loss, and natural spiking responses. In addition, they have a relatively straight-forward micro/nano fabrication, which shows promise for implementation of an enormous number of lossless interconnections that are required to realize complex neuromorphic systems. We simulate QPSJ-only, as well as hybrid QPSJ + JJ circuits for application in neuromorphic circuits including artificial synapses and neurons, as well as fan-in and fan-out circuits. We also design and simulate learning circuits, where a simplified spike timing dependent plasticity rule is realized to provide potential learning mechanisms. We also take an alternative approach, which shows potential to overcome some of the expected challenges of QPSJ-based neuromorphic circuits, via QPSJ-based charge islands coupled together to generate non-linear charge dynamics that result in a large number of programmable weights or non-volatile memory states. Notably, we show that these weights are a function of the timing and frequency of the input spiking signals and can be programmed using a small number of DC voltage bias signals, therefore exhibiting spike-timing and rate dependent plasticity, which are mechanisms to realize learning in neuromorphic circuits.
- Published
- 2021
8. TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing: from synaptic plasticity to stochastic resonance.
- Author
-
Maldonado, David, Cantudo, Antonio, Perez, Eduardo, Romero-Zaliz, Rocio, Quesada, Emilio Perez-Bosch, Mahadevaiah, Mamathamba Kalishettyhalli, Jimenez-Molinos, Francisco, Wenger, Christian, and Roldan, Juan Bautista
- Subjects
STOCHASTIC resonance, NEUROPLASTICITY, TITANIUM nitride, DEPENDENCY (Psychology)
- Abstract
We characterize TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing. We analyze different features that allow the devices to mimic biological synapses and present the models to reproduce analytically some of the data measured. In particular, we have measured the spike timing dependent plasticity behavior in our devices and later on we have modeled it. The spike timing dependent plasticity model was implemented as the learning rule of a spiking neural network that was trained to recognize the MNIST dataset. Variability is implemented and its influence on the network recognition accuracy is considered accounting for the number of neurons in the network and the number of training epochs. Finally, stochastic resonance is studied as another synaptic feature. It is shown that this effect is important and greatly depends on the noise statistical characteristics.
- Published
- 2023
- Full Text
- View/download PDF
9. TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing: from synaptic plasticity to stochastic resonance
- Author
-
David Maldonado, Antonio Cantudo, Eduardo Perez, Rocio Romero-Zaliz, Emilio Perez-Bosch Quesada, Mamathamba Kalishettyhalli Mahadevaiah, Francisco Jimenez-Molinos, Christian Wenger, and Juan Bautista Roldan
- Subjects
resistive switching devices, neuromorphic computing, synaptic behavior, spike timing dependent plasticity, stochastic resonance, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
We characterize TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing. We analyze different features that allow the devices to mimic biological synapses and present the models to reproduce analytically some of the data measured. In particular, we have measured the spike timing dependent plasticity behavior in our devices and later on we have modeled it. The spike timing dependent plasticity model was implemented as the learning rule of a spiking neural network that was trained to recognize the MNIST dataset. Variability is implemented and its influence on the network recognition accuracy is considered accounting for the number of neurons in the network and the number of training epochs. Finally, stochastic resonance is studied as another synaptic feature. It is shown that this effect is important and greatly depends on the noise statistical characteristics.
- Published
- 2023
- Full Text
- View/download PDF
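Entries 8 and 9 describe the same study, which measures an STDP characteristic in HfO2-based memristive devices and fits an analytical model to it. The fitted device model is not reproduced here; for reference, a commonly used analytical form for such characteristics is the double-exponential STDP window below, with purely illustrative amplitudes and time constants:

```python
import numpy as np

def stdp_window(delta_t, a_plus=1.0, a_minus=0.8, tau_plus=20.0, tau_minus=30.0):
    """Weight change as a function of spike-timing difference.

    delta_t = t_post - t_pre (ms). Positive delta_t (pre before post) gives
    potentiation; negative delta_t gives depression.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

print(stdp_window([-40, -10, 5, 25]))   # depression for negative lags, potentiation for positive
```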
10. Continual learning with hebbian plasticity in sparse and predictive coding networks: a survey and perspective
- Author
-
Ali Safa
- Subjects
spiking neural network, snn, spike timing dependent plasticity, STDP, Hebbian, continual learning, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Recently, the use of bio-inspired learning techniques such as Hebbian learning and its closely-related spike-timing-dependent plasticity (STDP) variant have drawn significant attention for the design of compute-efficient AI systems that can continuously learn on-line at the edge. A key differentiating factor regarding this emerging class of neuromorphic continual learning system lies in the fact that learning must be carried using a data stream received in its natural order, as opposed to conventional gradient-based offline training, where a static training dataset is assumed available a priori and randomly shuffled to make the training set independent and identically distributed (i.i.d). In contrast, the emerging class of neuromorphic CL systems covered in this survey must learn to integrate new information on the fly in a non-i.i.d manner, which makes these systems subject to catastrophic forgetting. In order to build the next generation of neuromorphic AI systems that can continuously learn at the edge, a growing number of research groups are studying the use of sparse and predictive Coding (PC)-based Hebbian neural network architectures and the related spiking neural networks (SNNs) equipped with STDP learning. However, since this research field is still emerging, there is a need for providing a holistic view of the different approaches proposed in the literature so far. To this end, this survey covers a number of recent works in the field of neuromorphic CL based on state-of-the-art sparse and PC technology; provides background theory to help interested researchers quickly learn the key concepts; and discusses important future research questions in light of the different works covered in this paper. It is hoped that this survey will contribute towards future research in the field of neuromorphic CL.
- Published
- 2024
- Full Text
- View/download PDF
11. Heterogeneous recurrent spiking neural network for spatio-temporal classification.
- Author
-
Chakraborty, Biswadeep and Mukhopadhyay, Saibal
- Subjects
ARTIFICIAL neural networks, RECURRENT neural networks, ARTIFICIAL intelligence
- Abstract
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to deep networks, the performance of unsupervised learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets (DVS128 Gesture). We observed an accuracy of 94.32% for the KTH dataset, 79.58% and 77.53% for the UCF11 and UCF101 datasets, respectively, and an accuracy of 96.54% on the event-based DVS Gesture dataset using the novel unsupervised HRSNN model. The key novelty of the HRSNN is that the recurrent layer in HRSNN consists of heterogeneous neurons with varying firing/relaxation dynamics, and they are trained via heterogeneous spike-time-dependent-plasticity (STDP) with varying learning dynamics for each synapse. We show that this novel combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that HRSNN can achieve similar performance to state-of-the-art backpropagation trained supervised SNN, but with less computation (fewer neurons and sparse connection) and less training data.
- Published
- 2023
- Full Text
- View/download PDF
12. Heterogeneous recurrent spiking neural network for spatio-temporal classification
- Author
-
Biswadeep Chakraborty and Saibal Mukhopadhyay
- Subjects
spiking neural network (SNN), action detection and recognition, spike timing dependent plasticity, heterogeneity, unsupervised learning, Bayesian Optimization (BO), Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to deep networks, the performance of unsupervised learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets (DVS128 Gesture). We observed an accuracy of 94.32% for the KTH dataset, 79.58% and 77.53% for the UCF11 and UCF101 datasets, respectively, and an accuracy of 96.54% on the event-based DVS Gesture dataset using the novel unsupervised HRSNN model. The key novelty of the HRSNN is that the recurrent layer in HRSNN consists of heterogeneous neurons with varying firing/relaxation dynamics, and they are trained via heterogeneous spike-time-dependent-plasticity (STDP) with varying learning dynamics for each synapse. We show that this novel combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that HRSNN can achieve similar performance to state-of-the-art backpropagation trained supervised SNN, but with less computation (fewer neurons and sparse connection) and less training data.
- Published
- 2023
- Full Text
- View/download PDF
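The HRSNN entries above (11 and 12) attribute their gains to heterogeneity: each recurrent neuron has its own firing/relaxation dynamics and each synapse its own STDP dynamics. A minimal sketch of that idea follows, with distributions and ranges chosen purely for illustration (the paper selects them via Bayesian Optimization):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5

# Homogeneous network: every neuron shares one membrane time constant.
tau_shared = np.full(n, 20.0)                                     # ms
# Heterogeneous network: each neuron draws its own time constant.
tau_hetero = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=n)  # ms

def response(tau, t):
    """Membrane response at time t (ms) to a unit input pulse delivered at t = 0."""
    return np.exp(-t / tau)

# A readout at a fixed delay sees identical traces in the homogeneous case but a
# spread of traces in the heterogeneous one, letting the layer span multiple timescales.
print(response(tau_shared, 30.0))
print(response(tau_hetero, 30.0))

# Heterogeneous plasticity: per-synapse STDP learning rates instead of one global rate.
eta = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=10)
print(eta)
```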
13. Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.
- Author
-
Das, Anup, Pradhapan, Paruthi, Groenendaal, Willemijn, Adiraju, Prathyusha, Rajan, Raj Thilak, Catthoor, Francky, Schaafsma, Siebren, Krichmar, Jeffrey L, Dutt, Nikil, and Van Hoof, Chris
- Subjects
Neurons, Humans, Electrocardiography, Probability, Action Potentials, Heart Rate, Neuronal Plasticity, Algorithms, Unsupervised Machine Learning, Wearable Electronic Devices, Electrocardiogram, Fuzzy c-Means clustering, Homeostatic plasticity, Liquid state machine, Spike timing dependent plasticity, Spiking neural networks, cs.NE, cs.LG, Artificial Intelligence & Image Processing
- Abstract
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike train and using this to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy, yet significantly low energy footprint, leading to an extended battery-life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices.
- Published
- 2018
14. Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria Electrònica, Universitat Politècnica de Catalunya. IS2- Sensors Intel·ligents i Sistemes Integrats, Vallejo Mancero, Bernardo Javier, Madrenas Boadas, Jordi, and Zapata, Mireya
- Abstract
Recent advancements in neuromorphic computing have led to the development of hardware architectures inspired by Spiking Neural Networks (SNNs) to emulate the efficiency and parallel processing capabilities of the human brain. This work focuses on testing the HEENS architecture, specifically designed for high parallel processing and biological realism in SNN emulation, implemented on a ZYNQ family FPGA. The study applies this architecture to the classification of digits using the well-known MNIST database. The image resolutions were adjusted to match HEENS' processing capacity. Results were compared with existing work, demonstrating HEENS' performance comparable to other solutions. This study highlights the importance of balancing accuracy and efficiency in the execution of applications. HEENS offers a flexible solution for SNN emulation, allowing for the implementation of programmable neural and synaptic models. It encourages the exploration of novel algorithms and network architectures, providing an alternative for real-time processing with efficient energy consumption. Postprint (published version).
- Published
- 2024
15. Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization.
- Author
-
Fu, Chentao, Xiang, Shuiying, Han, Yanan, Song, Ziwei, and Hao, Yue
- Subjects
MACHINE learning, SUPERVISED learning, SURFACE emitting lasers, MATHEMATICAL optimization, BREAST cancer, PROBLEM solving
- Abstract
We propose a generalized supervised learning algorithm for multilayer photonic spiking neural networks (SNNs) by combining the spike-timing dependent plasticity (STDP) rule and the gradient descent mechanism. A vertical-cavity surface-emitting laser with an embedded saturable absorber (VCSEL-SA) is employed as a photonic leaky-integrate-and-fire (LIF) neuron. The temporal coding strategy is employed to transform information into the precise firing time. With the modified supervised learning algorithm, the trained multilayer photonic SNN successfully solves the XOR problem and performs well on the Iris and Wisconsin breast cancer datasets. This indicates that a generalized supervised learning algorithm is realized for multilayer photonic SNN. In addition, network optimization is performed by considering different network sizes.
- Published
- 2022
- Full Text
- View/download PDF
16. Model reduction for stochastic CaMKII reaction kinetics in synapses by graph-constrained correlation dynamics.
- Author
-
Johnson, Todd, Bartol, Tom, Sejnowski, Terrence, and Mjolsness, Eric
- Subjects
Synapses, Calcium, Calmodulin, Probability, Kinetics, Algorithms, Models, Neurological, Models, Chemical, Calcium-Calmodulin-Dependent Protein Kinase Type 2, Molecular Dynamics Simulation, model reduction, stochastic reaction networks, rule-based modeling, graph-constrained correlation dynamics, Boltzmann learning, CaMKII, spike timing dependent plasticity, Models, Neurological, Chemical, Biophysics, Engineering, Physical Sciences, Biological Sciences
- Abstract
A stochastic reaction network model of Ca(2+) dynamics in synapses (Pepke et al PLoS Comput. Biol. 6 e1000675) is expressed and simulated using rule-based reaction modeling notation in dynamical grammars and in MCell. The model tracks the response of calmodulin and CaMKII to calcium influx in synapses. Data from numerically intensive simulations is used to train a reduced model that, out of sample, correctly predicts the evolution of interaction parameters characterizing the instantaneous probability distribution over molecular states in the much larger fine-scale models. The novel model reduction method, 'graph-constrained correlation dynamics', requires a graph of plausible state variables and interactions as input. It parametrically optimizes a set of constant coefficients appearing in differential equations governing the time-varying interaction parameters that determine all correlations between variables in the reduced model at any time slice.
- Published
- 2015
17. Spiking neural networks compensate for weight drift in organic neuromorphic device networks
- Author
-
Daniel Felder, John Linkhorst, and Matthias Wessling
- Subjects
neuromorphic computing, spiking neural network, spike timing dependent plasticity, organic electronics, algorithm-hardware co-design, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Organic neuromorphic devices can accelerate neural networks and integrate with biological systems. Devices based on the biocompatible and conductive polymer PEDOT:PSS are fast, require low amounts of energy and perform well in crossbar simulations. However, parasitic electrochemical reactions lead to self-discharge and the fading of the learned conductance states over time. This limits a neural network’s operating time and requires complex compensation mechanisms. Spiking neural networks (SNNs) take inspiration from biology to implement local and always-on learning. We show that these SNNs can function on organic neuromorphic hardware and compensate for self-discharge by continuously relearning and reinforcing forgotten states. In this work, we use a high-resolution charge transport model to describe the behavior of organic neuromorphic devices and create a computationally efficient surrogate model. By integrating the surrogate model into a Brian 2 simulation, we can describe the behavior of SNNs on organic neuromorphic hardware. A biologically plausible two-layer network for recognizing 28 × 28 pixel MNIST images is trained and observed during self-discharge. The network achieves, for its size, competitive recognition results of up to 82.5%. Building a network with forgetful devices yields superior accuracy during training with 84.5% compared to ideal devices. However, trained networks without active spike-timing-dependent plasticity quickly lose their predictive performance. We show that online learning can keep the performance at a steady level close to the initial accuracy, even for idle rates of up to 90%. This performance is maintained when the output neuron’s labels are not revalidated for up to 24 h. These findings reconfirm the potential of organic neuromorphic devices for brain-inspired computing. Their biocompatibility and the demonstrated adaptability to SNNs open the path towards close integration with multi-electrode arrays, drug-delivery devices, and other bio-interfacing systems as either fully organic or hybrid organic-inorganic systems.
- Published
- 2023
- Full Text
- View/download PDF
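The abstract above mentions plugging a surrogate device model into a Brian 2 simulation. For readers unfamiliar with that workflow, below is a minimal, self-contained Brian 2 script with an ordinary trace-based STDP synapse. It is essentially the standard formulation from the Brian 2 documentation, not the authors' organic-device surrogate; all sizes, rates, and constants are illustrative:

```python
from brian2 import PoissonGroup, NeuronGroup, Synapses, Hz, ms, run

# 100 Poisson inputs onto a single leaky integrator, with trace-based STDP.
taupre = taupost = 20*ms
wmax, Apre = 0.01, 0.01
Apost = -Apre * 1.05                      # slight LTD bias keeps weights bounded

inputs = PoissonGroup(100, rates=15*Hz)
neuron = NeuronGroup(1, 'dv/dt = -v / (10*ms) : 1',
                     threshold='v > 1', reset='v = 0', method='exact')
syn = Synapses(inputs, neuron,
               '''w : 1
                  dapre/dt  = -apre  / taupre  : 1 (event-driven)
                  dapost/dt = -apost / taupost : 1 (event-driven)''',
               on_pre='''v_post += w
                         apre += Apre
                         w = clip(w + apost, 0, wmax)''',
               on_post='''apost += Apost
                          w = clip(w + apre, 0, wmax)''')
syn.connect()
syn.w = 'rand() * wmax'                   # random initial weights
run(200*ms)
print(float(syn.w[:].mean()))             # mean weight after a short learning run
```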
18. Toward Learning in Neuromorphic Circuits Based on Quantum Phase Slip Junctions
- Author
-
Ran Cheng, Uday S. Goteti, Harrison Walker, Keith M. Krause, Luke Oeding, and Michael C. Hamilton
- Subjects
quantum phase slip junction, Josephson junction, neuromorphic computing, spike timing dependent plasticity, unsupervised learning, coupled synapse networks, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
We explore the use of superconducting quantum phase slip junctions (QPSJs), an electromagnetic dual to Josephson Junctions (JJs), in neuromorphic circuits. These small circuits could serve as the building blocks of neuromorphic circuits for machine learning applications because they exhibit desirable properties such as inherent ultra-low energy per operation, high speed, dense integration, negligible loss, and natural spiking responses. In addition, they have a relatively straight-forward micro/nano fabrication, which shows promise for implementation of an enormous number of lossless interconnections that are required to realize complex neuromorphic systems. We simulate QPSJ-only, as well as hybrid QPSJ + JJ circuits for application in neuromorphic circuits including artificial synapses and neurons, as well as fan-in and fan-out circuits. We also design and simulate learning circuits, where a simplified spike timing dependent plasticity rule is realized to provide potential learning mechanisms. We also take an alternative approach, which shows potential to overcome some of the expected challenges of QPSJ-based neuromorphic circuits, via QPSJ-based charge islands coupled together to generate non-linear charge dynamics that result in a large number of programmable weights or non-volatile memory states. Notably, we show that these weights are a function of the timing and frequency of the input spiking signals and can be programmed using a small number of DC voltage bias signals, therefore exhibiting spike-timing and rate dependent plasticity, which are mechanisms to realize learning in neuromorphic circuits.
- Published
- 2021
- Full Text
- View/download PDF
19. Dopaminergic Neuromodulation of Spike Timing Dependent Plasticity in Mature Adult Rodent and Human Cortical Neurons
- Author
-
Emma Louise Louth, Rasmus Langelund Jørgensen, Anders Rosendal Korshoej, Jens Christian Hedemann Sørensen, and Marco Capogna
- Subjects
dopamine, human cortical slices, layer 5 pyramidal neurons, spike timing dependent plasticity, synaptic inhibition, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Synapses in the cerebral cortex constantly change, and this dynamic property, regulated by the action of neuromodulators such as dopamine (DA), is essential for reward learning and memory. DA modulates spike-timing-dependent plasticity (STDP), a cellular model of learning and memory, in juvenile rodent cortical neurons. However, it is unknown whether this neuromodulation also occurs at excitatory synapses of cortical neurons in mature adult mice or in humans. Cortical layer V pyramidal neurons were recorded with whole cell patch clamp electrophysiology and an extracellular stimulating electrode was used to induce STDP. DA was either bath-applied or optogenetically released in slices from mice. Classical STDP induction protocols triggered non-Hebbian excitatory synaptic depression in the mouse or no plasticity at human cortical synapses. DA reverted long term synaptic depression to baseline in mouse via dopamine type 2 receptors or elicited long term synaptic potentiation in human cortical synapses. Furthermore, when DA was applied during an STDP protocol it depressed presynaptic inhibition in the mouse but not in the human cortex. Thus, DA modulates excitatory synaptic plasticity differently in human vs. mouse cortex. The data strengthens the importance of DA in gating cognition in humans, and may inform on therapeutic interventions to recover brain function from diseases.
- Published
- 2021
- Full Text
- View/download PDF
20. Dopaminergic Neuromodulation of Spike Timing Dependent Plasticity in Mature Adult Rodent and Human Cortical Neurons.
- Author
-
Louth, Emma Louise, Jørgensen, Rasmus Langelund, Korshoej, Anders Rosendal, Sørensen, Jens Christian Hedemann, and Capogna, Marco
- Subjects
PATCH-clamp techniques (Electrophysiology), DOPAMINERGIC neurons, PYRAMIDAL neurons, RODENTS, REWARD (Psychology), NEURONS
- Abstract
Synapses in the cerebral cortex constantly change, and this dynamic property, regulated by the action of neuromodulators such as dopamine (DA), is essential for reward learning and memory. DA modulates spike-timing-dependent plasticity (STDP), a cellular model of learning and memory, in juvenile rodent cortical neurons. However, it is unknown whether this neuromodulation also occurs at excitatory synapses of cortical neurons in mature adult mice or in humans. Cortical layer V pyramidal neurons were recorded with whole cell patch clamp electrophysiology and an extracellular stimulating electrode was used to induce STDP. DA was either bath-applied or optogenetically released in slices from mice. Classical STDP induction protocols triggered non-Hebbian excitatory synaptic depression in the mouse or no plasticity at human cortical synapses. DA reverted long term synaptic depression to baseline in mouse via dopamine type 2 receptors or elicited long term synaptic potentiation in human cortical synapses. Furthermore, when DA was applied during an STDP protocol it depressed presynaptic inhibition in the mouse but not in the human cortex. Thus, DA modulates excitatory synaptic plasticity differently in human vs. mouse cortex. The data strengthens the importance of DA in gating cognition in humans, and may inform on therapeutic interventions to recover brain function from diseases.
- Published
- 2021
- Full Text
- View/download PDF
21. Memory stability and synaptic plasticity
- Author
-
Billings, Guy, van Rossum, Mark., and Morris, Richard
- Subjects
612.8, synaptic plasticity, learning, memory, Spike timing dependent plasticity
- Abstract
Numerous experiments have demonstrated that the activity of neurons can alter the strength of excitatory synapses. This synaptic plasticity is bidirectional and synapses can be strengthened (potentiation) or weakened (depression). Synaptic plasticity offers a mechanism that links the ongoing activity of the brain with persistent physical changes to its structure. For this reason it is widely believed that synaptic plasticity mediates learning and memory. The hypothesis that synapses store memories by modifying their strengths raises an important issue. There should be a balance between the necessity that synapses change frequently, allowing new memories to be stored with high fidelity, and the necessity that synapses retain previously stored information. This is the plasticity stability dilemma. In this thesis the plasticity stability dilemma is studied in the context of the two dominant paradigms of activity dependent synaptic plasticity: Spike timing dependent plasticity (STDP) and long term potentiation and depression (LTP/D). Models of biological synapses are analysed and processes that might ameliorate the plasticity stability dilemma are identified. Two popular existing models of STDP are compared. Through this comparison it is demonstrated that the synaptic weight dynamics of STDP has a large impact upon the retention time of correlation between the weights of a single neuron and a memory. In networks it is shown that lateral inhibition stabilises the synaptic weights and receptive fields. To analyse LTP a novel model of LTP/D is proposed. The model centres on the distinction between early LTP/D, when synaptic modifications are persistent on a short timescale, and late LTP/D when synaptic modifications are persistent on a long timescale. In the context of the hippocampus it is proposed that early LTP/D allows the rapid and continuous storage of short lasting memory traces over a long lasting trace established with late LTP/D. It is shown that this might confer a longer memory retention time than in a system with only one phase of LTP/D. Experimental predictions about the dynamics of amnesia based upon this model are proposed. Synaptic tagging is a phenomenon whereby early LTP can be converted into late LTP, by subsequent induction of late LTP in a separate but nearby input. Synaptic tagging is incorporated into the LTP/D framework. Using this model it is demonstrated that synaptic tagging could lead to the conversion of a short lasting memory trace into a longer lasting trace. It is proposed that this allows the rescue of memory traces that were initially destined for complete decay. When combined with early and late LTP/D, synaptic tagging might allow the management of hippocampal memory traces, such that not all memories must be stored on the longest, most stable late phase timescale. This lessens the plasticity stability dilemma in the hippocampus, where it has been hypothesised that memory traces must be frequently and vividly formed, but that not all traces demand eventual consolidation at the systems level.
- Published
- 2009
22. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain
- Author
-
Laxmi R. Iyer, Yansong Chua, and Haizhou Li
- Subjects
spiking neural network, spike timing dependent plasticity, N-MNIST dataset, neuromorphic benchmark, spike time coding, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess if neuromorphic datasets recorded from static images are able to evaluate the ability of SNNs to use spike timings in their calculations. We have analyzed N-MNIST, N-Caltech101 and DvsGesture along these lines, but focus our study on N-MNIST. First we evaluate if additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieves 99.23 and 78.01% accuracy. These are comparable to the state of the art—showing that an algorithm that purely works on spatial data can classify these datasets. Second we compare N-MNIST and DvsGesture on two STDP algorithms, RD-STDP, that can classify only spatial data, and STDP-tempotron that classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST. N-MNIST does not, therefore, highlight the ability of SNNs to classify temporal data. The conclusions of this paper open the question—what dataset can evaluate SNN ability to classify temporal data?
- Published
- 2021
- Full Text
- View/download PDF
23. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain.
- Author
-
Iyer, Laxmi R., Chua, Yansong, and Li, Haizhou
- Subjects
ARTIFICIAL neural networks, TIME management
- Abstract
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess if neuromorphic datasets recorded from static images are able to evaluate the ability of SNNs to use spike timings in their calculations. We have analyzed N-MNIST, N-Caltech101 and DvsGesture along these lines, but focus our study on N-MNIST. First we evaluate if additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieves 99.23 and 78.01% accuracy. These are comparable to the state of the art—showing that an algorithm that purely works on spatial data can classify these datasets. Second we compare N-MNIST and DvsGesture on two STDP algorithms, RD-STDP, that can classify only spatial data, and STDP-tempotron that classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST. N-MNIST does not, therefore, highlight the ability of SNNs to classify temporal data. The conclusions of this paper open the question—what dataset can evaluate SNN ability to classify temporal data?
- Published
- 2021
- Full Text
- View/download PDF
24. Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization
- Author
-
Chentao Fu, Shuiying Xiang, Yanan Han, Ziwei Song, and Yue Hao
- Subjects
photonic spiking neural network, multilayer spiking neural network, supervised learning, vertical-cavity surface-emitting lasers, spike timing dependent plasticity, Applied optics. Photonics, TA1501-1820
- Abstract
We propose a generalized supervised learning algorithm for multilayer photonic spiking neural networks (SNNs) by combining the spike-timing dependent plasticity (STDP) rule and the gradient descent mechanism. A vertical-cavity surface-emitting laser with an embedded saturable absorber (VCSEL-SA) is employed as a photonic leaky-integrate-and-fire (LIF) neuron. The temporal coding strategy is employed to transform information into the precise firing time. With the modified supervised learning algorithm, the trained multilayer photonic SNN successfully solves the XOR problem and performs well on the Iris and Wisconsin breast cancer datasets. This indicates that a generalized supervised learning algorithm is realized for multilayer photonic SNN. In addition, network optimization is performed by considering different network sizes.
- Published
- 2022
- Full Text
- View/download PDF
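The photonic SNN entries (15 and 24) rely on temporal coding to "transform information into the precise firing time." A minimal sketch of one common choice, time-to-first-spike (latency) encoding, is shown below with an arbitrary 20 ms coding window; the paper's actual encoding scheme may differ:

```python
import numpy as np

def latency_encode(x, t_max=20.0, eps=1e-6):
    """Time-to-first-spike encoding: larger input values fire earlier.

    x is clipped to [0, 1]; the spike time is t_max * (1 - x), so the strongest
    feature fires at roughly t = 0 and a zero-valued feature fires at t = t_max.
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x) + eps

print(latency_encode([0.0, 0.25, 0.9, 1.0]))   # spike latencies in ms
```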
25. Creation through Polychronization
- Author
-
John Matthias
- Subjects
collaboration, composition, polychronization, spike timing dependent plasticity, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571, Philosophy (General), B1-5802
- Abstract
I have recently suggested that some of the processes involved in the collaborative composition of new music could be analogous to several ideas introduced by Izhikevich in his theory of cortical spiking neurons and simple memory, a process which he calls Polychronization. In the Izhikevich model, the evocation of simple memories is achieved by the sequential re-firing of the same Polychronous group of neurons which was initially created in the cerebral cortex by the sensual stimulus. Each firing event within the group is contingent upon the previous firing event and, in particular, contingent upon the timing of the firings, due to a phenomenon known as “Spike Timing Dependent Plasticity.” I argue in this article that the collaborative creation of new music involves contingencies which form a Polychronous group across space and time which helps to create a temporary shared memorial space between the collaborators.
- Published
- 2017
- Full Text
- View/download PDF
26. Effect of spike-timing dependent plasticity rule choice on memory capacity and form in spiking neural networks
- Author
-
Tatsuno, Masami, Arthur, Derek, and University of Lethbridge. Faculty of Arts and Science
- Abstract
The strengthening of synapses between coactivating neurons is believed to be an important underlying mechanism for learning and memory. Hebbian learning of this type has been observed in the brain, with the degree of synaptic strength change dependent on the relative timing of pre-spike arrival and post-spike emission called spike timing dependent plasticity (STDP). Another important feature of learning and memory is the existence of neural spike-timing patterns. Early work by Izhikevich (2006) argued that STDP spontaneously produces structures known as polychronous groups, defined by the network connectivity, that can produce such patterns. However, studies involving STDP face two important issues: how the STDP rule distributes synaptic weights and not knowing what STDP rule is used in the brain. This highlights the importance of understanding the fundamental properties of different STDP rules to determine their effect on the outcome of computational studies. This study focuses on the comparison of two STDP rules, one used by Izhikevich (2006), add-STDP, that produces a bimodal weight distribution, and log-STDP which produces a lognormal weight distribution. The comparison made is between the number of polychronous groups produced and the number of spike-timing patterns, or cell ensembles, found with another detection method that is applicable to experimental data. The number of polychronous groups found with add-STDP was significantly larger as were their sizes and durations. In contrast, the number of cell ensembles found in log-STDP was considerably larger, however, sizes and lifetimes were comparable. Lastly, the activity of cell ensembles in the log-STDP simulations has a non-trivial relationship with the dynamics of synaptic weights in the network, whereas no relationship was found for add-STDP.
- Published
- 2023
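The thesis above compares an additive rule (add-STDP, which yields a bimodal weight distribution) with log-STDP (which yields a lognormal one). The toy simulation below is neither of those rules; it only illustrates the underlying intuition that a weight-independent update pushes weights to the bounds while a weight-dependent update stabilises an interior distribution. The feedback term standing in for network activity is invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, a = 2000, 5000, 0.005
w_add = rng.uniform(0.0, 1.0, n)
w_dep = rng.uniform(0.0, 1.0, n)

for _ in range(steps):
    # Crude stand-in for activity feedback: stronger synapses are more likely
    # to take part in a causal (potentiating) pairing.
    pot_add = rng.random(n) < 0.3 + 0.4 * w_add
    pot_dep = rng.random(n) < 0.3 + 0.4 * w_dep
    # Weight-independent (additive-style) rule: fixed-size steps; the feedback
    # makes strong synapses stronger and weak ones weaker, splitting weights
    # toward the two bounds.
    w_add = np.clip(w_add + np.where(pot_add, a, -a), 0.0, 1.0)
    # Weight-dependent rule: LTP shrinks and LTD grows with w, which stabilises
    # an interior fixed point and yields a unimodal distribution.
    w_dep = np.clip(w_dep + np.where(pot_dep, a * (1 - w_dep), -a * w_dep), 0.0, 1.0)

print(np.histogram(w_add, bins=5, range=(0, 1))[0])   # mass piles up near 0 and 1
print(np.histogram(w_dep, bins=5, range=(0, 1))[0])   # mass concentrates mid-range
```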
27. Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks.
- Author
-
Allred, Jason M. and Roy, Kaushik
- Subjects
CONTINUING education, MEMORY loss, DATA distribution, DOPAMINE receptors
- Abstract
Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data. For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a difficult challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule to target local representations in a Spiking Neural Network (SNN) to adapt to novel information while protecting essential information in the remainder of the SNN from catastrophic forgetting. In our Controlled Forgetting Networks (CFNs), novel information triggers stimulated firing and heterogeneously modulated plasticity, inspired by biological dopamine signals, to cause rapid and isolated adaptation in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces the degradation of accuracy for older tasks while learning new tasks. Our experimental results on the MNIST dataset validate the capability of CFNs to learn successfully over time from an unknown, changing environment, achieving 95.24% accuracy, which we believe is the best unsupervised accuracy ever achieved by a fixed-size, single-layer SNN on a completely disjoint MNIST dataset.
- Published
- 2020
- Full Text
- View/download PDF
28. Designing Behaviour in Bio-inspired Robots Using Associative Topologies of Spiking-Neural-Networks
- Author
-
Cristian Jimenez-Romero, David Sousa-Rodrigues, and Jeffrey Johnson
- Subjects
spiking neurons, spike timing dependent plasticity, associative learning, robotics, agents simulation, artificial life, Technology
- Abstract
This study explores the design and control of the behaviour of agents and robots using simple circuits of spiking neurons and Spike Timing Dependent Plasticity (STDP) as a mechanism of associative and unsupervised learning. Based on a "reward and punishment" classical conditioning, it is demonstrated that these robots learnt to identify and avoid obstacles as well as to identify and look for rewarding stimuli. Using the simulation and programming environment NetLogo, a software engine for the Integrate and Fire model was developed, which allowed us to monitor in discrete time steps the dynamics of each single neuron, synapse and spike in the proposed neural networks. These spiking neural networks (SNN) served as simple brains for the experimental robots. The Lego Mindstorms robot kit was used for the embodiment of the simulated agents. In this paper the topological building blocks are presented as well as the neural parameters required to reproduce the experiments. This paper summarizes the resulting behaviour as well as the observed dynamics of the neural circuits. The Internet-link to the NetLogo code is included in the annex.
- Published
- 2016
- Full Text
- View/download PDF
29. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights
- Author
-
Amirreza Yousefzadeh, Evangelos Stromatias, Miguel Soto, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco
- Subjects
spiking neural networks, spike timing dependent plasticity, stochastic learning, feature extraction, neuromorphic systems, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware.
- Published
- 2018
- Full Text
- View/download PDF
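Entries 29 and 31 describe the same study, which constrains synapses to 1-bit weights updated stochastically. The sketch below is only a toy illustration of that general idea (flip the bit with a small probability whose direction follows the spike ordering), with invented probabilities rather than the hardware rule from the paper:

```python
import numpy as np

def stochastic_binary_stdp(w, causal, p_pot=0.05, p_dep=0.05, rng=None):
    """One stochastic STDP event applied to an array of 1-bit weights.

    w      : 0/1 weight array.
    causal : boolean array, True where the pre spike preceded the post spike.
    Instead of accumulating small analog changes, each pairing flips the bit
    with a small probability, so the expected weight follows an STDP-like rule
    while storage stays at 1 bit per synapse.
    """
    rng = rng or np.random.default_rng()
    flip = rng.random(w.shape) < np.where(causal, p_pot, p_dep)
    return np.where(flip, np.where(causal, 1, 0), w)

rng = np.random.default_rng(7)
w = rng.integers(0, 2, size=1000)
causal = rng.random(1000) < 0.8          # most pairings in this toy run are causal
for _ in range(200):
    w = stochastic_binary_stdp(w, causal, rng=rng)
print(w.mean())                           # roughly the fraction of synapses with causal pairings
```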
30. Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning
- Author
-
Chankyu Lee, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy
- Subjects
spiking neural network, convolutional neural network, spike-based learning rule, spike timing dependent plasticity, gradient descent backpropagation, leaky integrate and fire neuron, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. The recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN is comprised of alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that the STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5 ×) training time and better generalization compared with purely gradient-based training without pre-training.
- Published
- 2018
- Full Text
- View/download PDF
31. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights.
- Author
-
Yousefzadeh, Amirreza, Stromatias, Evangelos, Soto, Miguel, Serrano-Gotarredona, Teresa, and Linares-Barranco, Bernabé
- Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware.
- Published
- 2018
- Full Text
- View/download PDF
32. Creation through Polychronization.
- Author
-
Matthias, John
- Subjects
MUSIC psychology, CREATION, NEURONS
- Abstract
I have recently suggested that some of the processes involved in the collaborative composition of new music could be analogous to several ideas introduced by Izhikevich in his theory of cortical spiking neurons and simple memory, a process which he calls Polychronization. In the Izhikevich model, the evocation of simple memories is achieved by the sequential re-firing of the same Polychronous group of neurons which was initially created in the cerebral cortex by the sensual stimulus. Each firing event within the group is contingent upon the previous firing event and, in particular, contingent upon the timing of the firings, due to a phenomenon known as "Spike Timing Dependent Plasticity." I argue in this article that the collaborative creation of new music involves contingencies which form a Polychronous group across space and time which helps to create a temporary shared memorial space between the collaborators.
- Published
- 2017
- Full Text
- View/download PDF
33. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses
- Author
-
Stefano Ambrogio, Nicola Ciocchini, Mario Laudato, Valerio Milo, Agostino Pirovano, Paolo Fantini, and Daniele Ielmini
- Subjects
Neural Network ,cognitive computing ,Memristor ,pattern recognition ,Spike Timing Dependent Plasticity ,phase change memory ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on set transition, namely crystallization, in the PCM, while depression is achieved via reset or amorphization of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and number of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulations of fully connected neuromorphic networks with 2 or 3 layers with high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors.
- Published
- 2016
- Full Text
- View/download PDF
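The asymmetry described in the abstract above, gradual potentiation through partial crystallization versus abrupt depression through reset, can be caricatured in a few lines; the conductance model and step sizes below are illustrative assumptions, not the authors' device equations.

    import numpy as np

    def pcm_stdp_update(g, dt, g_min=0.0, g_max=1.0, dg_set=0.05, tau=20.0):
        """Illustrative PCM-style STDP (not the paper's exact model).

        g  : normalized device conductance in [g_min, g_max]
        dt : t_post - t_pre in ms
        Potentiation (dt > 0) is gradual, mimicking partial crystallization;
        depression (dt <= 0) is abrupt, mimicking reset to the amorphous state.
        """
        window = np.exp(-abs(dt) / tau)
        if dt > 0:
            g = min(g_max, g + dg_set * window)   # small, cumulative set steps
        else:
            g = g_min                              # single reset pulse
        return g

    g = 0.5
    for dt in (4.0, 6.0, 2.0, -3.0):
        g = pcm_stdp_update(g, dt)
        print(f"dt={dt:+.0f} ms -> g={g:.3f}")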
34. Digital implementation of a virtual insect trained by spike-timing dependent plasticity.
- Author
-
Mazumder, P., Hu, D., Ebong, I., Zhang, X., Xu, Z., and Ferrari, S.
- Subjects
COMPLEMENTARY metal oxide semiconductors, ARTIFICIAL neural networks, ALGORITHMS, VIRTUAL reality, INTEGRATED circuits - Abstract
Neural network approaches to processing have been shown to be successful and efficient in numerous real-world applications. The most successful of these approaches are implemented in software, but in order to achieve real-time processing similar to that of biological neural networks, hardware implementations of these networks need to be continually improved. This work presents a spiking neural network (SNN) implemented in digital CMOS. The SNN is constructed based on an indirect training algorithm that utilizes spike-timing dependent plasticity (STDP). The SNN is validated by using its outputs to control the motion of a virtual insect, and the indirect training algorithm is used to train the SNN to navigate through a terrain with obstacles. The indirect approach is more appropriate for synaptic training in nanoscale CMOS implementations, since it is increasingly difficult to perfectly control device matching in CMOS circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
35. A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning for neuromorphic systems
- Author
-
Zhongqiang Wang, Stefano Ambrogio, Simone Balatti, and Daniele Ielmini
- Subjects
Neural Network ,cognitive computing ,Memristor ,pattern recognition ,Spike Timing Dependent Plasticity ,neuromorphic circuits ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Resistive (or memristive) switching devices based on metal oxides find applications in memory, logic and neuromorphic computing systems. Their small area, low power operation, and high functionality meet the challenges of brain-inspired computing aiming at achieving a huge density of active connections (synapses) with low operation power. This work presents a new artificial synapse scheme, consisting of a memristive switch connected to 2 transistors responsible for gating the communication and learning operations. Spike timing dependent plasticity (STDP) is achieved through appropriate shaping of the pre-synaptic and the post synaptic spikes. Experiments with integrated artificial synapses demonstrate STDP with stochastic behavior due to (i) the natural variability of set/reset processes in the nanoscale switch, and (ii) the different response of the switch to a given stimulus depending on the initial state. Experimental results are confirmed by model-based simulations of the memristive switching. Finally, system-level simulations of a 2-layer neural network and a simplified STDP model show random learning and recognition of patterns.
- Published
- 2015
- Full Text
- View/download PDF
36. Unsupervised Learning by Spike Timing Dependent Plasticity in Phase Change Memory (PCM) Synapses.
- Author
-
Ambrogio, Stefano, Ciocchini, Nicola, Laudato, Mario, Milo, Valerio, Pirovano, Agostino, Fantini, Paolo, and Ielmini, Daniele
- Subjects
PHASE change memory ,SYNAPSES ,NEUROPLASTICITY - Abstract
We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on set transition, namely crystallization, in the PCM, while depression is achieved via reset or amorphization of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and number of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulations of fully connected neuromorphic networks with 2 or 3 layers with high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
37. Rapid, parallel path planning by propagating wavefronts of spiking neural activity
- Author
-
Filip Jan Ponulak and John J Hopfield
- Subjects
Hippocampus ,navigation ,spiking neurons ,parallel processing ,Spike Timing Dependent Plasticity ,Wave propagation ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Efficient path planning and navigation are critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can rapidly be solved in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets or learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time, it is useful for neuromorphic implementations, where the parallelism of information processing proposed here can fully be harnessed in hardware.
- Published
- 2013
- Full Text
- View/download PDF
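Stripped of the spiking machinery, the planning scheme in the entry above amounts to letting a single wavefront spread from the target while each location records the direction the wave arrived from, and then having the agent follow that stored field. The grid-world sketch below is an abstraction of that idea (plain breadth-first propagation standing in for the spiking wave and synaptic changes), not the authors' network model.

    from collections import deque

    def plan_path(grid, start, target):
        """Wavefront planner: propagate a single 'wave' from the target,
        store at each cell the direction the wave arrived from (a stand-in
        for the local synaptic change), then follow that field from start."""
        rows, cols = len(grid), len(grid[0])
        field = {target: None}               # cell -> step toward the target
        frontier = deque([target])
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        while frontier:                       # single passage of the wavefront
            r, c = frontier.popleft()
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in field:
                    field[(nr, nc)] = (-dr, -dc)   # points back toward the target
                    frontier.append((nr, nc))
        if start not in field:
            return None                       # target unreachable
        path, cell = [start], start
        while cell != target:                 # read out the stored vector field
            dr, dc = field[cell]
            cell = (cell[0] + dr, cell[1] + dc)
            path.append(cell)
        return path

    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 1, 0],
            [1, 0, 0, 0]]
    print(plan_path(grid, start=(0, 0), target=(3, 3)))

Because every cell is touched exactly once by the wavefront, the "learning" happens in a single pass, which is the property the paper exploits for rapid, parallel planning.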
38. A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning in neuromorphic systems.
- Author
-
Zhongqiang Wang, Ambrogio, Stefano, Balatti, Simone, and Ielmini, Daniele
- Subjects
ISOMETRIC exercise ,BRAIN imaging ,PHYSIOLOGICAL aspects of cognition ,SIGNALING (Psychology) ,COGNITIVE structures - Abstract
Resistive (or memristive) switching devices based on metal oxides find applications in memory, logic and neuromorphic computing systems. Their small area, low power operation, and high functionality meet the challenges of brain-inspired computing aiming at achieving a huge density of active connections (synapses) with low operation power. This work presents a new artificial synapse scheme, consisting of a memristive switch connected to 2 transistors responsible for gating the communication and learning operations. Spike timing dependent plasticity (STDP) is achieved through appropriate shaping of the pre-synaptic and the post synaptic spikes. Experiments with integrated artificial synapses demonstrate STDP with stochastic behavior due to (i) the natural variability of set/reset processes in the nanoscale switch, and (ii) the different response of the switch to a given stimulus depending on the initial state. Experimental results are confirmed by model-based simulations of the memristive switching. Finally, system-level simulations of a 2-layer neural network and a simplified STDP model show random learning and recognition of patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. A model of hippocampal spiking responses to items during learning of a context-dependent task.
- Author
-
Raudies, Florian and Hasselmo, Michael E.
- Subjects
HIPPOCAMPUS (Brain) ,CEREBRAL cortex ,BIOLOGICAL neural networks ,NEURAL circuitry ,NEUROPLASTICITY - Abstract
Single unit recordings in the rat hippocampus have demonstrated shifts in the specificity of spiking activity during learning of a contextual item-reward association task. In this task, rats received reward for responding to different items depending upon the context an item appeared in, but not depending upon the location an item appeared at. Initially, neurons in the rat hippocampus primarily show firing based on place, but as the rat learns the task this firing becomes more selective for items. We simulated this effect using a simple circuit model with discrete inputs driving spiking activity representing place and item, followed sequentially by a discrete representation of the motor actions involving a response to an item (digging for food) or movement to a different item (movement to a different pot for food). We implemented spiking replay in the network representing neural activity observed during sharp-wave ripple events, and modified synaptic connections based on a simple representation of spike-timing dependent synaptic plasticity. This simple network was able to consistently learn the context-dependent responses and transitioned from dominant coding of place to a gradual increase in specificity to items, consistent with analysis of the experimental data. In addition, the model showed an increase in specificity toward context. The increase of selectivity in the model is accompanied by an increase in the binariness of the synaptic weights of cells that are part of the functional network. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
40. Rapid, parallel path planning by propagating wavefronts of spiking neural activity.
- Author
-
Ponulak, Filip and Hopfield, John J.
- Subjects
BIOLOGICAL neural networks ,WAVEFRONT sensors ,ROBOTICS ,LOGISTICS ,TRANSPORTATION - Abstract
Efficient path planning and navigation are critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can rapidly be solved in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets or learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time, it is useful for neuromorphic implementations, where the parallelism of information processing proposed here can fully be harnessed in hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
41. SWAT: A Spiking Neural Network Training Algorithm for Classification Problems.
- Author
-
Wade, John J., McDaid, Liam J., Santos, Jose A., and Sayers, Heather M.
- Abstract
This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike timing dependent plasticity (STDP). The STDP/BCM rule yields a unimodal weight distribution where the height of the plasticity window associated with STDP is modulated, causing stability after a period of training. The SNN uses a single training neuron in the training phase where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses which create a frequency routing capability allowing the information presented to the network to be routed to different hidden layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. Results presented show that the proposed training algorithm exhibits a convergence accuracy of 95.5% and 96.2% for the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% for the testing sets. Noise experiments show that SWAT has good generalization capability. SWAT is also benchmarked using an isolated digit automatic speech recognition (ASR) system where a subset of the TI46 speech corpus is used. Results show that with SWAT as the classifier, the ASR system provides an accuracy of 98.875% for training and 95.25% for testing. [ABSTRACT FROM PUBLISHER]
- Published
- 2010
- Full Text
- View/download PDF
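The STDP/BCM combination described above can be sketched as an STDP window whose height is scaled by a BCM-style term with a sliding threshold; the function below is a minimal illustration under assumed parameter values, not the SWAT algorithm itself.

    import numpy as np

    def bcm_modulated_stdp(w, dt, post_rate, theta,
                           a_plus=0.01, a_minus=0.012, tau=20.0,
                           tau_theta=1000.0, dt_step=1.0):
        """Illustrative STDP whose window height is modulated BCM-style.

        w         : synaptic weight
        dt        : t_post - t_pre (ms)
        post_rate : recent postsynaptic firing-rate estimate
        theta     : sliding modification threshold (BCM)
        Returns the updated (w, theta).
        """
        # BCM term: positive above threshold, negative below it;
        # it scales the height of the STDP window.
        phi = post_rate * (post_rate - theta)
        if dt > 0:
            w += a_plus * phi * np.exp(-dt / tau)
        else:
            w -= a_minus * phi * np.exp(dt / tau)
        # The threshold slides with the squared rate, which is what
        # eventually stabilizes the weights after a period of training.
        theta += dt_step / tau_theta * (post_rate ** 2 - theta)
        return max(w, 0.0), theta

    w, theta = 0.5, 1.0
    for _ in range(100):
        w, theta = bcm_modulated_stdp(w, dt=5.0, post_rate=3.0, theta=theta)
    print(f"w={w:.3f}, theta={theta:.3f}")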
42. AN ADAPTIVE VISUAL NEURONAL MODEL IMPLEMENTING COMPETITIVE, TEMPORALLY ASYMMETRIC HEBBIAN LEARNING.
- Author
-
YANG, ZHIJUN, CAMERON, KATHERINE L., MURRAY, ALAN F., and BOONSOBHAK, VASIN
- Subjects
NEURAL circuitry, VISUAL fields, DISTRIBUTION (Probability theory), NEURONS, GAUSSIAN distribution, ALGORITHMS - Abstract
A novel depth-from-motion vision model based on leaky integrate-and-fire (I&F) neurons incorporates the implications of recent neurophysiological findings into an algorithm for object discovery and depth analysis. Pulse-coupled I&F neurons capture the edges in an optical flow field and the associated time of travel of those edges is encoded as the neuron parameters, mainly the time constant of the membrane potential and synaptic weight. Correlations between spikes and their timing thus code depth in the visual field. Neurons have multiple output synapses connecting to neighbouring neurons with an initial Gaussian weight distribution. A temporally asymmetric learning rule is used to adapt the synaptic weights online, during which competitive behaviour emerges between the different input synapses of a neuron. It is shown that the competition mechanism can further improve the model performance. After training, the weights of synapses sourced from a neuron do not display a Gaussian distribution, having adapted to encode features of the scenes to which they have been exposed. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
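The temporally asymmetric Hebbian rule referred to in this entry is, in its standard form, the exponential STDP window: pre-before-post pairings potentiate and post-before-pre pairings depress, both effects decaying with the spike-time difference. A minimal sketch follows (parameter values illustrative).

    import numpy as np

    def stdp_dw(dt, a_plus=0.005, a_minus=0.0055, tau_plus=20.0, tau_minus=20.0):
        """Standard asymmetric STDP window.

        dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates,
        post-before-pre (dt < 0) depresses, with exponential decay in |dt|.
        """
        if dt > 0:
            return a_plus * np.exp(-dt / tau_plus)
        return -a_minus * np.exp(dt / tau_minus)

    for dt in (2.0, 10.0, 40.0, -2.0, -10.0):
        print(f"dt={dt:+5.1f} ms  dw={stdp_dw(dt):+.5f}")

With the depression amplitude set slightly larger than the potentiation amplitude, repeated application of such a rule produces the competition between a neuron's input synapses that the abstract mentions.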
43. STDP Plasticity in TRN Within Hierarchical Spike Timing Model of Visual Information Processing
- Author
-
Koprinkova-Hristova, Petia, Bocheva, Nadejda, Nedelcheva, Simona, Stefanova, Miroslava, Genova, Bilyana, Kraleva, Radoslava, and Kralev, Velin
- Subjects
Spike timing neural model ,Spike timing dependent plasticity ,Saccade generation ,Visual system ,Decision making - Published
- 2020
44. Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks
- Author
-
Kaushik Roy and Jason M. Allred
- Subjects
controlled forgetting, lifelong learning, continual learning, catastrophic forgetting, dopaminergic learning, stability-plasticity dilemma, Spiking Neural Networks, Spike Timing Dependent Plasticity, Learning rule, Stochastic gradient descent, MNIST database - Abstract
Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data. For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a difficult challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule to target local representations in a Spiking Neural Network (SNN) to adapt to novel information while protecting essential information in the remainder of the SNN from catastrophic forgetting. In our Controlled Forgetting Networks (CFNs), novel information triggers stimulated firing and heterogeneously modulated plasticity, inspired by biological dopamine signals, to cause rapid and isolated adaptation in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces the degradation of accuracy for older tasks while learning new tasks. Our experimental results on the MNIST dataset validate the capability of CFNs to learn successfully over time from an unknown, changing environment, achieving 95.24% accuracy, which we believe is the best unsupervised accuracy ever achieved by a fixed-size, single-layer SNN on a completely disjoint MNIST dataset.
- Published
- 2020
- Full Text
- View/download PDF
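A toy sketch of the targeting idea in the entry above is given below: an input that no neuron matches well is treated as an outlier, and only the neuron recruited for it receives a large, dopamine-like learning-rate boost, so adaptation stays local. The class, thresholds, and update rule are hypothetical simplifications, not the CFN model itself.

    import numpy as np

    rng = np.random.default_rng(1)

    class ControlledForgettingLayer:
        """Toy single-layer sketch of outlier-gated, locally modulated plasticity.

        Weights are updated with a simple Hebbian-like step, but only the
        neuron selected for a novel (outlier) input receives a large
        'dopamine' learning-rate boost; well-matched inputs cause only small
        updates, which protects previously learned representations.
        """
        def __init__(self, n_in, n_out, novelty_threshold=0.2):
            self.w = rng.random((n_out, n_in)) * 0.1
            self.novelty_threshold = novelty_threshold

        def step(self, x, lr_base=0.001, lr_dopamine=0.05):
            match = self.w @ x                      # how well each neuron matches x
            winner = int(np.argmax(match))
            is_outlier = match[winner] < self.novelty_threshold
            lr = lr_dopamine if is_outlier else lr_base
            # only the winner adapts; the modulated rate localizes forgetting
            self.w[winner] += lr * (x - self.w[winner])
            return winner, is_outlier

    layer = ControlledForgettingLayer(n_in=16, n_out=4)
    x = rng.random(16)
    print(layer.step(x))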
45. Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning
- Author
-
Gopalakrishnan Srinivasan, Kaushik Roy, Chankyu Lee, and Priyadarshini Panda
- Subjects
spiking neural network, convolutional neural network, spike timing dependent plasticity, leaky integrate and fire neuron, gradient descent backpropagation, spike-based learning rule, Unsupervised learning, Neuromorphic engineering, Robustness (computer science) - Abstract
Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. The recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN is comprised of alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that the STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5 ×) training time and better generalization compared with purely gradient-based training without pre-training.
- Published
- 2018
- Full Text
- View/download PDF
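The two-phase procedure described above can be outlined as a control-flow skeleton: greedy, layer-wise unsupervised pre-training of the lower layers, followed by supervised fine-tuning of the whole stack. In the sketch below the update functions are deliberately crude stand-ins (a winner-take-all nudge for STDP, a delta rule for spike-based backpropagation) chosen only to make the skeleton runnable; none of it reproduces the paper's actual learning rules or network sizes.

    import numpy as np

    rng = np.random.default_rng(0)

    def stdp_pretrain_layer(weights, inputs, lr=0.01):
        """Placeholder layer-wise unsupervised phase: nudge the best-matching
        unit toward each input (a crude stand-in for STDP feature learning)."""
        for x in inputs:
            k = int(np.argmax(weights @ x))
            weights[k] += lr * (x - weights[k])
        return weights

    def supervised_finetune(layers, data, labels, lr=0.001):
        """Placeholder supervised phase: a delta-rule update on the output
        layer, standing in for spike-based gradient descent backpropagation."""
        W_out = layers[-1]
        for x, y in zip(data, labels):
            h = x
            for W in layers[:-1]:
                h = np.maximum(W @ h, 0.0)        # stand-in for spiking conv/pool layers
            out = W_out @ h
            target = np.eye(W_out.shape[0])[y]
            W_out += lr * np.outer(target - out, h)
        return layers

    # Phase 1: greedy, layer-wise unsupervised pre-training.
    layers = [rng.random((8, 16)) * 0.1, rng.random((4, 8)) * 0.1, rng.random((2, 4)) * 0.1]
    data = [rng.random(16) for _ in range(32)]
    inputs = data
    for i, W in enumerate(layers[:-1]):
        layers[i] = stdp_pretrain_layer(W, inputs)
        inputs = [np.maximum(layers[i] @ x, 0.0) for x in inputs]

    # Phase 2: supervised fine-tuning of the full stack.
    labels = [rng.integers(0, 2) for _ in data]
    layers = supervised_finetune(layers, data, labels)
    print("fine-tuned output layer shape:", layers[-1].shape)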
46. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights
- Author
-
Yousefzadeh, Amirreza, Stromatias, Evangelos, Soto, Miguel, Serrano Gotarredona, María Teresa, and Linares Barranco, Bernabé
- Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware
- Published
- 2018
47. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights
- Author
-
Yousefzadeh, Amirreza, Stromatias, Evangelos, Soto, Miguel, Serrano-Gotarredona, Teresa, and Linares-Barranco, Bernabé
- Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware
- Published
- 2018
48. Is Neuromorphic MNIST neuromorphic? Analyzing the discriminative power of neuromorphic datasets in the time domain
- Author
-
Yansong Chua, Haizhou Li, and Laxmi R Iyer
- Subjects
spiking neural network, spike timing dependent plasticity, spike time coding, neuromorphic benchmark, N-MNIST dataset, MNIST database, Backpropagation, Pattern recognition, Neuromorphic engineering, Artificial neural network - Abstract
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess if neuromorphic datasets recorded from static images are able to evaluate the ability of SNNs to use spike timings in their calculations. We have analyzed N-MNIST, N-Caltech101 and DvsGesture along these lines, but focus our study on N-MNIST. First we evaluate if additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieve 99.23 and 78.01% accuracy. These are comparable to the state of the art—showing that an algorithm that purely works on spatial data can classify these datasets. Second we compare N-MNIST and DvsGesture on two STDP algorithms, RD-STDP, that can classify only spatial data, and STDP-tempotron that classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST. N-MNIST does not, therefore, highlight the ability of SNNs to classify temporal data. The conclusions of this paper open the question—what dataset can evaluate SNN ability to classify temporal data?
- Published
- 2018
- Full Text
- View/download PDF
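The "frame-based version" baseline in the entry above amounts to integrating the event stream over time and discarding the timestamps. A minimal sketch of that collapse is given below; the (x, y, t, polarity) tuple format and the 34x34 resolution are assumptions about the event representation, not a reader for the actual N-MNIST files.

    import numpy as np

    def events_to_frame(events, height=34, width=34):
        """Collapse an event list into a 2-D count image, discarding timestamps.

        `events` is assumed to be an iterable of (x, y, t, polarity) tuples;
        the real N-MNIST files use a packed binary format, so this is only a
        schematic of the 'frame-based version' idea, not a file reader.
        """
        frame = np.zeros((height, width), dtype=np.float32)
        for x, y, _t, _p in events:
            frame[y, x] += 1.0               # accumulate events per pixel
        if frame.max() > 0:
            frame /= frame.max()             # normalize for an ANN classifier
        return frame

    # Tiny synthetic example: a few events at two pixels.
    events = [(3, 5, 10, 1), (3, 5, 42, 0), (20, 17, 55, 1)]
    frame = events_to_frame(events)
    print(frame[5, 3], frame[17, 20])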
49. A compact spike-timing-dependent-plasticity circuit for floating gate weight implementation
- Author
-
A. W. Smith, Steve Hall, and L.J. McDaid
- Subjects
Spike timing dependent plasticity, Long-term potentiation, Plasticity, Floating gate, MOSFET, Transistor, Capacitor, Neural networks, Electronic circuit, Cognitive Neuroscience - Abstract
Spike timing dependent plasticity (STDP) forms the basis of learning within neural networks. STDP allows for the modification of synaptic weights based upon the relative timing of pre- and post-synaptic spikes. A compact circuit is presented which can implement STDP, including the critical plasticity window, to determine synaptic modification. A physical model to predict the time window for plasticity to occur is formulated and the effects of process variations on the window are analyzed. The STDP circuit is implemented using two dedicated circuit blocks, one for potentiation and one for depression, where each block consists of 4 transistors and a polysilicon capacitor. SpectreS simulations of the back-annotated layout of the circuit and experimental results indicate that STDP with biologically plausible critical timing windows, over a range from the microsecond scale to 100 ms, can be implemented. Also, a floating gate weight storage capability, with drive circuits, is presented and a detailed analysis correlating weight changes with charging time is given.
- Published
- 2014
- Full Text
- View/download PDF
50. Immunity to Device Variations in a Spiking Neural Network With Memristive Nanodevices
- Author
-
Damien Querlioz, Olivier Bichler, Philippe Dollfus, and Christian Gamrat
- Subjects
spiking neural networks, memristive devices, memristors, neuromorphic, unsupervised learning, spike timing dependent plasticity, Artificial neural network, Neuromorphic engineering, Electronic component, Robustness (computer science) - Abstract
Memristive nanodevices can feature a compact multi-level non-volatile memory function, but are prone to device variability. We propose a novel neural network-based computing paradigm, which exploits their specific physics, and which has virtual immunity to their variability. Memristive devices are used as synapses in a spiking neural network performing unsupervised learning. They learn using a simplified and customized "spike timing dependent plasticity" rule. In the network, the neurons' thresholds are adjusted following a homeostasis-type rule. We perform system-level simulations with an experimentally verified model of the memristive devices' behavior. They show, on the textbook case of character recognition, that performance can compare with traditional supervised networks of similar complexity. They also show that the system can retain functionality with extreme variations of various memristive devices' parameters (a relative standard dispersion of more than 50% is tolerated on all device parameters), thanks to the robustness of the scheme, its unsupervised nature, and the capability of homeostasis. Additionally, the network can adjust to stimuli presented with different coding schemes, is particularly robust to read disturb effects, and does not require unrealistic control of the devices' conductance. These results open the way for a novel design approach for ultra-adaptive electronic systems.
- Published
- 2013
- Full Text
- View/download PDF
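One ingredient the entry above credits for the variability tolerance is a homeostasis-type rule on the neuron thresholds. The sketch below is a minimal illustration of such a rule (the equation and constants are illustrative, not the paper's): the threshold creeps up whenever the neuron fires and relaxes when it is silent, so each neuron's long-run firing rate is pulled toward a common target regardless of how strong or weak its particular devices happen to be.

    import numpy as np

    rng = np.random.default_rng(2)

    def homeostatic_threshold(theta, fired, target_rate=0.05, eta=0.01):
        """Illustrative homeostasis rule: raise the threshold a little when
        the neuron fires and lower it a little when it stays silent, so the
        long-run firing rate is driven toward target_rate."""
        return theta + eta * ((1.0 if fired else 0.0) - target_rate)

    theta = 0.5
    for _ in range(20000):
        drive = rng.random()                 # stand-in for the neuron's input current
        fired = drive > theta
        theta = homeostatic_threshold(theta, fired)
    print(f"threshold: {theta:.3f}  (P(fire) ~ {1 - theta:.3f}, target 0.05)")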