95 results for "spike timing dependent plasticity"
Search Results
2. Consciousness driven Spike Timing Dependent Plasticity
- Author
- Yadav, Sushant, Chaudhary, Santosh, Kumar, Rajesh, and Nkomozepi, Pilani
- Published
- 2025
3. A spiking binary neuron — detector of causal links
- Author
- Kiselev, Mikhail V., Larionov, Denis Aleksandrovich, and Urusov, Andrey M.
- Subjects
- spiking neural network, binary neuron, spike timing dependent plasticity, dopamine-modulated plasticity, anti-Hebbian plasticity, reinforcement learning, neuromorphic hardware, Physics, QC1-999
- Abstract
Purpose. Causal relationship recognition is a fundamental operation in neural networks aimed at learning behavior, action planning, and inferring external world dynamics. This operation is particularly crucial for reinforcement learning (RL). In the context of spiking neural networks (SNNs), events are represented as spikes emitted by network neurons or input nodes. Detecting causal relationships within these events is essential for effective RL implementation. Methods. This research paper presents a novel approach to realize causal relationship recognition using a simple spiking binary neuron. The proposed method leverages specially designed synaptic plasticity rules, which are both straightforward and efficient. Notably, our approach accounts for the temporal aspects of detected causal links and accommodates the representation of spiking signals as single spikes or tight spike sequences (bursts), as observed in biological brains. Furthermore, this study places a strong emphasis on the hardware-friendliness of the proposed models, ensuring their efficient implementation on modern and future neuroprocessors. Results. Compared with precise machine learning techniques, such as decision tree algorithms and convolutional neural networks, our neuron demonstrates satisfactory accuracy despite its simplicity. Conclusion. We introduce a multi-neuron structure capable of operating in more complex environments with enhanced accuracy, making it a promising candidate for the advancement of RL applications in SNNs.
- Published
- 2024
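The record above does not reproduce the plasticity rule itself. As a rough illustration of the idea only, a binary neuron can learn causal links by potentiating inputs whose spikes precede or accompany its own output spike and depressing those that stay silent. The Python sketch below is entirely hypothetical: the function name, constants, and rule are invented for illustration and are not taken from the paper.

```python
# Toy binary neuron that learns causal links: inputs whose spikes
# tend to precede or accompany the neuron's own spike are potentiated,
# the rest are depressed. All constants are illustrative placeholders.

def run_causal_neuron(input_trains, threshold=1.5, lr=0.1, steps=None):
    n = len(input_trains)
    steps = steps or len(input_trains[0])
    w = [1.0] * n                     # synaptic weights
    recent = [False] * n              # did input i fire on the previous step?
    out_spikes = []
    for t in range(steps):
        x = [train[t] for train in input_trains]
        drive = sum(wi for wi, xi in zip(w, x) if xi)
        fired = drive >= threshold
        out_spikes.append(fired)
        if fired:
            for i in range(n):
                if recent[i] or x[i]:
                    w[i] += lr        # input preceded/accompanied output: potentiate
                else:
                    w[i] -= lr        # input silent around the spike: depress
                w[i] = max(0.0, min(2.0, w[i]))
        recent = x
    return w, out_spikes

# A drives the output (fires every 3rd step); B is uncorrelated noise.
A = [1 if t % 3 == 0 else 0 for t in range(60)]
B = [1 if t % 7 == 5 else 0 for t in range(60)]
w, spikes = run_causal_neuron([A, A, B])  # two causal inputs, one noise input
```

After training, the weights of the two causal inputs saturate high while the noise input's weight decays, which is the qualitative behavior the abstract describes.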
4. Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware.
- Author
- Vallejo-Mancero, Bernardo, Madrenas, Jordi, and Zapata, Mireya
- Subjects
- ARTIFICIAL neural networks, PROCESS capability, DATABASES, PARALLEL processing, NEUROPLASTICITY
- Abstract
Recent advancements in neuromorphic computing have led to the development of hardware architectures inspired by Spiking Neural Networks (SNNs) to emulate the efficiency and parallel processing capabilities of the human brain. This work focuses on testing the HEENS architecture, specifically designed for high parallel processing and biological realism in SNN emulation, implemented on a ZYNQ family FPGA. The study applies this architecture to the classification of digits using the well-known MNIST database. The image resolutions were adjusted to match HEENS' processing capacity. Results were compared with existing work, demonstrating HEENS' performance comparable to other solutions. This study highlights the importance of balancing accuracy and efficiency in the execution of applications. HEENS offers a flexible solution for SNN emulation, allowing for the implementation of programmable neural and synaptic models. It encourages the exploration of novel algorithms and network architectures, providing an alternative for real-time processing with efficient energy consumption.
- Published
- 2024
5. Real-time execution of SNN models with synaptic plasticity for handwritten digit recognition on SIMD hardware
- Author
- Bernardo Vallejo-Mancero, Jordi Madrenas, and Mireya Zapata
- Subjects
- HEENS, neuromorphic hardware, spiking neural network, LIF model, Spike Timing Dependent Plasticity, MNIST dataset, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Recent advancements in neuromorphic computing have led to the development of hardware architectures inspired by Spiking Neural Networks (SNNs) to emulate the efficiency and parallel processing capabilities of the human brain. This work focuses on testing the HEENS architecture, specifically designed for high parallel processing and biological realism in SNN emulation, implemented on a ZYNQ family FPGA. The study applies this architecture to the classification of digits using the well-known MNIST database. The image resolutions were adjusted to match HEENS' processing capacity. Results were compared with existing work, demonstrating HEENS' performance comparable to other solutions. This study highlights the importance of balancing accuracy and efficiency in the execution of applications. HEENS offers a flexible solution for SNN emulation, allowing for the implementation of programmable neural and synaptic models. It encourages the exploration of novel algorithms and network architectures, providing an alternative for real-time processing with efficient energy consumption.
- Published
- 2024
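The two HEENS records above list the leaky integrate-and-fire (LIF) model among their subjects. For readers unfamiliar with it, a generic discrete-time LIF neuron fits in a few lines of Python; the parameters below are illustrative defaults, not values from the HEENS papers.

```python
# Generic discrete-time leaky integrate-and-fire (LIF) neuron.
# Parameters are illustrative, not taken from the HEENS implementation.

def lif_trace(input_current, tau=20.0, v_rest=0.0, v_th=1.0,
              v_reset=0.0, dt=1.0):
    """Return (membrane trace, spike times) for a sequence of input currents."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Euler step of dv/dt = (v_rest - v)/tau + i_in
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_th:            # threshold crossing: emit spike and reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return trace, spikes

trace, spike_times = lif_trace([0.08] * 100)  # constant suprathreshold drive
```

With a constant drive whose steady-state voltage exceeds the threshold, the neuron fires periodically; with zero drive it stays silent, which is the qualitative behavior hardware LIF implementations reproduce.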
6. Theoretical investigations into principles of topographic map formation and applications
- Author
- Gale, Nicholas, Eglen, Stephen, and Franze, Kristian
- Subjects
- Chemotaxis, Data science, Dynamical Systems, EphA3, GPU acceleration, Neural Development, Retinotopy, Spike timing dependent plasticity, Superior Colliculus, Travelling Salesman Problem
- Abstract
Topographic maps are ubiquitous brain structures that are fundamental to sensory and higher order systems and are composed of connections between two regions obeying the relationship: physically neighbouring cells in a pre-synaptic region connect to physically neighbouring cells in the post-synaptic region. The developmental principles driving topographic map formation are usually studied within the context of genetic perturbations coupled to high resolution measurements and for these the mouse retinotopic map from retina to superior colliculus has emerged as a useful experimental context. Modelling coupled with genetic perturbation experiments has revealed three key developmental mechanisms: neural activity, chemotaxis, and competition. Some principal challenges in modelling this development include explaining the role of the spatio-temporal structure of patterned neural activity, determining the relative interaction between developmental components, and developing models that are sufficiently computationally efficient that statistical methodologies can be applied to them. Neural activity is a well measured component of retinotopic development and several independent measurement techniques have recorded the existence of spatiotemporally patterned waves at key critical points during development. Existing modelling methodologies reduce this rich spatiotemporal context into a distance dependent correlation function and have subsequently had challenges making quantitative predictions about the effect of manipulating these activity patterns. A neural field theory approach is used to develop mathematical theory which can incorporate these spatiotemporal structures. Bayesian MCMC regression analysis is performed on biological measurements to assess the accuracy of the model and make predictions about the time-scale on which activity operates. 
This time scale is tuned to the length of an average wave pattern suggesting the system is integrating all information in these waves. The interaction between chemotaxis and neural activity has historically been thought of as linearly independent. A recent study which perturbs both developmental mechanisms simultaneously has suggested that these two are highly stochastic and regular development depends on a critical fine-tuned balance between the two: the heterozygous phenotype was observed to present as both a wild-type and homozygote for different specimens. This hypothesis is tested against the data-set used to generate it. Recreating the entire experimental pipeline in silico with the most parsimonious existing model is able to account for the data without the need to appeal to stochasticity in the mechanisms. A statistical analysis demonstrates that the heterozygous state does not significantly overlap with the homozygotes and that the stochasticity is likely due to the measurement technique. The existing models are computationally demanding; at least O(n^3) in the number of retinal cells instantiated by the model. This computational demand renders these classes of models incapable of performing statistical regression and means that their parameter spaces are largely unexplored. A modelling framework which integrates the core operating mechanisms of the model is developed and when implemented on modern GPU computational architectures is able to achieve a near-linear time complexity scaling. This model is demonstrated to capture the explanatory power of existing modelling methodologies. Finally, the role of competition is explored in a dimensional reduction framework: the Elastic Net. The Elastic Net has been used both as a heuristic optimiser (validated on the NP-complete Travelling Salesman Problem) and to explain the development of cortical feature maps. 
The addition of competition is demonstrated to act as a counter-measure to the retinotopic distorting components of the Elastic Net as a cortical map generator. Further analysis demonstrates that competition substantially improves heuristic performance on the Travelling Salesman Problem making it competitive against state of the art solvers when performance is normalised by solution times. The heuristic converges on a length scaling law that is discussed in the context of wire-minimisation problem.
- Published
- 2022
7. TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing: from synaptic plasticity to stochastic resonance.
- Author
- Maldonado, David, Cantudo, Antonio, Perez, Eduardo, Romero-Zaliz, Rocio, Quesada, Emilio Perez-Bosch, Mahadevaiah, Mamathamba Kalishettyhalli, Jimenez-Molinos, Francisco, Wenger, Christian, and Roldan, Juan Bautista
- Subjects
- STOCHASTIC resonance, NEUROPLASTICITY, TITANIUM nitride, DEPENDENCY (Psychology)
- Abstract
We characterize TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing. We analyze different features that allow the devices to mimic biological synapses and present the models to reproduce analytically some of the data measured. In particular, we have measured the spike timing dependent plasticity behavior in our devices and later on we have modeled it. The spike timing dependent plasticity model was implemented as the learning rule of a spiking neural network that was trained to recognize the MNIST dataset. Variability is implemented and its influence on the network recognition accuracy is considered accounting for the number of neurons in the network and the number of training epochs. Finally, stochastic resonance is studied as another synaptic feature. It is shown that this effect is important and greatly depends on the noise statistical characteristics.
- Published
- 2023
8. TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing: from synaptic plasticity to stochastic resonance
- Author
- David Maldonado, Antonio Cantudo, Eduardo Perez, Rocio Romero-Zaliz, Emilio Perez-Bosch Quesada, Mamathamba Kalishettyhalli Mahadevaiah, Francisco Jimenez-Molinos, Christian Wenger, and Juan Bautista Roldan
- Subjects
- resistive switching devices, neuromorphic computing, synaptic behavior, spike timing dependent plasticity, stochastic resonance, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
We characterize TiN/Ti/HfO2/TiN memristive devices for neuromorphic computing. We analyze different features that allow the devices to mimic biological synapses and present the models to reproduce analytically some of the data measured. In particular, we have measured the spike timing dependent plasticity behavior in our devices and later on we have modeled it. The spike timing dependent plasticity model was implemented as the learning rule of a spiking neural network that was trained to recognize the MNIST dataset. Variability is implemented and its influence on the network recognition accuracy is considered accounting for the number of neurons in the network and the number of training epochs. Finally, stochastic resonance is studied as another synaptic feature. It is shown that this effect is important and greatly depends on the noise statistical characteristics.
- Published
- 2023
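Both records above fit an STDP model to device measurements, but the abstracts do not give the model's functional form. A common choice in the wider literature is the double-exponential STDP window, in which a pre-before-post spike pair potentiates the synapse and a post-before-pre pair depresses it. The sketch below uses placeholder amplitudes and time constants, not fitted device parameters.

```python
import math

# Textbook double-exponential STDP window:
#   dt > 0 (pre before post) -> potentiation (LTP)
#   dt < 0 (post before pre) -> depression  (LTD)
# Amplitudes and time constants are illustrative placeholders.

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single spike pair, dt_ms = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)     # LTP branch
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)   # LTD branch
    return 0.0
```

Evaluating `stdp_dw` over a range of pairing intervals reproduces the asymmetric window that device characterisation papers typically fit to measured conductance changes.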
9. Supervised learning of spatial features with STDP and homeostasis using Spiking Neural Networks on SpiNNaker.
- Author
- Davies, Sergio, Gait, Andrew, Rowley, Andrew, and Di Nuovo, Alessandro
- Subjects
- ARTIFICIAL neural networks, PATTERN recognition systems, SUPERVISED learning, NETWORK performance, IMAGE analysis
- Abstract
Artificial Neural Networks (ANN) have gained significant popularity thanks to their ability to learn using the well-known backpropagation algorithm. Conversely, Spiking Neural Networks (SNNs), despite having broader capabilities than ANNs, have always posed challenges in the training phase. This paper shows a new method to perform supervised learning on SNNs, using Spike Timing Dependent Plasticity (STDP) and homeostasis, aiming at training the network to identify spatial patterns. Spatial patterns refer to spike patterns without a time component, where all spike events occur simultaneously. The method is tested using the SpiNNaker digital architecture. A SNN is trained to recognise one or multiple patterns and performance metrics are extracted to measure the performance of the network. Some considerations are drawn from the results showing that, in the case of a single trained pattern, the network behaves as the ideal detector, with 100% accuracy in detecting the trained pattern. However, as the number of trained patterns on a single network increases, the accuracy of identification is linked to the similarities between these patterns. This method of training an SNN to detect spatial patterns may be applied to pattern recognition in static images or traffic analysis in computer networks, where each network packet represents a spatial pattern. It will be stipulated that the homeostatic factor may enable the network to detect patterns with some degree of similarity, rather than only perfectly matching patterns. The principles outlined in this article serve as the fundamental building blocks for more complex systems that utilise both spatial and temporal patterns by converting specific features of input signals into spikes. One example of such a system is a computer network packet classifier, tasked with real-time identification of packet streams based on features within the packet content.
- Published
- 2025
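The abstract above pairs STDP with homeostasis but does not spell out the homeostatic mechanism. A standard form in the literature is an adaptive threshold that steers each neuron toward a target firing rate. The toy Python sketch below illustrates that generic idea only; the function name and all constants are hypothetical, not from the SpiNNaker implementation.

```python
# Toy firing-rate homeostasis: each neuron integrates its drive and
# adapts its own threshold so that its long-run firing rate approaches
# a shared target rate. All constants are illustrative placeholders.

def homeostatic_rates(drives, target=0.1, eta=0.05, steps=5000):
    """Return per-neuron firing rates after threshold adaptation."""
    th = [1.0] * len(drives)      # adaptive firing thresholds
    v = [0.0] * len(drives)       # membrane accumulators
    counts = [0] * len(drives)
    for _ in range(steps):
        for i, d in enumerate(drives):
            v[i] += d
            fired = v[i] >= th[i]
            if fired:
                v[i] = 0.0
                counts[i] += 1
            # homeostasis: firing above target raises the threshold,
            # staying below target lowers it
            th[i] += eta * ((1.0 if fired else 0.0) - target)
            th[i] = max(th[i], 1e-6)
    return [c / steps for c in counts]

# A weakly and a strongly driven neuron both settle near the target rate.
rates = homeostatic_rates([0.02, 0.2])
```

The point of the sketch is that homeostasis equalises activity across neurons with very different input strengths, which is what lets a trained detector remain responsive to imperfectly matching patterns.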
10. Continual learning with hebbian plasticity in sparse and predictive coding networks: a survey and perspective
- Author
- Ali Safa
- Subjects
- spiking neural network, snn, spike timing dependent plasticity, STDP, Hebbian, continual learning, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Recently, the use of bio-inspired learning techniques such as Hebbian learning and its closely-related spike-timing-dependent plasticity (STDP) variant has drawn significant attention for the design of compute-efficient AI systems that can continuously learn on-line at the edge. A key differentiating factor regarding this emerging class of neuromorphic continual learning systems lies in the fact that learning must be carried out using a data stream received in its natural order, as opposed to conventional gradient-based offline training, where a static training dataset is assumed available a priori and randomly shuffled to make the training set independent and identically distributed (i.i.d). In contrast, the emerging class of neuromorphic CL systems covered in this survey must learn to integrate new information on the fly in a non-i.i.d manner, which makes these systems subject to catastrophic forgetting. In order to build the next generation of neuromorphic AI systems that can continuously learn at the edge, a growing number of research groups are studying the use of sparse and predictive coding (PC)-based Hebbian neural network architectures and the related spiking neural networks (SNNs) equipped with STDP learning. However, since this research field is still emerging, there is a need for providing a holistic view of the different approaches proposed in the literature so far. To this end, this survey covers a number of recent works in the field of neuromorphic CL based on state-of-the-art sparse and PC technology; provides background theory to help interested researchers quickly learn the key concepts; and discusses important future research questions in light of the different works covered in this paper. It is hoped that this survey will contribute towards future research in the field of neuromorphic CL.
- Published
- 2024
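As background to the Hebbian rules the survey above covers: a plain Hebbian update (Δw = η·x·y) grows weights without bound, which is one reason stabilised variants are used in practice. A classic example is Oja's rule, which adds a decay term and converges toward the first principal direction of the inputs. The sketch below is generic textbook material, not code from the survey; the data and constants are invented.

```python
import random

# Minimal Hebbian learning with Oja's stabilising term:
#   dw = eta * y * (x - y * w)
# Plain Hebb (dw = eta * x * y) diverges; Oja's rule converges toward
# the first principal component with unit norm. Constants illustrative.

def oja_train(samples, eta=0.05, epochs=50, seed=0):
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))   # linear neuron output
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# Inputs vary mostly along the (1, 1) direction.
data = [(1.0, 0.9), (0.8, 1.1), (-1.0, -1.0), (-0.9, -1.2), (1.1, 1.0)]
w = oja_train(data)
norm = sum(wi * wi for wi in w) ** 0.5
```

After training, the weight vector has approximately unit norm and points along the dominant direction of the data, unlike plain Hebbian learning, whose weights would keep growing.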
11. Heterogeneous recurrent spiking neural network for spatio-temporal classification.
- Author
- Chakraborty, Biswadeep and Mukhopadhyay, Saibal
- Subjects
- ARTIFICIAL neural networks, RECURRENT neural networks, ARTIFICIAL intelligence
- Abstract
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to deep networks, the performance of unsupervised learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets (DVS128 Gesture). We observed an accuracy of 94.32% for the KTH dataset, 79.58% and 77.53% for the UCF11 and UCF101 datasets, respectively, and an accuracy of 96.54% on the event-based DVS Gesture dataset using the novel unsupervised HRSNN model. The key novelty of the HRSNN is that the recurrent layer in HRSNN consists of heterogeneous neurons with varying firing/relaxation dynamics, and they are trained via heterogeneous spike-time-dependent-plasticity (STDP) with varying learning dynamics for each synapse. We show that this novel combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that HRSNN can achieve similar performance to state-of-the-art backpropagation trained supervised SNN, but with less computation (fewer neurons and sparse connection) and less training data.
- Published
- 2023
12. Heterogeneous recurrent spiking neural network for spatio-temporal classification
- Author
- Biswadeep Chakraborty and Saibal Mukhopadhyay
- Subjects
- spiking neural network (SNN), action detection and recognition, spike timing dependent plasticity, heterogeneity, unsupervised learning, Bayesian Optimization (BO), Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to deep networks, the performance of unsupervised learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based datasets (DVS128 Gesture). We observed an accuracy of 94.32% for the KTH dataset, 79.58% and 77.53% for the UCF11 and UCF101 datasets, respectively, and an accuracy of 96.54% on the event-based DVS Gesture dataset using the novel unsupervised HRSNN model. The key novelty of the HRSNN is that the recurrent layer in HRSNN consists of heterogeneous neurons with varying firing/relaxation dynamics, and they are trained via heterogeneous spike-time-dependent-plasticity (STDP) with varying learning dynamics for each synapse. We show that this novel combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that HRSNN can achieve similar performance to state-of-the-art backpropagation trained supervised SNN, but with less computation (fewer neurons and sparse connection) and less training data.
- Published
- 2023
13. Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization.
- Author
- Fu, Chentao, Xiang, Shuiying, Han, Yanan, Song, Ziwei, and Hao, Yue
- Subjects
- MACHINE learning, SUPERVISED learning, SURFACE emitting lasers, MATHEMATICAL optimization, BREAST cancer, PROBLEM solving
- Abstract
We propose a generalized supervised learning algorithm for multilayer photonic spiking neural networks (SNNs) by combining the spike-timing dependent plasticity (STDP) rule and the gradient descent mechanism. A vertical-cavity surface-emitting laser with an embedded saturable absorber (VCSEL-SA) is employed as a photonic leaky-integrate-and-fire (LIF) neuron. The temporal coding strategy is employed to transform information into the precise firing time. With the modified supervised learning algorithm, the trained multilayer photonic SNN successfully solves the XOR problem and performs well on the Iris and Wisconsin breast cancer datasets. This indicates that a generalized supervised learning algorithm is realized for multilayer photonic SNN. In addition, network optimization is performed by considering different network sizes.
- Published
- 2022
14. Spiking neural networks compensate for weight drift in organic neuromorphic device networks
- Author
- Daniel Felder, John Linkhorst, and Matthias Wessling
- Subjects
- neuromorphic computing, spiking neural network, spike timing dependent plasticity, organic electronics, algorithm-hardware co-design, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Organic neuromorphic devices can accelerate neural networks and integrate with biological systems. Devices based on the biocompatible and conductive polymer PEDOT:PSS are fast, require low amounts of energy and perform well in crossbar simulations. However, parasitic electrochemical reactions lead to self-discharge and the fading of the learned conductance states over time. This limits a neural network’s operating time and requires complex compensation mechanisms. Spiking neural networks (SNNs) take inspiration from biology to implement local and always-on learning. We show that these SNNs can function on organic neuromorphic hardware and compensate for self-discharge by continuously relearning and reinforcing forgotten states. In this work, we use a high-resolution charge transport model to describe the behavior of organic neuromorphic devices and create a computationally efficient surrogate model. By integrating the surrogate model into a Brian 2 simulation, we can describe the behavior of SNNs on organic neuromorphic hardware. A biologically plausible two-layer network for recognizing 28 × 28 pixel MNIST images is trained and observed during self-discharge. The network achieves, for its size, competitive recognition results of up to 82.5%. Building a network with forgetful devices yields superior accuracy during training with 84.5% compared to ideal devices. However, trained networks without active spike-timing-dependent plasticity quickly lose their predictive performance. We show that online learning can keep the performance at a steady level close to the initial accuracy, even for idle rates of up to 90%. This performance is maintained when the output neuron’s labels are not revalidated for up to 24 h. These findings reconfirm the potential of organic neuromorphic devices for brain-inspired computing. 
Their biocompatibility and the demonstrated adaptability to SNNs open the path towards close integration with multi-electrode arrays, drug-delivery devices, and other bio-interfacing systems as either fully organic or hybrid organic-inorganic systems.
- Published
- 2023
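The compensation effect described above can be caricatured in a few lines: let a "conductance" decay exponentially (self-discharge) while an always-on, STDP-like update periodically re-potentiates it toward its learned value. This toy model only illustrates the dynamic; it is not the authors' charge-transport surrogate model, and all constants are invented.

```python
import math

# Schematic of the paper's central effect: device conductances decay
# ("self-discharge"), and an always-on local learning rule keeps
# re-potentiating the synapses that are actually in use.
# Purely illustrative; constants are placeholders.

def simulate(decay_tau=50.0, lr=0.2, steps=200, relearn=True):
    w = 1.0                                   # learned conductance state
    for t in range(steps):
        w *= math.exp(-1.0 / decay_tau)       # parasitic self-discharge
        if relearn and t % 5 == 0:            # periodic pre/post pairing
            w += lr * (1.0 - w)               # STDP-like push toward target
    return w

drifted = simulate(relearn=False)   # fades toward zero
held = simulate(relearn=True)       # online relearning compensates
```

Without relearning the state decays to near zero; with the periodic update it settles at a steady level, mirroring the paper's observation that online STDP keeps accuracy close to its initial value despite drift.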
15. Toward Learning in Neuromorphic Circuits Based on Quantum Phase Slip Junctions
- Author
- Ran Cheng, Uday S. Goteti, Harrison Walker, Keith M. Krause, Luke Oeding, and Michael C. Hamilton
- Subjects
- quantum phase slip junction, Josephson junction, neuromorphic computing, spike timing dependent plasticity, unsupervised learning, coupled synapse networks, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
We explore the use of superconducting quantum phase slip junctions (QPSJs), an electromagnetic dual to Josephson Junctions (JJs), in neuromorphic circuits. These small circuits could serve as the building blocks of neuromorphic circuits for machine learning applications because they exhibit desirable properties such as inherent ultra-low energy per operation, high speed, dense integration, negligible loss, and natural spiking responses. In addition, they have a relatively straight-forward micro/nano fabrication, which shows promise for implementation of an enormous number of lossless interconnections that are required to realize complex neuromorphic systems. We simulate QPSJ-only, as well as hybrid QPSJ + JJ circuits for application in neuromorphic circuits including artificial synapses and neurons, as well as fan-in and fan-out circuits. We also design and simulate learning circuits, where a simplified spike timing dependent plasticity rule is realized to provide potential learning mechanisms. We also take an alternative approach, which shows potential to overcome some of the expected challenges of QPSJ-based neuromorphic circuits, via QPSJ-based charge islands coupled together to generate non-linear charge dynamics that result in a large number of programmable weights or non-volatile memory states. Notably, we show that these weights are a function of the timing and frequency of the input spiking signals and can be programmed using a small number of DC voltage bias signals, therefore exhibiting spike-timing and rate dependent plasticity, which are mechanisms to realize learning in neuromorphic circuits.
- Published
- 2021
16. Toward Learning in Neuromorphic Circuits Based on Quantum Phase Slip Junctions.
- Author
- Cheng, Ran, Goteti, Uday S., Walker, Harrison, Krause, Keith M., Oeding, Luke, and Hamilton, Michael C.
- Subjects
- JOSEPHSON junctions, MACHINE learning, SYNAPSES
- Abstract
We explore the use of superconducting quantum phase slip junctions (QPSJs), an electromagnetic dual to Josephson Junctions (JJs), in neuromorphic circuits. These small circuits could serve as the building blocks of neuromorphic circuits for machine learning applications because they exhibit desirable properties such as inherent ultra-low energy per operation, high speed, dense integration, negligible loss, and natural spiking responses. In addition, they have a relatively straight-forward micro/nano fabrication, which shows promise for implementation of an enormous number of lossless interconnections that are required to realize complex neuromorphic systems. We simulate QPSJ-only, as well as hybrid QPSJ + JJ circuits for application in neuromorphic circuits including artificial synapses and neurons, as well as fan-in and fan-out circuits. We also design and simulate learning circuits, where a simplified spike timing dependent plasticity rule is realized to provide potential learning mechanisms. We also take an alternative approach, which shows potential to overcome some of the expected challenges of QPSJ-based neuromorphic circuits, via QPSJ-based charge islands coupled together to generate non-linear charge dynamics that result in a large number of programmable weights or non-volatile memory states. Notably, we show that these weights are a function of the timing and frequency of the input spiking signals and can be programmed using a small number of DC voltage bias signals, therefore exhibiting spike-timing and rate dependent plasticity, which are mechanisms to realize learning in neuromorphic circuits.
- Published
- 2021
17. Dopaminergic Neuromodulation of Spike Timing Dependent Plasticity in Mature Adult Rodent and Human Cortical Neurons
- Author
- Emma Louise Louth, Rasmus Langelund Jørgensen, Anders Rosendal Korshoej, Jens Christian Hedemann Sørensen, and Marco Capogna
- Subjects
- dopamine, human cortical slices, layer 5 pyramidal neurons, spike timing dependent plasticity, synaptic inhibition, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Synapses in the cerebral cortex constantly change, and this dynamic property, regulated by the action of neuromodulators such as dopamine (DA), is essential for reward learning and memory. DA modulates spike-timing-dependent plasticity (STDP), a cellular model of learning and memory, in juvenile rodent cortical neurons. However, it is unknown whether this neuromodulation also occurs at excitatory synapses of cortical neurons in mature adult mice or in humans. Cortical layer V pyramidal neurons were recorded with whole cell patch clamp electrophysiology and an extracellular stimulating electrode was used to induce STDP. DA was either bath-applied or optogenetically released in slices from mice. Classical STDP induction protocols triggered non-Hebbian excitatory synaptic depression in the mouse or no plasticity at human cortical synapses. DA reverted long term synaptic depression to baseline in the mouse via dopamine type 2 receptors or elicited long term synaptic potentiation in human cortical synapses. Furthermore, when DA was applied during an STDP protocol it depressed presynaptic inhibition in the mouse but not in the human cortex. Thus, DA modulates excitatory synaptic plasticity differently in human vs. mouse cortex. The data underscore the importance of DA in gating cognition in humans, and may inform therapeutic interventions to recover brain function from diseases.
- Published
- 2021
18. Dopaminergic Neuromodulation of Spike Timing Dependent Plasticity in Mature Adult Rodent and Human Cortical Neurons.
- Author
- Louth, Emma Louise, Jørgensen, Rasmus Langelund, Korshoej, Anders Rosendal, Sørensen, Jens Christian Hedemann, and Capogna, Marco
- Subjects
- PATCH-clamp techniques (Electrophysiology), DOPAMINERGIC neurons, PYRAMIDAL neurons, RODENTS, REWARD (Psychology), NEURONS
- Abstract
Synapses in the cerebral cortex constantly change, and this dynamic property, regulated by the action of neuromodulators such as dopamine (DA), is essential for reward learning and memory. DA modulates spike-timing-dependent plasticity (STDP), a cellular model of learning and memory, in juvenile rodent cortical neurons. However, it is unknown whether this neuromodulation also occurs at excitatory synapses of cortical neurons in mature adult mice or in humans. Cortical layer V pyramidal neurons were recorded with whole cell patch clamp electrophysiology and an extracellular stimulating electrode was used to induce STDP. DA was either bath-applied or optogenetically released in slices from mice. Classical STDP induction protocols triggered non-Hebbian excitatory synaptic depression in the mouse or no plasticity at human cortical synapses. DA reverted long term synaptic depression to baseline in the mouse via dopamine type 2 receptors or elicited long term synaptic potentiation in human cortical synapses. Furthermore, when DA was applied during an STDP protocol it depressed presynaptic inhibition in the mouse but not in the human cortex. Thus, DA modulates excitatory synaptic plasticity differently in human vs. mouse cortex. The data underscore the importance of DA in gating cognition in humans, and may inform therapeutic interventions to recover brain function from diseases.
- Published
- 2021
- Full Text
- View/download PDF
19. Memory stability and synaptic plasticity
- Author
-
Billings, Guy, van Rossum, Mark., and Morris, Richard
- Subjects
612.8 ,synaptic plasticity ,learning ,memory ,Spike timing dependent plasticity - Abstract
Numerous experiments have demonstrated that the activity of neurons can alter the strength of excitatory synapses. This synaptic plasticity is bidirectional and synapses can be strengthened (potentiation) or weakened (depression). Synaptic plasticity offers a mechanism that links the ongoing activity of the brain with persistent physical changes to its structure. For this reason it is widely believed that synaptic plasticity mediates learning and memory. The hypothesis that synapses store memories by modifying their strengths raises an important issue. There should be a balance between the necessity that synapses change frequently, allowing new memories to be stored with high fidelity, and the necessity that synapses retain previously stored information. This is the plasticity stability dilemma. In this thesis the plasticity stability dilemma is studied in the context of the two dominant paradigms of activity dependent synaptic plasticity: Spike timing dependent plasticity (STDP) and long term potentiation and depression (LTP/D). Models of biological synapses are analysed and processes that might ameliorate the plasticity stability dilemma are identified. Two popular existing models of STDP are compared. Through this comparison it is demonstrated that the synaptic weight dynamics of STDP has a large impact upon the retention time of correlation between the weights of a single neuron and a memory. In networks it is shown that lateral inhibition stabilises the synaptic weights and receptive fields. To analyse LTP a novel model of LTP/D is proposed. The model centres on the distinction between early LTP/D, when synaptic modifications are persistent on a short timescale, and late LTP/D when synaptic modifications are persistent on a long timescale. In the context of the hippocampus it is proposed that early LTP/D allows the rapid and continuous storage of short lasting memory traces over a long lasting trace established with late LTP/D. 
It is shown that this might confer a longer memory retention time than in a system with only one phase of LTP/D. Experimental predictions about the dynamics of amnesia based upon this model are proposed. Synaptic tagging is a phenomenon whereby early LTP can be converted into late LTP, by subsequent induction of late LTP in a separate but nearby input. Synaptic tagging is incorporated into the LTP/D framework. Using this model it is demonstrated that synaptic tagging could lead to the conversion of a short-lasting memory trace into a longer-lasting trace. It is proposed that this allows the rescue of memory traces that were initially destined for complete decay. When combined with early and late LTP/D, synaptic tagging might allow the management of hippocampal memory traces, such that not all memories must be stored on the longest, most stable late-phase timescale. This lessens the plasticity stability dilemma in the hippocampus, where it has been hypothesised that memory traces must be frequently and vividly formed, but that not all traces demand eventual consolidation at the systems level.
- Published
- 2009
20. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain
- Author
-
Laxmi R. Iyer, Yansong Chua, and Haizhou Li
- Subjects
spiking neural network ,spike timing dependent plasticity ,N-MNIST dataset ,neuromorphic benchmark ,spike time coding ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess whether neuromorphic datasets recorded from static images are able to evaluate the ability of SNNs to use spike timings in their calculations. We have analyzed N-MNIST, N-Caltech101 and DvsGesture along these lines, but focus our study on N-MNIST. First, we evaluate whether additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieves 99.23% and 78.01% accuracy, respectively. These are comparable to the state of the art, showing that an algorithm that works purely on spatial data can classify these datasets. Second, we compare N-MNIST and DvsGesture on two STDP algorithms: RD-STDP, which can classify only spatial data, and STDP-tempotron, which classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST. N-MNIST does not, therefore, highlight the ability of SNNs to classify temporal data. The conclusions of this paper open the question: what dataset can evaluate SNN ability to classify temporal data?
- Published
- 2021
- Full Text
- View/download PDF
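The finding above rests on how "frame-based versions" of an event dataset are made: event timestamps are simply discarded and events are accumulated spatially. A minimal sketch of that collapse (the toy `(x, y, t)` event format and the `events_to_frame` name are illustrative assumptions, not the authors' code):

```python
import numpy as np

def events_to_frame(events, shape):
    """Accumulate events spatially, discarding their timestamps entirely."""
    frame = np.zeros(shape, dtype=np.int32)
    for x, y, t in events:
        frame[y, x] += 1  # the timing t plays no role in the frame
    return frame

# Two events at (0, 0) and one at (1, 2); their times are irrelevant here.
frame = events_to_frame([(0, 0, 5.0), (0, 0, 9.1), (1, 2, 3.3)], shape=(4, 4))
print(frame[0, 0], frame[2, 1])  # -> 2 1
```

If an ANN reaches near state-of-the-art accuracy on such frames, the temporal channel of the dataset carried little discriminative information, which is exactly the paper's argument for N-MNIST.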
21. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain.
- Author
-
Iyer, Laxmi R., Chua, Yansong, and Li, Haizhou
- Subjects
ARTIFICIAL neural networks ,TIME management - Abstract
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess whether neuromorphic datasets recorded from static images are able to evaluate the ability of SNNs to use spike timings in their calculations. We have analyzed N-MNIST, N-Caltech101 and DvsGesture along these lines, but focus our study on N-MNIST. First, we evaluate whether additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieves 99.23% and 78.01% accuracy, respectively. These are comparable to the state of the art, showing that an algorithm that works purely on spatial data can classify these datasets. Second, we compare N-MNIST and DvsGesture on two STDP algorithms: RD-STDP, which can classify only spatial data, and STDP-tempotron, which classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST. N-MNIST does not, therefore, highlight the ability of SNNs to classify temporal data. The conclusions of this paper open the question: what dataset can evaluate SNN ability to classify temporal data? [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Multilayer Photonic Spiking Neural Networks: Generalized Supervised Learning Algorithm and Network Optimization
- Author
-
Chentao Fu, Shuiying Xiang, Yanan Han, Ziwei Song, and Yue Hao
- Subjects
photonic spiking neural network ,multilayer spiking neural network ,supervised learning ,vertical-cavity surface-emitting lasers ,spike timing dependent plasticity ,Applied optics. Photonics ,TA1501-1820 - Abstract
We propose a generalized supervised learning algorithm for multilayer photonic spiking neural networks (SNNs) by combining the spike-timing dependent plasticity (STDP) rule and the gradient descent mechanism. A vertical-cavity surface-emitting laser with an embedded saturable absorber (VCSEL-SA) is employed as a photonic leaky-integrate-and-fire (LIF) neuron. The temporal coding strategy is employed to transform information into the precise firing time. With the modified supervised learning algorithm, the trained multilayer photonic SNN successfully solves the XOR problem and performs well on the Iris and Wisconsin breast cancer datasets. This indicates that a generalized supervised learning algorithm is realized for multilayer photonic SNN. In addition, network optimization is performed by considering different network sizes.
- Published
- 2022
- Full Text
- View/download PDF
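The abstract above combines a photonic leaky-integrate-and-fire (LIF) neuron with temporal coding, where "information is transformed into the precise firing time". A toy software sketch of that coding principle (plain discrete-time LIF dynamics, not the VCSEL-SA rate-equation model; all constants and the function name are assumptions):

```python
def lif_first_spike(inputs, dt=0.1, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire: return the time of the first output spike,
    or None if the neuron never reaches threshold."""
    v = 0.0
    for step, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leaky integration of the input
        if v >= v_th:
            return step * dt          # the information is the firing time
    return None

# Temporal coding in one line: a stronger drive yields an earlier first spike.
t_strong = lif_first_spike([2.0] * 100)
t_weak = lif_first_spike([0.8] * 100)
print(t_strong < t_weak)  # -> True
```

In a time-to-first-spike code like this, a supervised rule (here, the STDP-plus-gradient-descent combination of the paper) adjusts weights so the output neuron's first spike lands at the desired target time.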
23. Creation through Polychronization
- Author
-
John Matthias
- Subjects
collaboration ,composition ,polychronization ,spike timing dependent plasticity ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 ,Philosophy (General) ,B1-5802 - Abstract
I have recently suggested that some of the processes involved in the collaborative composition of new music could be analogous to several ideas introduced by Izhikevich in his theory of cortical spiking neurons and simple memory, a process which he calls Polychronization. In the Izhikevich model, the evocation of simple memories is achieved by the sequential re-firing of the same Polychronous group of neurons which was initially created in the cerebral cortex by the sensual stimulus. Each firing event within the group is contingent upon the previous firing event and, in particular, contingent upon the timing of the firings, due to a phenomenon known as “Spike Timing Dependent Plasticity.” I argue in this article that the collaborative creation of new music involves contingencies which form a Polychronous group across space and time which helps to create a temporary shared memorial space between the collaborators.
- Published
- 2017
- Full Text
- View/download PDF
24. Active Electroreception in Weakly Electric Fish
- Author
-
Caputi, Angel Ariel
- Published
- 2017
- Full Text
- View/download PDF
25. Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks.
- Author
-
Allred, Jason M. and Roy, Kaushik
- Subjects
CONTINUING education ,MEMORY loss ,DATA distribution ,DOPAMINE receptors - Abstract
Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data. For a deployed system that must learn online from an uncontrolled and unknown environment, the ordering of input samples often fails to meet this criterion, making lifelong learning a difficult challenge. We exploit the locality of the unsupervised Spike Timing Dependent Plasticity (STDP) learning rule to target local representations in a Spiking Neural Network (SNN) to adapt to novel information while protecting essential information in the remainder of the SNN from catastrophic forgetting. In our Controlled Forgetting Networks (CFNs), novel information triggers stimulated firing and heterogeneously modulated plasticity, inspired by biological dopamine signals, to cause rapid and isolated adaptation in the synapses of neurons associated with outlier information. This targeting controls the forgetting process in a way that reduces the degradation of accuracy for older tasks while learning new tasks. Our experimental results on the MNIST dataset validate the capability of CFNs to learn successfully over time from an unknown, changing environment, achieving 95.24% accuracy, which we believe is the best unsupervised accuracy ever achieved by a fixed-size, single-layer SNN on a completely disjoint MNIST dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. STDP-Based Unsupervised Spike Pattern Learning in a Photonic Spiking Neural Network With VCSELs and VCSOAs.
- Author
-
Xiang, Shuiying, Zhang, Yahui, Gong, Junkai, Guo, Xingxing, Lin, Lin, and Hao, Yue
- Abstract
We propose a photonic spiking neural network (SNN) consisting of photonic spiking neurons based on vertical-cavity surface-emitting lasers (VCSELs). The photonic spike timing dependent plasticity (STDP) is implemented in a vertical-cavity semiconductor optical amplifier (VCSOA). A versatile computational model of the photonic SNN is presented based on the rate equation models. Through numerical simulation, a spike pattern learning and recognition task is performed based on the photonic STDP. The results show that the post-synaptic spike timing (PST) is eventually converged iteratively to the first spike timing of the input spike pattern via unsupervised learning. Additionally, the convergence rate of the PST can be accelerated for a photonic SNN with more pre-synaptic neurons. The effects of VCSOA parameters on the convergence performance of the unsupervised spike learning are also considered. To the best of our knowledge, such a versatile computational model of photonic SNN for unsupervised learning and recognition of arbitrary spike pattern has not yet been reported, which would contribute one step forward toward numerical implementation of a large-scale energy-efficient photonic SNN, and hence is interesting for neuromorphic photonic systems and spiking information processing. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. Lightweight spiking neural network training based on spike timing dependent backpropagation.
- Author
-
Gong, Yu, Chen, Tao, Wang, Shu, Duan, Shukai, and Wang, Lidan
- Subjects
- *
ARTIFICIAL neural networks , *BIOENERGETICS , *WEIGHT training , *ENERGY consumption - Abstract
Spiking neural networks are energy efficient and biologically interpretable, communicating through sparse, asynchronous spikes, which makes them suitable for neuromorphic hardware. However, due to the nature of binary weights and spike trains in time-coded binarized spiking neural networks, their forward propagation may cause neurons not to fire spikes, and their backward propagation suffers from non-differentiability. Moreover, the current use of deep and complex network structures generates a large number of redundant parameters. Effective methods are therefore needed to improve energy efficiency without reducing accuracy. We propose a dynamic threshold model that can reduce the number of dead neurons. We combine the backpropagation algorithm with the spike timing dependent plasticity algorithm to avoid the non-differentiability problem. We propose a neuron pruning strategy based on an adaptive firing time threshold. This pruning strategy prunes 267 neurons in a network of 600 neurons, reducing the network size and yielding a more compact network structure. Energy efficiency is improved by 0.55×, while classification accuracy drops by only 1.1%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
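The dynamic-threshold idea in the abstract above can be illustrated with a toy rule (the abstract does not give the paper's actual model; this sketch, its `decay` factor, and all constants are assumptions): a threshold that decays while a neuron stays silent lets persistently weak inputs eventually elicit a spike, so fewer neurons end up "dead".

```python
def simulate(potentials, v_th0=1.0, decay=0.8):
    """Toy dynamic-threshold neuron: emit a spike train for a sequence of
    membrane potentials, lowering the threshold on every silent step."""
    th = v_th0
    spikes = []
    for v in potentials:
        if v >= th:
            spikes.append(1)
            th = v_th0      # spike: reset the threshold to its rest value
        else:
            spikes.append(0)
            th *= decay     # silence: decay the threshold toward the input
    return spikes

# A static threshold of 1.0 would never fire on this weak input;
# the decaying threshold eventually does.
print(simulate([0.6, 0.6, 0.6, 0.6, 0.6]))  # -> [0, 0, 0, 1, 0]
```

A neuron that never crosses threshold contributes nothing to either forward activity or timing-based learning; guaranteeing occasional firing keeps its synapses trainable.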
28. Designing Behaviour in Bio-inspired Robots Using Associative Topologies of Spiking-Neural-Networks
- Author
-
Cristian Jimenez-Romero, David Sousa-Rodrigues, and Jeffrey Johnson
- Subjects
spiking neurons ,spike timing dependent plasticity ,associative learning ,robotics ,agents simulation ,artificial life ,Technology - Abstract
This study explores the design and control of the behaviour of agents and robots using simple circuits of spiking neurons and Spike Timing Dependent Plasticity (STDP) as a mechanism of associative and unsupervised learning. Based on "reward and punishment" classical conditioning, it is demonstrated that these robots learned to identify and avoid obstacles as well as to identify and seek rewarding stimuli. Using the simulation and programming environment NetLogo, a software engine for the Integrate and Fire model was developed, which allowed us to monitor in discrete time steps the dynamics of each single neuron, synapse and spike in the proposed neural networks. These spiking neural networks (SNNs) served as simple brains for the experimental robots. The Lego Mindstorms robot kit was used for the embodiment of the simulated agents. In this paper the topological building blocks are presented, as well as the neural parameters required to reproduce the experiments. The paper summarizes the resulting behaviour and the observed dynamics of the neural circuits. The Internet link to the NetLogo code is included in the annex.
- Published
- 2016
- Full Text
- View/download PDF
29. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights
- Author
-
Amirreza Yousefzadeh, Evangelos Stromatias, Miguel Soto, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco
- Subjects
spiking neural networks ,spike timing dependent plasticity ,stochastic learning ,feature extraction ,neuromorphic systems ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware.
- Published
- 2018
- Full Text
- View/download PDF
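A hedged sketch of what a stochastic STDP rule with 1-bit synaptic weights might look like (the function shape, the update probability `p`, and the sign convention are illustrative assumptions, not the paper's hardware rule): each pre/post pairing either leaves the binary weight untouched or, with some probability, snaps it fully to 1 (causal pair) or 0 (anti-causal pair).

```python
import random

def stochastic_stdp_update(w, dt, p=0.1, rng=random):
    """1-bit weight w in {0, 1}; dt = t_post - t_pre.
    With probability p, a causal pair (dt > 0) sets the bit and an
    anti-causal pair clears it; otherwise the weight is unchanged."""
    if rng.random() < p:
        return 1 if dt > 0 else 0
    return w

# With p = 1.0 the rule becomes deterministic, making both branches visible:
print(stochastic_stdp_update(0, dt=+1.0, p=1.0))  # -> 1 (potentiation)
print(stochastic_stdp_update(1, dt=-1.0, p=1.0))  # -> 0 (depression)
```

The probabilistic update is what lets a population of 1-bit synapses approximate a graded weight on average, which is why such rules remain usable despite the extreme quantization the paper studies.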
30. Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning
- Author
-
Chankyu Lee, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy
- Subjects
spiking neural network ,convolutional neural network ,spike-based learning rule ,spike timing dependent plasticity ,gradient descent backpropagation ,leaky integrate and fire neuron ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. The recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN is comprised of alternating convolutional and pooling layers followed by fully-connected layers, which are populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning followed by fine-tuning the synaptic weights with spike-based supervised gradient descent backpropagation. Our experiments on digit recognition demonstrate that the STDP-based pre-training with gradient-based optimization provides improved robustness, faster (~2.5 ×) training time and better generalization compared with purely gradient-based training without pre-training.
- Published
- 2018
- Full Text
- View/download PDF
31. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights.
- Author
-
Yousefzadeh, Amirreza, Stromatias, Evangelos, Soto, Miguel, Serrano-Gotarredona, Teresa, and Linares-Barranco, Bernabé
- Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating point precision computers provide. However, for dedicated hardware implementations, the precision used not only penalizes directly the required memory resources, but also the computing, communication, and energy resources. When it comes to hardware engineering, a key question is always to find the minimum number of necessary bits to keep the neurocomputational system working satisfactorily. Here we present some techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a Spike-Timing-Dependent-Plasticity (STDP) learning rule in Spiking Neural Networks (SNN). We first illustrate the 1-bit synapses STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four neuron setup. After this, we apply 1-bit STDP learning to the hidden feature extraction layer of a 2-layer system, where for the second (and output) layer we use already reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recorded poker card symbols dataset and a Poisson-distributed spike representation MNIST dataset version. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
32. Creation through Polychronization.
- Author
-
Matthias, John
- Subjects
MUSIC psychology ,CREATION ,NEURONS - Abstract
I have recently suggested that some of the processes involved in the collaborative composition of new music could be analogous to several ideas introduced by Izhikevich in his theory of cortical spiking neurons and simple memory, a process which he calls Polychronization. In the Izhikevich model, the evocation of simple memories is achieved by the sequential re-firing of the same Polychronous group of neurons which was initially created in the cerebral cortex by the sensual stimulus. Each firing event within the group is contingent upon the previous firing event and, in particular, contingent upon the timing of the firings, due to a phenomenon known as "Spike Timing Dependent Plasticity." I argue in this article that the collaborative creation of new music involves contingencies which form a Polychronous group across space and time which helps to create a temporary shared memorial space between the collaborators. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
33. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses
- Author
-
Stefano Ambrogio, Nicola Ciocchini, Mario Laudato, Valerio Milo, Agostino Pirovano, Paolo Fantini, and Daniele Ielmini
- Subjects
Neural Network ,cognitive computing ,Memristor ,pattern recognition ,Spike Timing Dependent Plasticity ,phase change memory ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on set transition, namely crystallization, in the PCM, while depression is achieved via reset or amorphization of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and number of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulations of fully connected neuromorphic networks with 2 or 3 layers with high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors.
- Published
- 2016
- Full Text
- View/download PDF
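The potentiation/depression behavior the abstract maps onto PCM crystallization and amorphization follows the classic pair-based STDP window. A minimal sketch of that window (time constants and amplitudes are generic textbook assumptions, not the device's measured characteristics):

```python
import math

def stdp_dw(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # causal pair: potentiation (set)
    return -a_minus * math.exp(dt / tau)      # anti-causal pair: depression (reset)

print(stdp_dw(10.0) > 0 > stdp_dw(-10.0))       # -> True
print(abs(stdp_dw(10.0)) > abs(stdp_dw(40.0)))  # -> True (closer pairs change more)
```

In the 1T1R scheme, a positive dw would be realized as a partial set (crystallization) pulse and a negative dw as a reset (amorphization) pulse on the PCM cell.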
34. Memristor emulator with spike-timing-dependent-plasticity.
- Author
-
Babacan, Yunus and Kaçar, Fırat
- Subjects
- *
MEMRISTORS , *EMULATION software , *ANALOG multipliers , *TRANSISTORS , *HYSTERESIS loop , *VERY large scale circuit integration - Abstract
A novel fully floating memristor circuit that accounts for the Spike Timing-Dependent Plasticity (STDP) mechanism is presented in this paper. The proposed circuit does not need any multiplier or extra circuit blocks to provide non-linear characteristics; its transistors are operated in the subthreshold region. We show that the memristor circuit exhibits a pinched-hysteresis loop in the V-I plane when driven by a sinusoidal voltage source. We demonstrate that the conductance change of the proposed floating memristor exhibits STDP behavior after the application of a pulse-pair train. Finally, we show that the time constant of the STDP learning window can be controlled using only one parameter. The proposed memristor emulator circuit, which accounts for STDP characteristics, is compatible with VLSI systems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
35. Digital implementation of a virtual insect trained by spike-timing dependent plasticity.
- Author
-
Mazumder, P., Hu, D., Ebong, I., Zhang, X., Xu, Z., and Ferrari, S.
- Subjects
- *
COMPLEMENTARY metal oxide semiconductors , *ARTIFICIAL neural networks , *ALGORITHMS , *VIRTUAL reality , *INTEGRATED circuits - Abstract
Neural network approaches to processing have proven successful and efficient in numerous real-world applications. The most successful of these are implemented in software, but to achieve real-time processing similar to that of biological neural networks, hardware implementations need to be continually improved. This work presents a spiking neural network (SNN) implemented in digital CMOS. The SNN is constructed based on an indirect training algorithm that utilizes spike-timing dependent plasticity (STDP). The SNN is validated by using its outputs to control the motion of a virtual insect. The indirect training algorithm is used to train the SNN to navigate through a terrain with obstacles. The indirect approach is more appropriate for synaptic training in nanoscale CMOS implementations, since it is increasingly difficult to perfectly control matching in CMOS circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
36. A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning for neuromorphic systems
- Author
-
Zhongqiang Wang, Stefano Ambrogio, Simone Balatti, and Daniele Ielmini
- Subjects
Neural Network ,cognitive computing ,Memristor ,pattern recognition ,Spike Timing Dependent Plasticity ,neuromorphic circuits ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Resistive (or memristive) switching devices based on metal oxides find applications in memory, logic and neuromorphic computing systems. Their small area, low power operation, and high functionality meet the challenges of brain-inspired computing aiming at achieving a huge density of active connections (synapses) with low operation power. This work presents a new artificial synapse scheme, consisting of a memristive switch connected to 2 transistors responsible for gating the communication and learning operations. Spike timing dependent plasticity (STDP) is achieved through appropriate shaping of the pre-synaptic and the post synaptic spikes. Experiments with integrated artificial synapses demonstrate STDP with stochastic behavior due to (i) the natural variability of set/reset processes in the nanoscale switch, and (ii) the different response of the switch to a given stimulus depending on the initial state. Experimental results are confirmed by model-based simulations of the memristive switching. Finally, system-level simulations of a 2-layer neural network and a simplified STDP model show random learning and recognition of patterns.
- Published
- 2015
- Full Text
- View/download PDF
37. Unsupervised Learning by Spike Timing Dependent Plasticity in Phase Change Memory (PCM) Synapses.
- Author
-
Ambrogio, Stefano, Ciocchini, Nicola, Laudato, Mario, Milo, Valerio, Pirovano, Agostino, Fantini, Paolo, and Ielmini, Daniele
- Subjects
PHASE change memory ,SYNAPSES ,NEUROPLASTICITY - Abstract
We present a novel one-transistor/one-resistor (1T1R) synapse for neuromorphic networks, based on phase change memory (PCM) technology. The synapse is capable of spike-timing dependent plasticity (STDP), where gradual potentiation relies on set transition, namely crystallization, in the PCM, while depression is achieved via reset or amorphization of a chalcogenide active volume. STDP characteristics are demonstrated by experiments under variable initial conditions and number of pulses. Finally, we support the applicability of the 1T1R synapse for learning and recognition of visual patterns by simulations of fully connected neuromorphic networks with 2 or 3 layers with high recognition efficiency. The proposed scheme provides a feasible low-power solution for on-line unsupervised machine learning in smart reconfigurable sensors. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
38. Repetitive magnetic stimulation induces plasticity of excitatory postsynapses on proximal dendrites of cultured mouse CA1 pyramidal neurons.
- Author
-
Lenz, Maximilian, Platschek, Steffen, Priesemann, Viola, Becker, Denise, Willems, Laurent, Ziemann, Ulf, Deller, Thomas, Müller-Dahlhaus, Florian, Jedlicka, Peter, and Vlachos, Andreas
- Subjects
- *
NEUROPLASTICITY , *HIPPOCAMPUS (Brain) , *TRANSCRANIAL magnetic stimulation , *EXCITATORY postsynaptic potential , *IMMUNOSTAINING , *LABORATORY mice - Abstract
Repetitive transcranial magnetic stimulation (rTMS) of the human brain can lead to long-lasting changes in cortical excitability. However, the cellular and molecular mechanisms which underlie rTMS-induced plasticity remain incompletely understood. Here, we used repetitive magnetic stimulation (rMS) of mouse entorhino-hippocampal slice cultures to study rMS-induced plasticity of excitatory postsynapses. By employing whole-cell patch-clamp recordings of CA1 pyramidal neurons, local electrical stimulations, immunostainings for the glutamate receptor subunit GluA1 and compartmental modeling, we found evidence for a preferential potentiation of excitatory synapses on proximal dendrites of CA1 neurons (2-4 h after stimulation). This rMS-induced synaptic potentiation required the activation of voltage-gated sodium channels, L-type voltage-gated calcium channels and N-methyl- d-aspartate-receptors. In view of these findings we propose a cellular model for the preferential strengthening of excitatory synapses on proximal dendrites following rMS in vitro, which is based on a cooperative effect of synaptic glutamatergic transmission and postsynaptic depolarization. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. An Artificial Neural Network Based on the Architecture of the Cerebellum for Behavior Learning.
- Author
-
Iwadate, Kenji, Suzuki, Ikuo, Watanabe, Michiko, Yamamoto, Masahito, and Furukawa, Masashi
- Abstract
In the last decade, artificial intelligence (AI) has come to pervade every aspect of our lives. However, there is a gap between AI-based machine behavior and human behavior in natural communication. The behavior of most AI is determined by a task list generated by engineers, but to obtain high-level intelligence, AI needs the ability to cluster tasks from circumstances and learn a strategy for achieving each task. In this study, we focus on the human brain architecture that gives it the ability to self-organize and generalize sensory information, and we propose an Artificial Neural Network (ANN) model based on that architecture. We describe a cerebellum-based ANN model (C-ANN) and verify its capacity to learn through the phototaxic behavior acquisition of a simple two-wheeled robot. As a result, the controller of the robot is self-organized to be simple and able to achieve positive phototaxis. This result suggests that the proposed C-ANN model has the capability of supervised learning. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
40. Computer Simulation of Vestibuloocular Reflex Motor Learning Using a Realistic Cerebellar Cortical Neuronal Network Model.
- Author
-
Inagaki, Kayichiro, Hirata, Yutaka, Blazquez, Pablo M., and Highstein, Stephen M.
- Abstract
The vestibuloocular reflex (VOR) is under adaptive control to stabilize our vision during head movements. It has been suggested that acute VOR motor learning requires long-term depression (LTD) and potentiation (LTP) at the parallel fiber – Purkinje cell synapses in the cerebellar flocculus. We simulated VOR motor learning based upon LTD and LTP using a realistic cerebellar cortical neuronal network model. In this model, LTD and LTP were induced at the parallel fiber – Purkinje cell synapses by a spike timing dependent plasticity rule that considers the timing of spike occurrences in the climbing fiber and the parallel fibers innervating the same Purkinje cell. The model successfully reproduced the changes in eye movement and Purkinje cell simple spike firing modulation during VOR in the dark after low- and high-gain VOR motor learning. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
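The STDP rule described in this abstract depends on the relative timing of climbing-fiber and parallel-fiber spikes onto the same Purkinje cell. As a minimal sketch (not the paper's model): at parallel-fiber/Purkinje-cell synapses the sign convention is commonly taken as inverted relative to cortical STDP, so a parallel-fiber spike shortly before a climbing-fiber spike induces LTD. All parameter values below are illustrative assumptions.

```python
import math

def stdp_update(w, t_pf, t_cf, a_ltd=0.01, a_ltp=0.005, tau=50.0):
    """Pair-based STDP sketch for a parallel-fibre/Purkinje-cell synapse.

    t_pf, t_cf: spike times (ms) of the parallel fibre and climbing fibre.
    A PF spike preceding a CF spike induces depression (LTD), the reverse
    ordering induces potentiation (LTP); both decay exponentially with
    the timing difference. Amplitudes and tau are illustrative.
    """
    dt = t_cf - t_pf  # positive: PF spike preceded CF spike
    if dt >= 0:
        w -= a_ltd * math.exp(-dt / tau)   # depression (LTD)
    else:
        w += a_ltp * math.exp(dt / tau)    # potentiation (LTP)
    return min(max(w, 0.0), 1.0)           # keep weight in [0, 1]
```

For example, a PF spike at 10 ms followed by a CF spike at 15 ms weakens the synapse, while the reverse ordering strengthens it.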
41. Rapid, parallel path planning by propagating wavefronts of spiking neural activity
- Author
-
Filip Jan Ponulak and John J Hopfield
- Subjects
Hippocampus ,navigation ,spiking neurons ,parallel processing ,Spike Timing Dependent Plasticity ,Wave propagation ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Efficient path planning and navigation are critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can rapidly be solved in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets or learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time, it is useful for neuromorphic implementations, where the parallelism of information processing proposed here can be fully harnessed in hardware.
- Published
- 2013
- Full Text
- View/download PDF
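The core idea of this abstract — a single expanding wavefront whose passage leaves behind a vector field pointing back toward its origin — can be illustrated with a plain breadth-first wavefront on a grid. This is a deliberately simplified sketch, not the paper's spiking implementation: the `wavefront_field` function and the grid encoding are assumptions for illustration only.

```python
from collections import deque

def wavefront_field(grid, target):
    """Expand one wavefront from `target` through free cells (value 0);
    each newly reached cell stores the direction back toward the cell it
    was reached from, yielding a vector field that guides an agent to
    the target along a shortest path. Cells with value 1 are obstacles.
    """
    rows, cols = len(grid), len(grid[0])
    field = {target: (0, 0)}          # target points at itself
    frontier = deque([target])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in field:
                field[(nr, nc)] = (-dr, -dc)  # step back toward the wave source
                frontier.append((nr, nc))
    return field
```

An agent placed anywhere simply follows the stored directions cell by cell until it reaches the target, mirroring how the paper's simulated animal follows the vector field of synaptic change.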
42. Étude de la plasticité pour des neurones à décharge en interaction
- Author
-
Pascal HELSON, Université Côte d'Azur (UCA), TO Simulate and CAlibrate stochastic models (TOSCA), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut Élie Cartan de Lorraine (IECL), Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS), Université Côte d'Azur, Etienne Tanré, Romain Veltz, Simuler et calibrer des modèles stochastiques (TOSCA-POST), and Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)
- Subjects
Plasticité Synaptique ,Temps de Mémoire ,[SCCO.NEUR]Cognitive science/Neuroscience ,Recurrent Neural Network ,Memory Lifetime ,Champ Moyen ,STDP ,Mean Field ,Processus de Markov Déterministes par Morceaux ,Piecewise Deterministic Markov Process ,Multi-échelle ,[SDV.NEU]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC] ,Réseaux de Neurones Récurrents ,[MATH]Mathematics [math] ,Spike Timing Dependent Plasticity ,Multi-scale ,Synaptic Plasticity - Abstract
In this thesis, we study a phenomenon that may be responsible for our memory capacity: synaptic plasticity, the modification over time of the links between neurons. This phenomenon is stochastic: it is the result of a series of diverse and numerous chemical processes. The aim of the thesis is to propose a model of plasticity for interacting spiking neurons. The main difficulty is to find a model that satisfies the following conditions: it must be both consistent with the biological results of the field and simple enough to be studied mathematically and simulated with a large number of neurons. As a first step, starting from a rather simple model of plasticity, we study the learning of external signals by a neural network as well as the time taken to forget this signal when the network is subjected to other signals (noise). The mathematical analysis allows us to control the probability of misevaluating the signal. From this, we deduce explicit bounds on the time during which a given signal is kept in memory. Next, we propose a model based on stochastic plasticity rules that depend on the occurrence times of the neural electrical discharges (Spike Timing Dependent Plasticity, STDP). This model is described by a Piecewise Deterministic Markov Process (PDMP). The long-time behaviour of such a neural network is studied using a slow-fast analysis. In particular, sufficient conditions are established under which the process associated with the synaptic weights is ergodic. Finally, we make the link between two levels of modelling: the microscopic and the macroscopic approaches. Starting from the dynamics presented at the microscopic level (a neuron model and its interaction with other neurons), we derive an asymptotic dynamics which represents the evolution of a typical neuron and its incoming synaptic weights: this is the mean-field analysis of the model. We thus condense the information on the dynamics of the weights and that of the neurons into a single equation, that of a typical neuron.
- Published
- 2021
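The thesis models plasticity with stochastic STDP rules under which weights jump at spike times, giving a Markov jump process for each synapse. A minimal sketch of that idea (the rule, probabilities and integer weight range below are illustrative assumptions, not the thesis's actual model):

```python
import random

def stochastic_stdp_step(w, pre_then_post, p_pot=0.3, p_dep=0.3,
                         w_min=0, w_max=10, rng=random):
    """At each pre/post spike pairing, the integer-valued synaptic
    weight jumps by +1 (potentiation) or -1 (depression) with a small
    probability, so over many pairings the weight follows a Markov
    jump process confined to {w_min, ..., w_max}.
    """
    if pre_then_post and rng.random() < p_pot:
        w = min(w + 1, w_max)          # pre-before-post: chance of potentiation
    elif not pre_then_post and rng.random() < p_dep:
        w = max(w - 1, w_min)          # post-before-pre: chance of depression
    return w
```

Because the state space is finite and jumps are probabilistic, the weight process admits a stationary distribution, which is the kind of ergodicity question the slow-fast analysis in the thesis addresses.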
43. A 2-transistor/1-resistor artificial synapse capable of communication and stochastic learning in neuromorphic systems.
- Author
-
Zhongqiang Wang, Ambrogio, Stefano, Balatti, Simone, and Ielmini, Daniele
- Subjects
ISOMETRIC exercise ,BRAIN imaging ,PHYSIOLOGICAL aspects of cognition ,SIGNALING (Psychology) ,COGNITIVE structures - Abstract
Resistive (or memristive) switching devices based on metal oxides find applications in memory, logic and neuromorphic computing systems. Their small area, low power operation, and high functionality meet the challenges of brain-inspired computing aiming at achieving a huge density of active connections (synapses) with low operation power. This work presents a new artificial synapse scheme, consisting of a memristive switch connected to two transistors responsible for gating the communication and learning operations. Spike timing dependent plasticity (STDP) is achieved through appropriate shaping of the pre-synaptic and post-synaptic spikes. Experiments with integrated artificial synapses demonstrate STDP with stochastic behavior due to (i) the natural variability of set/reset processes in the nanoscale switch, and (ii) the different response of the switch to a given stimulus depending on the initial state. Experimental results are confirmed by model-based simulations of the memristive switching. Finally, system-level simulations of a 2-layer neural network and a simplified STDP model show random learning and recognition of patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
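The stochastic STDP described in this abstract — switching probability set by the overlap of shaped pre- and post-synaptic pulses, with abrupt set/reset transitions — can be sketched in a few lines. This is an illustrative toy model, not the paper's device physics; the function name, conductance levels and timing constant are assumptions.

```python
import math
import random

def memristive_stdp(g, dt, g_on=1.0, g_off=0.1, sigma=20.0, rng=random):
    """Sketch of stochastic switching in a memristive synapse: the
    overlap of shaped pre/post spikes makes the switching probability
    decay with the spike-timing difference |dt| (ms); a successful
    event sets (potentiates) or resets (depresses) the device
    depending on the sign of dt. Values are illustrative.
    """
    p_switch = math.exp(-abs(dt) / sigma)  # closer spikes -> likelier switch
    if rng.random() < p_switch:
        g = g_on if dt > 0 else g_off      # abrupt set/reset transition
    return g
```

Averaged over many trials, this probabilistic all-or-nothing switching reproduces a smooth STDP curve, which is the sense in which the paper's stochastic synapse supports learning.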
44. A model of hippocampal spiking responses to items during learning of a context-dependent task.
- Author
-
Raudies, Florian and Hasselmo, Michael E.
- Subjects
HIPPOCAMPUS (Brain) ,CEREBRAL cortex ,BIOLOGICAL neural networks ,NEURAL circuitry ,NEUROPLASTICITY - Abstract
Single-unit recordings in the rat hippocampus have demonstrated shifts in the specificity of spiking activity during learning of a contextual item-reward association task. In this task, rats received reward for responding to different items depending on the context in which an item appeared, but not on the location at which it appeared. Initially, neurons in the rat hippocampus primarily fired based on place, but as the rat learned the task this firing became more selective for items. We simulated this effect using a simple circuit model with discrete inputs driving spiking activity representing place and item, followed sequentially by a discrete representation of the motor actions involving a response to an item (digging for food) or movement to a different item (moving to a different pot for food). We implemented spiking replay in the network, representing neural activity observed during sharp-wave ripple events, and modified synaptic connections based on a simple representation of spike-timing-dependent synaptic plasticity. This simple network consistently learned the context-dependent responses and transitioned from dominant coding of place to a gradual increase in specificity for items, consistent with analysis of the experimental data. In addition, the model showed an increase in specificity toward context. The increase of selectivity in the model is accompanied by an increase in the binariness of the synaptic weights for cells that are part of the functional network. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
45. A compact spike-timing-dependent-plasticity circuit for floating gate weight implementation.
- Author
-
Smith, A.W., McDaid, L.J., and Hall, S.
- Subjects
- *
PLASTICITY measurements , *FLOATING (Fluid mechanics) , *ARTIFICIAL neural networks , *STATISTICAL correlation , *CAPACITORS - Abstract
Abstract: Spike timing dependent plasticity (STDP) forms the basis of learning within neural networks. STDP allows for the modification of synaptic weights based upon the relative timing of pre- and post-synaptic spikes. A compact circuit is presented which can implement STDP, including the critical plasticity window that determines synaptic modification. A physical model to predict the time window for plasticity to occur is formulated, and the effects of process variations on the window are analyzed. The STDP circuit is implemented using two dedicated circuit blocks, one for potentiation and one for depression, where each block consists of 4 transistors and a polysilicon capacitor. SpectreS simulations of the back-annotated layout of the circuit and experimental results indicate that STDP with biologically plausible critical timing windows over the range from 10 µs to 100 ms can be implemented. A floating gate weight storage capability with drive circuits is also presented, and a detailed analysis correlating weight changes with charging time is given. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
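The critical plasticity window this circuit implements — no weight change once the pre/post timing difference falls outside a finite window — can be written down as a simple window function. A minimal sketch with illustrative constants (the hard cutoff `t_crit` models the circuit's finite window; the exponential shape is the standard STDP form, not taken from the paper):

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau=0.01, t_crit=0.1):
    """Weight change as a function of spike-timing difference dt
    (seconds, post minus pre). Pairings with |dt| beyond the critical
    window t_crit produce no change; inside it, the change decays
    exponentially with |dt|. Constants are illustrative.
    """
    if abs(dt) > t_crit:
        return 0.0                            # outside the plasticity window
    if dt > 0:                                # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)      # post before pre -> depression
```

In the circuit, `t_crit` corresponds to the tunable window the authors report spanning 10 µs to 100 ms.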
46. A neuromorphic VLSI design for spike timing and rate based synaptic plasticity.
- Author
-
Rahimi Azghadi, Mostafa, Al-Sarawi, Said, Abbott, Derek, and Iannella, Nicolangelo
- Subjects
- *
VERY large scale circuit integration , *MATERIAL plasticity , *BIOLOGY experiments , *ELECTRIC circuits , *COMPLEMENTARY metal oxide semiconductors , *COMPUTER simulation , *MONTE Carlo method - Abstract
Abstract: Triplet-based Spike Timing Dependent Plasticity (TSTDP) is a powerful synaptic plasticity rule that acts beyond conventional pair-based STDP (PSTDP). TSTDP is capable of reproducing the outcomes of a variety of biological experiments that the PSTDP rule fails to reproduce. Additionally, it has been shown that the behaviour inherent to the spike rate-based Bienenstock–Cooper–Munro (BCM) synaptic plasticity rule can also emerge from the TSTDP rule. This paper proposes an analogue implementation of the TSTDP rule. The proposed VLSI circuit has been designed using the AMS 0.35 μm CMOS process and simulated using design kits for Synopsys and Cadence tools. Simulation results demonstrate how well the proposed circuit can alter synaptic weights according to the timing differences among a set of different patterns of spikes. Furthermore, the circuit is shown to give rise to a BCM-like learning rule, which is a rate-based rule. To mimic an implementation environment, a 1000-run Monte Carlo (MC) analysis was conducted on the proposed circuit. The MC simulation analysis and the simulation results from fine-tuned circuits show that it is possible to mitigate the effect of process variations in the proof-of-concept circuit; however, a practical variation-aware design technique is required to guarantee high circuit performance in a large-scale neural network. We believe that the proposed design can play a significant role in future VLSI implementations of both spike timing and rate based neuromorphic learning systems. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
48. Nanoscale Electronic Synapses Using Phase Change Devices.
- Author
-
JACKSON, BRYAN L., RAJENDRAN, BIPIN, CORRADO, GREGORY S., BREITWISCH, MATTHEW, BURR, GEOFFREY W., CHEEK, ROGER, GOPALAKRISHNAN, KAILASH, RAOUX, SIMONE, RETTNER, CHARLES T., PADILLA, ALVARO, SCHROTT, ALEX G., SHENOY, ROHIT S., KURDI, BÜLENT N., LAM, CHUNG H., and MODHA, DHARMENDRA S.
- Subjects
NANOELECTROMECHANICAL systems ,COMPUTER storage capacity ,DIGITAL computer simulation ,COMPUTER architecture ,COMPLEMENTARY metal oxide semiconductors ,SYNAPSES - Abstract
The memory capacity, computational power, communication bandwidth, energy consumption, and physical size of the brain all tend to scale with the number of synapses, which outnumber neurons by a factor of 10,000. Although progress in cortical simulations using modern digital computers has been rapid, the essential disparity between the classical von Neumann computer architecture and the computational fabric of the nervous system makes large-scale simulations expensive, power hungry, and time consuming. Over the last three decades, CMOS-based neuromorphic implementations of "electronic cortex" have emerged as an energy efficient alternative for modeling neuronal behavior. However, the key ingredient for electronic implementation of any self-learning system-programmable, plastic Hebbian synapses scalable to biological densities-has remained elusive. We demonstrate the viability of implementing such electronic synapses using nanoscale phase change devices. We introduce novel programming schemes for modulation of device conductance to closely mimic the phenomenon of Spike Timing Dependent Plasticity (STDP) observed biologically, and verify through simulations that such plastic phase change devices should support simple correlative learning in networks of spiking neurons. Our devices, when arranged in a crossbar array architecture, could enable the development of synaptronic systems that approach the density (~10^11 synapses per sq cm) and energy efficiency (consuming ~1 pJ per synaptic programming event) of the human brain. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
49. Dynamical Mean-Field Equations for a Neural Network with Spike Timing Dependent Plasticity.
- Author
-
Mayer, Jörg, Ngo, Hong-Viet, and Schuster, Heinz
- Subjects
- *
NEURAL circuitry , *PHENOTYPIC plasticity , *MEAN field theory , *EQUATIONS , *RANDOM noise theory - Abstract
We study the discrete dynamics of a fully connected network of threshold elements interacting via dynamically evolving synapses displaying spike timing dependent plasticity. Dynamical mean-field equations, which become exact in the thermodynamic limit, are derived to study the behavior of the system driven with uncorrelated and correlated Gaussian noise input. We use correlated noise to verify that our model accounts for the fact that correlated noise provides a stronger drive for synaptic modification. Further, we find that stochastically independent input leads to a noise-dependent transition to the coherent state in which all neurons fire together; most notably, there exists an optimal noise level for the enhancement of synaptic potentiation in our model. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
50. SWAT: A Spiking Neural Network Training Algorithm for Classification Problems.
- Author
-
Wade, John J., McDaid, Liam J., Santos, Jose A., and Sayers, Heather M.
- Abstract
This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock–Cooper–Munro (BCM) learning rule with spike timing dependent plasticity (STDP). The STDP/BCM rule yields a unimodal weight distribution, where the height of the plasticity window associated with STDP is modulated, causing stability after a period of training. The SNN uses a single training neuron in the training phase, where data associated with all classes is passed to this neuron. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses which create a frequency routing capability, allowing the information presented to the network to be routed to different hidden layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. Results presented show that the proposed training algorithm exhibits a convergence accuracy of 95.5% and 96.2% for the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% for the testing sets; noise experiments show that SWAT has good generalization capability. SWAT is also benchmarked using an isolated digit automatic speech recognition (ASR) system where a subset of the TI46 speech corpus is used. Results show that with SWAT as the classifier, the ASR system provides an accuracy of 98.875% for training and 95.25% for testing. [ABSTRACT FROM PUBLISHER]
- Published
- 2010
- Full Text
- View/download PDF
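The key mechanism in this abstract — the height of the STDP plasticity window modulated by a BCM-style sliding threshold — can be sketched as follows. The class name, constants and the quadratic threshold form are illustrative assumptions in the spirit of the BCM rule, not SWAT's exact formulation:

```python
class BCMModulatedSTDP:
    """Sketch of an STDP window whose height follows a BCM-like rule:
    a leaky average of postsynaptic activity drives a sliding threshold
    theta; sustained high activity raises theta, flipping the window
    height from potentiating to depressing, which stabilises the
    weights after a period of training.
    """
    def __init__(self, c0=10.0, tau=100.0):
        self.c0, self.tau = c0, tau
        self.c_avg = 0.0      # low-pass filtered postsynaptic activity

    def window_height(self, c):
        # update the running average, then evaluate the BCM-shaped
        # modulation c * (c - theta) with theta = c_avg^2 / c0
        self.c_avg += (c - self.c_avg) / self.tau
        theta = self.c_avg ** 2 / self.c0
        return c * (c - theta)
```

At low average activity the height is positive (net potentiation); once activity stays high long enough for theta to overtake it, the height turns negative, producing the stable unimodal weight distribution the paper reports.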