744 results for "spiking neurons"
Search Results
2. Editorial: 15 years of frontiers in computational neuroscience - computational perception and cognition
- Author
-
Nicolangelo Iannella
- Subjects
perception ,cognition ,spiking neurons ,neural fields ,memory capacity ,manifold untangling ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Published
- 2025
- Full Text
- View/download PDF
3. Editorial: 15 years of frontiers in computational neuroscience - computational perception and cognition.
- Author
-
Iannella, Nicolangelo
- Subjects
RECOLLECTION (Psychology) ,OBJECT recognition (Computer vision) ,MEMORY ,MACHINE learning ,COGNITIVE ability ,COMPUTATIONAL neuroscience ,NEURAL codes ,INTERNEURONS - Abstract
The editorial in Frontiers in Computational Neuroscience discusses the complex nature of cognition and perception in the brain, highlighting the role of computational models in understanding neural processes. Various studies presented in the research topic focus on aspects such as memory capacity, information coding, and neural activity in response to external stimuli. The research aims to provide insights into the neural correlates of cognition and perception, with potential applications in fields like medicine and engineering. The work collected in this research topic serves as a foundation for future studies exploring these topics and their practical implications. [Extracted from the article]
- Published
- 2025
- Full Text
- View/download PDF
4. SNNtrainer3D: Training Spiking Neural Networks Using a User-Friendly Application with 3D Architecture Visualization Capabilities.
- Author
-
Jurj, Sorin Liviu, Nouri, Sina Banasaz, and Strutwolf, Jörg
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,THREE-dimensional imaging ,BIOENERGETICS ,APPLICATION software - Abstract
Spiking Neural Networks have gained significant attention due to their potential for energy efficiency and biological plausibility. However, the reduced number of user-friendly tools for designing, training, and visualizing Spiking Neural Networks hinders widespread adoption. This paper presents the SNNtrainer3D v1.0.0, a novel software application that addresses these challenges. The application provides an intuitive interface for designing Spiking Neural Networks architectures, with features such as dynamic architecture editing, allowing users to add, remove, and edit hidden layers in real-time. A key innovation is the integration of Three.js for three-dimensional visualization of the network structure, enabling users to inspect connections and weights and facilitating a deeper understanding of the model's behavior. The application supports training on the Modified National Institute of Standards and Technology dataset and allows the downloading of trained weights for further use. Moreover, it lays the groundwork for future integration with physical memristor technology, positioning it as a crucial tool for advancing neuromorphic computing research. The advantages of the development process, technology stack, and visualization are discussed. The SNNtrainer3D represents a significant step in making Spiking Neural Networks more accessible, understandable, and easier for Artificial Intelligence researchers and practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Neuromorphic Sensor Based on Force-Sensing Resistors.
- Author
-
Barleanu, Alexandru and Hulea, Mircea
- Subjects
COMPRESSION loads, DETECTORS, SHAPE memory alloys - Abstract
This work introduces a neuromorphic sensor (NS) based on force-sensing resistors (FSR) and spiking neurons for robotic systems. The proposed sensor integrates the FSR in the schematic of the spiking neuron in order to make the sensor generate spikes with a frequency that depends on the applied force. The performance of the proposed sensor is evaluated in the control of a SMA-actuated robotic finger by monitoring the force during a steady state when the finger pushes on a tweezer. For comparison purposes, we performed a similar evaluation when the SNN received input from a widely used compression load cell (CLC). The results show that the proposed FSR-based neuromorphic sensor has very good sensitivity to low forces and the function between the spiking rate and the applied force is continuous, with good variation range. However, when compared to the CLC, the response of the NS follows a logarithmic-like function with improved sensitivity for small forces. In addition, the power consumption of NS is 128 µW that is 270 times lower than that of the CLC which needs 3.5 mW to operate. These characteristics make the neuromorphic sensor with FSR suitable for bioinspired control of humanoid robotics, representing a low-power and low-cost alternative to the widely used sensors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
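The force-to-spike-rate behaviour described in entry 5 can be pictured with a generic leaky integrate-and-fire (LIF) model. The sketch below is a minimal numerical illustration in Python, not the authors' analog circuit; the `force_to_current` mapping and all constants are made-up placeholders standing in for the FSR voltage divider.

```python
def lif_spike_rate(i_in, dt=1e-4, t_sim=1.0, tau=0.02, r_m=1.0,
                   v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron driven by a constant
    input current and return its mean firing rate in spikes per second."""
    v = v_rest
    n_spikes = 0
    for _ in range(int(t_sim / dt)):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:        # threshold crossing -> emit a spike
            n_spikes += 1
            v = v_reset          # reset the membrane potential
    return n_spikes / t_sim

def force_to_current(force_newton, gain=5.0):
    """Hypothetical force-to-current mapping; stands in for the FSR divider."""
    return gain * force_newton

for force in (0.1, 0.5, 1.0, 2.0):
    rate = lif_spike_rate(force_to_current(force))
    print(f"{force:.1f} N -> {rate:.0f} spikes/s")
```

Below the rheobase (here, currents at or under 1.0) the neuron stays silent; above it, the firing rate grows with the applied force, which is the qualitative behaviour such a sensor exploits.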
6. Axonal Myelination as a Mechanism for Unsupervised Learning in Spiking Neural Networks
- Author
-
Chaplinskaia, Nadezhda, Bazenkov, Nikolay, Kacprzyk, Janusz, Series Editor, Samsonovich, Alexei V., editor, and Liu, Tingting, editor
- Published
- 2024
- Full Text
- View/download PDF
7. An Extensive Review of the Supervised Learning Algorithms for Spiking Neural Networks
- Author
-
Hussain, Irshed, Thounaojam, Dalton Meitei, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Borah, Malaya Dutta, editor, Laiphrakpam, Dolendro Singh, editor, Auluck, Nitin, editor, and Balas, Valentina Emilia, editor
- Published
- 2024
- Full Text
- View/download PDF
8. A mean-field model of gamma-frequency oscillations in networks of excitatory and inhibitory neurons.
- Author
-
Tahvili, Farzin and Destexhe, Alain
- Abstract
Gamma oscillations are widely seen in the cerebral cortex in different states of the wake-sleep cycle and are thought to play a role in sensory processing and cognition. Here, we study the emergence of gamma oscillations at two levels, in networks of spiking neurons, and a mean-field model. At the network level, we consider two different mechanisms to generate gamma oscillations and show that they are best seen if one takes into account the synaptic delay between neurons. At the mean-field level, we show that, by introducing delays, the mean-field can also produce gamma oscillations. The mean-field matches the mean activity of excitatory and inhibitory populations of the spiking network, as well as their oscillation frequencies, for both mechanisms. This mean-field model of gamma oscillations should be a useful tool to investigate large-scale interactions through gamma oscillations in the brain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
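Entry 8 stresses that a synaptic delay is what lets both the spiking network and its mean-field description oscillate in the gamma band. The toy model below is a delayed excitatory-inhibitory rate loop written for illustration only; it is not the authors' mean-field equations, and every parameter value is invented.

```python
import numpy as np

def f(x):
    """Sigmoidal population transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters (not taken from the paper).
dt, t_sim = 1e-4, 2.0              # s
delay = 3e-3                       # 3 ms synaptic delay on each connection
tau = 10e-3                        # population time constant (s)
w_ei, w_ie = 20.0, 20.0            # I->E and E->I coupling strengths
i_ext, theta_i = 10.0, 10.0        # drive to E, threshold of I

n = int(t_sim / dt)
d = int(delay / dt)
r_e = np.full(n, 0.6)              # slightly perturbed initial history
r_i = np.full(n, 0.5)

for t in range(d, n - 1):
    r_e[t + 1] = r_e[t] + dt / tau * (-r_e[t] + f(i_ext - w_ei * r_i[t - d]))
    r_i[t + 1] = r_i[t] + dt / tau * (-r_i[t] + f(w_ie * r_e[t - d] - theta_i))

# Estimate the oscillation frequency from upward mean-crossings of r_e,
# discarding the first half of the simulation as transient.
x = r_e[n // 2:]
ups = np.sum((x[:-1] < x.mean()) & (x[1:] >= x.mean()))
print(f"estimated frequency: {ups / (t_sim / 2):.1f} Hz")
```

With the delay set to zero this loop relaxes to a steady state; with a few milliseconds of delay the fixed point destabilizes and the excitatory and inhibitory rates oscillate, which is the qualitative point the entry makes for both the spiking and the mean-field level.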
9. Neuromorphic Computing-Based Model for Short-Term Forecasting of Global Horizontal Irradiance in Saudi Arabia
- Author
-
Abdulelah Alharbi, Ubaid Ahmed, Talal Alharbi, and Anzar Mahmood
- Subjects
Solar forecasting ,solar and wind energy ,spiking neurons ,deep-learning ,integrated method ,GHI ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
To tackle environmental and increasing energy demand issues, different energy transition options have been investigated. Solar power has vast resources and is environment-friendly, making it a possible alternative to fossil fuels. However, its integration into the power system poses many challenges because of its uncertain variability, and different deep-learning techniques have been put forward to address its intermittent nature. However, these techniques pose challenges related to high computational overhead and power requirements. Therefore, we present a deep-learning technique leveraging Leaky Integrate-and-Fire (LIF) spiking neurons for short-term forecasting of Global Horizontal Irradiance (GHI). The proposed NeuroSpike network consists of a Recurrent Neural Network (RNN) layer, initialized with LIF spiking neurons and stacked with the conventional Long Short-Term Memory (LSTM) layer. The historical GHI data from three distinct locations in the Kingdom of Saudi Arabia (KSA) is used in this study. In the data preprocessing step, a Recursive Feature Elimination with Categorical Boosting (RFE-CatBoost) algorithm is used to select the appropriate features that inherently describe the dataset patterns. The proposed NeuroSpike network is trained on the selected features, and its forecast performance is compared with different benchmark techniques reported in the literature. The results demonstrate that the NeuroSpike network has lower forecasting errors than the techniques compared. Moreover, with RFE-CatBoost algorithm-based feature selection, an improvement of 30.33%, 43.12%, and 23.4% is recorded in the Mean Absolute Error (MAE) of the NeuroSpike network for the datasets of Al-Jouf, Qassim and K.A.CARE sites, respectively. The findings illustrate that the NeuroSpike network’s training becomes more effective and computationally less demanding due to the integration of spiking neurons and the proposed RFE-CatBoost feature selection technique.
- Published
- 2024
- Full Text
- View/download PDF
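Entry 9's NeuroSpike network stacks a recurrent layer of LIF spiking neurons with a conventional LSTM. The PyTorch sketch below shows that general layer arrangement under stated assumptions: the class names, layer sizes, and the hard-threshold spike (which would need a surrogate gradient to train end-to-end) are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LIFRecurrentLayer(nn.Module):
    """Recurrent layer of leaky integrate-and-fire units (sketch)."""
    def __init__(self, n_in, n_hidden, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_hidden)
        self.fc_rec = nn.Linear(n_hidden, n_hidden, bias=False)
        self.beta, self.threshold = beta, threshold

    def forward(self, x):                      # x: (batch, time, n_in)
        batch, steps, _ = x.shape
        v = torch.zeros(batch, self.fc_rec.in_features, device=x.device)
        s = torch.zeros_like(v)
        spikes = []
        for t in range(steps):
            v = self.beta * v + self.fc_in(x[:, t]) + self.fc_rec(s)
            s = (v >= self.threshold).float()  # hard threshold (no surrogate here)
            v = v - s * self.threshold         # soft reset after a spike
            spikes.append(s)
        return torch.stack(spikes, dim=1)      # (batch, time, n_hidden)

class SpikingLSTMForecaster(nn.Module):
    """Sketch of a spiking recurrent layer stacked with a conventional LSTM."""
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.spiking = LIFRecurrentLayer(n_features, n_hidden)
        self.lstm = nn.LSTM(n_hidden, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)     # next-step irradiance estimate

    def forward(self, x):
        s = self.spiking(x)
        out, _ = self.lstm(s)
        return self.head(out[:, -1])           # forecast from the last time step

model = SpikingLSTMForecaster(n_features=5)
x = torch.rand(8, 24, 5)                       # a batch of 24-step feature windows
print(model(x).shape)                          # -> torch.Size([8, 1])
```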
10. Spiking Neurons with Neural Dynamics Implemented Using Stochastic Memristors.
- Author
-
Song, Lekai, Liu, Pengyu, Pei, Jingfang, Bai, Fan, Liu, Yang, Liu, Songwei, Wen, Yingyi, Ng, Leonard W. T., Pun, Kong‐Pang, Gao, Shuo, Meng, Max Q.‐H., Hasan, Tawfique, and Hu, Guohua
- Subjects
ARTIFICIAL neural networks ,MEMRISTORS ,BORON nitride ,ANOMALY detection (Computer security) ,INTERNET of things - Abstract
Implementing and integrating spiking neurons for neuromorphic hardware realization conforming to spiking neural networks holds great promise in enabling efficient learning and decision‐making. The spiking neurons, however, may lack the spiking dynamics to encode the dynamical information in complex real‐world problems. Herein, using filamentary memristors from solution‐processed hexagonal boron nitride, this study assembles leaky integrate‐and‐fire spiking neurons and, particularly, harnesses the common switching stochasticity feature in the memristors to allow key neural dynamics, including Poisson‐like spiking and adaptation. The neurons, with the dynamics fitted via hardware‐algorithm codesign, suggest a potential in realizing spike‐based neuromorphic hardware capable of handling complex problems. Simulation of an autoencoder for anomaly detection of time‐series real analog and digital data from physical systems is demonstrated, underscoring its promising prospect in applications, especially, at the edges with limited computation resources, for instance, auto‐pilot, manufacturing, wearables, and Internet of things. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
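Entries 10 and 15 index the same work, in which memristor switching stochasticity gives LIF neurons Poisson-like spiking and adaptation. A minimal software caricature of that idea, not a device model, is to jitter the firing threshold and add a spike-triggered adaptation current; all values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_lif(i_in, t_sim=2.0, dt=1e-4, tau_v=0.02, tau_a=0.5,
                   v_th=1.0, sigma_th=0.2, a_jump=0.3):
    """LIF neuron with a noisy firing threshold (Poisson-like spiking)
    and a spike-triggered adaptation current (firing slows down over time)."""
    v, a = 0.0, 0.0
    spike_times = []
    for step in range(int(t_sim / dt)):
        v += dt / tau_v * (-v + i_in - a)
        a += dt / tau_a * (-a)
        # threshold jitter stands in for the memristor's switching stochasticity
        if v >= v_th + sigma_th * rng.standard_normal():
            spike_times.append(step * dt)
            v = 0.0
            a += a_jump               # adaptation: each spike raises the current
    return np.array(spike_times)

spikes = stochastic_lif(i_in=1.5)
isi = np.diff(spikes)
print("mean rate:", len(spikes) / 2.0, "Hz")
print("ISI CV:", isi.std() / isi.mean())   # values near 1 indicate Poisson-like firing
```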
11. Supervised Learning Strategy for Spiking Neurons Based on Their Segmental Running Characteristics.
- Author
-
Gu, Xingjian, Shu, Xin, Yang, Jing, Xu, Yan, Jiang, Haiyan, and Shu, Xiangbo
- Subjects
SUPERVISED learning ,LEARNING strategies ,CONTINUOUS processing - Abstract
Supervised learning of spiking neurons is an effective simulation method to explore the learning mechanism of real neurons. Desired output spike trains are often used as supervised signals to control the synaptic strength adjustment of neurons for precise emission. The goal of supervised learning is also to allow spiking neurons to enter the desired running and firing state. The running process of a spiking neuron is a continuous process, but because of absolute refractory periods, it is regarded as several running segments. Based on the segmental characteristic, a new supervised learning strategy for spiking neurons is proposed to expand the action mode of supervised signals in supervised learning. Desired output spikes are used to actively regulate the running segments and make them more efficient in achieving the desired running and firing state. Supervised signals actively regulate the running process of neurons and are more comprehensively involved in the learning process than simply participating in adjusting synaptic weights. Based on two weight adjustment mechanisms of spiking neurons, two new specific supervised learning methods are proposed. The experimental results obtained using various settings indicate that the two learning methods have higher learning performance, which indicates the effectiveness of the new learning strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Artificial Neurons Using Ag−In−Zn−S/Sericin Peptide‐Based Threshold Switching Memristors for Spiking Neural Networks.
- Author
-
He, Nan, Yan, Jie, Zhang, Zhining, Qin, Haiming, Hu, Ertao, Wang, Xinpeng, Zhang, Hao, Chen, Pu, Xu, Feng, Sheng, Yang, Zhang, Lei, and Tong, Yi
- Subjects
ARTIFICIAL neural networks ,MEMRISTORS ,PATTERN recognition systems ,NEURONS ,THRESHOLD voltage - Abstract
Memristive devices with threshold switching characteristics can be effectively utilized to mimic biological neurons acting as one of the key building blocks for constructing advanced hardware neural networks. In this work, the emulation of leaky integrate‐and‐fire memristive neuron is realized in one single cell with Ag/Ag−In−Zn−S/silk sericin/W architecture without the need for additional auxiliary circuits. The studied devices demonstrate excellent electrical properties, such as stably repeatable threshold switching, concentratedly low threshold voltage (≈0.4 V), and relatively small device‐to‐device variation. In addition, multiple neural features, such as leaky integrate‐and‐fire neuron functionality and strength‐modulated spike frequency characteristic, have been successfully emulated owing to the forming‐free volatile threshold switching effect. The stable volatile threshold switching behaviors and regular firing event may be attributed to the controllable metallic Ag filamentary mechanism. Furthermore, a solid accuracy of 91.44% of the pattern recognition of Modified National Institute of Standards and Technology (MNIST) data is obtained via a trained spiking neural network (SNN) based on the leaky integrate‐and‐fire behavior of sericin‐based device. These achievements shed light on the fact that employing sericin biomaterials has great application potential in advanced neuromorphic computation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Driving Hexapods Through Insect Brain
- Author
-
Arena, Paolo, Cannizzo, Emanuele, Li Noce, Alessia, Patanè, Luca, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Meder, Fabian, editor, Hunt, Alexander, editor, Margheri, Laura, editor, Mura, Anna, editor, and Mazzolai, Barbara, editor
- Published
- 2023
- Full Text
- View/download PDF
14. Frameworks for SNNs: A Review of Data Science-Oriented Software and an Expansion of SpykeTorch
- Author
-
Manna, Davide L., Vicente-Sola, Alex, Kirkland, Paul, Bihl, Trevor J., Di Caterina, Gaetano, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Barbosa, Simone Diniz Junqueira, Editorial Board Member, Chen, Phoebe, Editorial Board Member, Cuzzocrea, Alfredo, Editorial Board Member, Du, Xiaoyong, Editorial Board Member, Kara, Orhun, Editorial Board Member, Liu, Ting, Editorial Board Member, Sivalingam, Krishna M., Editorial Board Member, Slezak, Dominik, Editorial Board Member, Washio, Takashi, Editorial Board Member, Yang, Xiaokang, Editorial Board Member, Yuan, Junsong, Editorial Board Member, Iliadis, Lazaros, editor, Maglogiannis, Ilias, editor, Alonso, Serafin, editor, Jayne, Chrisina, editor, and Pimenidis, Elias, editor
- Published
- 2023
- Full Text
- View/download PDF
15. Spiking Neurons with Neural Dynamics Implemented Using Stochastic Memristors
- Author
-
Lekai Song, Pengyu Liu, Jingfang Pei, Fan Bai, Yang Liu, Songwei Liu, Yingyi Wen, Leonard W. T. Ng, Kong‐Pang Pun, Shuo Gao, Max Q.‐H. Meng, Tawfique Hasan, and Guohua Hu
- Subjects
neural spiking dynamics ,self‐reset threshold switching memristors ,spike‐based neuromorphic computing ,spiking neurons ,switching stochasticity ,Electric apparatus and materials. Electric circuits. Electric networks ,TK452-454.4 ,Physics ,QC1-999 - Abstract
Abstract Implementing and integrating spiking neurons for neuromorphic hardware realization conforming to spiking neural networks holds great promise in enabling efficient learning and decision‐making. The spiking neurons, however, may lack the spiking dynamics to encode the dynamical information in complex real‐world problems. Herein, using filamentary memristors from solution‐processed hexagonal boron nitride, this study assembles leaky integrate‐and‐fire spiking neurons and, particularly, harnesses the common switching stochasticity feature in the memristors to allow key neural dynamics, including Poisson‐like spiking and adaptation. The neurons, with the dynamics fitted via hardware‐algorithm codesign, suggest a potential in realizing spike‐based neuromorphic hardware capable of handling complex problems. Simulation of an autoencoder for anomaly detection of time‐series real analog and digital data from physical systems is demonstrated, underscoring its promising prospect in applications, especially, at the edges with limited computation resources, for instance, auto‐pilot, manufacturing, wearables, and Internet of things.
- Published
- 2024
- Full Text
- View/download PDF
16. Artificial Neurons Using Ag−In−Zn−S/Sericin Peptide‐Based Threshold Switching Memristors for Spiking Neural Networks
- Author
-
Nan He, Jie Yan, Zhining Zhang, Haiming Qin, Ertao Hu, Xinpeng Wang, Hao Zhang, Pu Chen, Feng Xu, Yang Sheng, Lei Zhang, and Yi Tong
- Subjects
Ag−In−Zn−S quantum dot ,memristors ,silk sericin ,spiking neurons ,threshold switching ,Electric apparatus and materials. Electric circuits. Electric networks ,TK452-454.4 ,Physics ,QC1-999 - Abstract
Abstract Memristive devices with threshold switching characteristics can be effectively utilized to mimic biological neurons acting as one of the key building blocks for constructing advanced hardware neural networks. In this work, the emulation of leaky integrate‐and‐fire memristive neuron is realized in one single cell with Ag/Ag−In−Zn−S/silk sericin/W architecture without the need for additional auxiliary circuits. The studied devices demonstrate excellent electrical properties, such as stably repeatable threshold switching, concentratedly low threshold voltage (≈0.4 V), and relatively small device‐to‐device variation. In addition, multiple neural features, such as leaky integrate‐and‐fire neuron functionality and strength‐modulated spike frequency characteristic, have been successfully emulated owing to the forming‐free volatile threshold switching effect. The stable volatile threshold switching behaviors and regular firing event may be attributed to the controllable metallic Ag filamentary mechanism. Furthermore, a solid accuracy of 91.44% of the pattern recognition of Modified National Institute of Standards and Technology (MNIST) data is obtained via a trained spiking neural network (SNN) based on the leaky integrate‐and‐fire behavior of sericin‐based device. These achievements shed light on the fact that employing sericin biomaterials has great application potential in advanced neuromorphic computation.
- Published
- 2023
- Full Text
- View/download PDF
17. The input-dependent variable sampling (I-DEVS) energy-efficient digital neuron implementation method.
- Author
-
Leigh, Alexander J., Heidarpur, Moslem, and Mirhassani, Mitra
- Abstract
A method is proposed by which the power consumption of a biologically detailed digital neuron implementation can be reduced without modification to the digital neuron's hardware architecture and independent of the neuron model. This method results in substantial power savings by causing the neuron to enter a quasi-functional state when low input stimulus is received. This approach is analogous to the function of real biological neurons as they enter a low-activity state for low stimulus. The shifts in neuronal activity created by the novel method allow for the membrane potential to remain uncorrupted over a large domain of input synaptic current, while avoiding unnecessary computations and switching activity. The digital hardware implementation results are presented and discussed, and it is shown that the behaviour of the neuron is unaffected using the novel method. The power consumption of the implemented digital neurons is compared with traditional implementations, and considerable power savings are shown. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Competitive Learning with Spiking Nets and Spike Timing Dependent Plasticity
- Author
-
Huyck, Christian, Erekpaine, Orume, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bramer, Max, editor, and Stahl, Frederic, editor
- Published
- 2022
- Full Text
- View/download PDF
19. Multidimensional Dynamical Systems with Noise : Population Density Techniques for Neuroscience
- Author
-
Osborne, Hugh, Deutz, Lukas, de Kamps, Marc, Crusio, Wim E., Series Editor, Dong, Haidong, Series Editor, Radeke, Heinfried H., Series Editor, Rezaei, Nima, Series Editor, Steinlein, Ortrud, Series Editor, Xiao, Junjie, Series Editor, Giugliano, Michele, editor, Negrello, Mario, editor, and Linaro, Daniele, editor
- Published
- 2022
- Full Text
- View/download PDF
20. Epileptic Seizure Classification Using Spiking Neural Network from EEG Signals
- Author
-
Hussain, Irshed, Thounaojam, Dalton Meitei, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Patgiri, Ripon, editor, Bandyopadhyay, Sivaji, editor, Borah, Malaya Dutta, editor, and Emilia Balas, Valentina, editor
- Published
- 2022
- Full Text
- View/download PDF
21. Online spike-based recognition of digits with ultrafast microlaser neurons
- Author
-
Amir Masominia, Laurie E. Calvet, Simon Thorpe, and Sylvain Barbay
- Subjects
photonic hardware ,temporal coding ,rank-order code ,spiking neurons ,microlasers ,receptive fields ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Classification and recognition tasks performed on photonic hardware-based neural networks often require at least one offline computational step, such as in the increasingly popular reservoir computing paradigm. Removing this offline step can significantly improve the response time and energy efficiency of such systems. We present numerical simulations of different algorithms that utilize ultrafast photonic spiking neurons as receptive fields to allow for image recognition without an offline computing step. In particular, we discuss the merits of event, spike-time and rank-order based algorithms adapted to this system. These techniques have the potential to significantly improve the efficiency and effectiveness of optical classification systems, minimizing the number of spiking nodes required for a given task and leveraging the parallelism offered by photonic hardware.
- Published
- 2023
- Full Text
- View/download PDF
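Entry 21 relies on rank-order coding, in which the order of first spikes, rather than firing rates, carries the information. The snippet below illustrates the classic readout scheme, with each input's weight attenuated geometrically by its firing rank; it is a generic sketch, not the authors' photonic implementation, and the weights and spike times are invented.

```python
import numpy as np

def rank_order_response(spike_times, weights, mod=0.8):
    """Rank-order readout: earlier-firing inputs contribute more.
    The i-th input to fire is attenuated by mod**rank, so the response
    depends on the order of first spikes, not on firing rates."""
    order = np.argsort(spike_times)            # channels from earliest to latest
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(spike_times)) # rank of each input channel
    return np.sum(weights * mod ** ranks)

weights = np.array([0.9, 0.5, 0.1, 0.3])
early_pattern = np.array([1.0, 2.0, 3.0, 4.0])   # ms; order matches the weights
late_pattern = np.array([4.0, 3.0, 2.0, 1.0])    # reversed firing order
print(rank_order_response(early_pattern, weights))  # larger response
print(rank_order_response(late_pattern, weights))   # smaller response
```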
22. Leaky Integrate‐and‐Fire Mechanism in Exciton–Polariton Condensates for Photonic Spiking Neurons.
- Author
-
Tyszka, Krzysztof, Furman, Magdalena, Mirek, Rafał, Król, Mateusz, Opala, Andrzej, Seredyński, Bartłomiej, Suffczyński, Jan, Pacuski, Wojciech, Matuszewski, Michał, Szczytko, Jacek, and Piętka, Barbara
- Subjects
BOSE-Einstein condensation, PHOTON emission, POLARITONS, ARTIFICIAL neural networks, STIMULATED emission, PULSED lasers - Abstract
This paper introduces a new approach to neuromorphic photonics in which microcavities exhibiting strong exciton–photon interaction may serve as building blocks of optical spiking neurons. The experimental results demonstrate the intrinsic property of exciton–polaritons to resemble the Leaky Integrate‐and‐Fire (LIF) spiking mechanism. It is shown that exciton–polariton microcavities when non‐resonantly pumped with a pulsed laser exhibit leaky integration due to relaxation of the excitonic reservoir, threshold‐and‐fire mechanism due to transition to Bose–Einstein Condensate (BEC), and resetting due to stimulated emission of photons. These effects, evidenced in photoluminescence characteristics, arise within sub‐ns timescales. The presented approach provides means for ultrafast processing of spike‐like laser pulses with energy efficiency at the level below 1 pJ per spike. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Extended Category Learning with Spiking Nets and Spike Timing Dependent Plasticity
- Author
-
Huyck, Christian, Samey, Carlos, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bramer, Max, editor, and Ellis, Richard, editor
- Published
- 2021
- Full Text
- View/download PDF
24. Utilizing the Neuronal Behavior of Spiking Neurons to Recognize Music Signals Based on Time Coding Features
- Author
-
Dhvani Shah, Ajit Narayanan, and Josafath Israel Espinosa-Ramos
- Subjects
Classification ,music ,spiking neurons ,spiking neural networks ,STDP ,temporal data ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
This paper presents a Spiking Neural Network (SNN) architecture to distinguish two musical instruments: piano and violin. The acoustic characteristics of music such as frequency and time convey a lot of information that helps humans distinguish musical instruments within a few seconds. SNNs are neural networks that work effectively with temporal data. In this study, a 2-layer temporal-based SNN architecture is implemented for instrument (piano and violin) recognition. Further, this research investigates the behaviour of spiking neurons for piano and violin samples through different spike-based statistics. Additionally, a Gamma metric that utilises spike time information and the Root Mean Square Error (RMSE) computed from the membrane potential are used for classification and recognition. The SNN achieved overall classification accuracies of 92.38% and 93.19%, indicating the potential of SNNs in this inherently temporal recognition and classification domain. For comparison, we also implemented rate-coding techniques using machine learning (ML) methods. Through this research, we demonstrated that SNNs are more effective than conventional ML methods at capturing the important acoustic characteristics of music such as frequency and time. Overall, this research showed the potential of temporal coding over rate-coding techniques when processing spatial and temporal data.
- Published
- 2022
- Full Text
- View/download PDF
25. Functional relevance of inhibitory and disinhibitory circuits in signal propagation in recurrent neuronal networks
- Author
-
Bihun, Marzena Maria, Hennig, Matthias, and Wood, Emma
- Subjects
006.3 ,cell assemblies ,synfire chains ,spiking neurons ,disinhibitory pathways ,cholinergic modulation ,signal propagation - Abstract
Cell assemblies are considered to be physiological as well as functional units in the brain. A repetitive and stereotypical sequential activation of many neurons was observed, but the mechanisms underlying it are not well understood. Feedforward networks, such as synfire chains, with the pools of excitatory neurons unidirectionally connected and facilitating signal transmission in a cascade-like fashion were proposed to model such sequential activity. When embedded in a recurrent network, these were shown to destabilise the whole network’s activity, challenging the suitability of the model. Here, we investigate a feedforward chain of excitatory pools enriched by inhibitory pools that provide disynaptic feedforward inhibition. We show that when embedded in a recurrent network of spiking neurons, such an augmented chain is capable of robust signal propagation. We then investigate the influence of overlapping two chains on the signal transmission as well as the stability of the host network. While shared excitatory pools turn out to be detrimental to global stability, inhibitory overlap implicitly realises the motif of lateral inhibition, which, if moderate, maintains the stability but if substantial, it silences the whole network activity including the signal. Addition of a disinhibitory pathway along the chain proves to rescue the signal transmission by transforming a strong inhibitory wave into a disinhibitory one, which specifically guards the excitatory pools from receiving excessive inhibition and thereby allowing them to remain responsive to the forthcoming activation. Disinhibitory circuits not only improve the signal transmission, but can also control it via a gating mechanism. We demonstrate that by manipulating a firing threshold of the disinhibitory neurons, the signal transmission can be enabled or completely blocked. This mechanism corresponds to cholinergic modulation, which was shown to be signalled by volume as well as phasic transmission and variably target classes of neurons. Furthermore, we show that modulation of the feedforward inhibition circuit can promote generating spontaneous replay at the absence of external inputs. This mechanism, however, tends to also cause global instabilities. Overall, these results underscore the importance of inhibitory neuron populations in controlling signal propagation in cell assemblies as well as global stability. Specific inhibitory circuits, when controlled by neuromodulatory systems, can robustly guide or block the signals and invoke replay. This mounts to evidence that the population of interneurons is diverse and can be best categorised by neurons’ specific circuit functions as well as their responsiveness to neuromodulators.
- Published
- 2018
26. Voltage slope guided learning in spiking neural networks.
- Author
-
Lvhui Hu and Xin Liao
- Subjects
ARTIFICIAL neural networks ,SPEECH processing systems ,MACHINE learning ,MEMBRANE potential ,VOLTAGE ,MEDICAL coding - Abstract
A thorny problem in machine learning is how to extract useful clues related to delayed feedback signals from the clutter of input activity, known as the temporal credit-assignment problem. The aggregate-label learning algorithms make an explicit representation of this problem by training spiking neurons to assign the aggregate feedback signal to potentially effective clues. However, earlier aggregate-label learning algorithms suffered from inefficiencies due to the large amount of computation, while recent algorithms that have solved this problem may fail to learn due to the inability to find adjustment points. Therefore, we propose a membrane voltage slope guided algorithm (VSG) to further cope with this limitation. Direct dependence on the membrane voltage when finding the key point of weight adjustment makes VSG avoid intensive calculation, but more importantly, the membrane voltage that always exists makes it impossible to lose the adjustment point. Experimental results show that the proposed algorithm can correlate delayed feedback signals with the effective clues embedded in background spiking activity, and also achieves excellent performance on real medical classification datasets and speech classification datasets. The superior performance makes it a meaningful reference for aggregate-label learning on spiking neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. A High-Accuracy Digital Implementation of the Morris–Lecar Neuron With Variable Physiological Parameters.
- Author
-
Leigh, Alexander J., Heidarpur, Moslem, and Mirhassani, Mitra
- Abstract
A highly accurate digital implementation of the Morris-Lecar neuron model is presented with the intended application of hardware acceleration for neuroscience simulation. The novel implementation employs the COordinate Rotation DIgital Computer (CORDIC) algorithm to create a fixed-point implementation that is not only very accurate but requires low digital hardware resources. The accuracy exceeds that of the current state-of-the-art, requires fewer hardware resources to implement, and operates at a higher maximum clock frequency. The design is validated on FPGA and a normalized RMSE of 0.2039 is achieved at a maximum clock frequency of 378.07MHz. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
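For reference, the Morris-Lecar model implemented in entry 27 is usually written as the two-variable system below (standard textbook form; the paper's fixed-point CORDIC formulation is not reproduced here):

```latex
\begin{aligned}
C \frac{dV}{dt} &= I - g_L (V - V_L) - g_{Ca}\, m_\infty(V)\,(V - V_{Ca}) - g_K\, w\,(V - V_K),\\
\frac{dw}{dt} &= \phi\, \frac{w_\infty(V) - w}{\tau_w(V)},\\
m_\infty(V) &= \tfrac{1}{2}\!\left[1 + \tanh\!\left(\frac{V - V_1}{V_2}\right)\right],\quad
w_\infty(V) = \tfrac{1}{2}\!\left[1 + \tanh\!\left(\frac{V - V_3}{V_4}\right)\right],\quad
\tau_w(V) = \frac{1}{\cosh\!\left(\frac{V - V_3}{2 V_4}\right)}.
\end{aligned}
```

Here $m_\infty$ is the instantaneous calcium activation, $w$ is the slow potassium gating variable, and $V_1,\dots,V_4$ are fitting parameters of the activation curves.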
28. Supervised Learning of Action Selection in Cognitive Spiking Neuron Models
- Author
-
Stewart, Terrence C, Thorgeirsson, Sverrir, and Eliasmith, Chris
- Subjects
neural engineering framework ,neural production systems ,semantic pointer architecture ,spiking neurons ,basal ganglia ,neural cognitive architectures - Abstract
We have previously shown that a biologically realistic spiking neuron implementation of an action selection/execution system (constrained by the neurological connectivity of the cortex, basal ganglia, and thalamus) is capable of performing complex tasks, such as the Tower of Hanoi, n-Back, and semantic memory search. However, because the neural implementation approximates a strict rule-based structure of a production system, such models have involved hand-tweaking of multiple parameters to get the desired behaviour. Here, we show that a simple, local, online learning rule can be used to learn these parameters, resulting in neural models of cognitive behaviours that are more reliable and easier to construct than with prior methods.
- Published
- 2018
29. Neuroscience inspired neural operator for partial differential equations.
- Author
-
Garg, Shailesh and Chakraborty, Souvik
- Subjects
ARTIFICIAL neural networks, PARTIAL differential equations, DIFFERENTIAL operators, ARTIFICIAL intelligence, BURGERS' equation - Abstract
We propose, in this paper, a Variable Spiking Wavelet Neural Operator (VS-WNO), which aims to bridge the gap between theoretical and practical implementation of Artificial Intelligence (AI) algorithms for mechanics applications. With recent developments like the introduction of neural operators, AI's potential for being used in mechanics applications has increased significantly. However, AI's immense energy and resource requirements are a hurdle in its practical field use case. The proposed VS-WNO is based on the principles of spiking neural networks, which have shown promise in reducing the energy requirements of the neural networks. This makes possible the use of such algorithms in edge computing. The proposed VS-WNO utilizes variable spiking neurons, which promote sparse communication, thus conserving energy, and its use is further supported by its ability to tackle regression tasks, often faced in the field of mechanics. Various examples dealing with partial differential equations, like Burger's equation, Allen Cahn's equation, and Darcy's equation, have been shown. Comparisons have been shown against wavelet neural operator utilizing leaky integrate and fire neurons (direct and encoded inputs) and vanilla wavelet neural operator utilizing artificial neurons. The results produced illustrate the ability of the proposed VS-WNO to converge to ground truth while promoting sparse communication. • Neuroscience inspired operator learning is proposed for scientific computing. • Proposed VS-WNO promotes sparse communications and is energy efficient. • We introduce a tailored spiking loss function to limit spiking activity. • Numerical examples solved illustrate accuracy and energy efficiency of VS-WNO. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Meta-learning in spiking neural networks with reward-modulated STDP.
- Author
-
Gholamzadeh Khoee, Arsham, Javaheri, Alireza, Kheradpisheh, Saeed Reza, and Ganjtabesh, Mohammad
- Subjects
ARTIFICIAL neural networks, MACHINE learning, EPISODIC memory, PREFRONTAL cortex, LEARNING ability - Abstract
The human brain constantly learns and rapidly adapts to new situations by integrating acquired knowledge and experiences into memory. Developing this capability in machine learning models is considered an important goal of AI research since deep neural networks perform poorly when there is limited data or when they need to adapt quickly to new unseen tasks. Meta-learning models are proposed to facilitate quick learning in low-data regimes by employing absorbed information from the past. Although some models have recently been introduced that reached high-performance levels, they are not biologically plausible. In our research, we have proposed a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex using spiking neural networks with a reward-based learning system. The major contribution of our work lies in the design of a bio-plausible meta-learning framework that incorporates learning rules such as Spike-Timing-Dependent Plasticity (STDP) and Reward-Modulated STDP (R-STDP). This framework not only reflects biological learning mechanisms more accurately but also attains competitive results comparable to those achieved by traditional gradient descent-based approaches in meta-learning. Our proposed model includes a memory designed to prevent catastrophic forgetting, a phenomenon that occurs when meta-learning models forget what they have learned so far as learning the new task begins. Furthermore, our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware. The implications and predictions of various models for solving few-shot classification tasks are extensively analyzed. Base on the results, our model has demonstrated the ability to compete with the existing state-of-the-art meta-learning techniques, representing a significant step towards creating AI systems that emulate the human brain's ability to learn quickly and efficiently from limited data. • "Higher accuracy & generalization w.r.t SOTA methods in few-shot classification tasks." • "Improved the generalization of meta-SNNs by simulating an efficient episodic memory." • "Demonstrating the potential of using reward-modulated STDP in SNNS for meta-learning." [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
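Entry 30 builds on reward-modulated STDP (R-STDP). In the generic form of that rule, pair-based STDP updates are first accumulated into a decaying eligibility trace and only committed to the weights when a reward signal arrives. The sketch below shows that generic mechanism for a single synapse; it is not the paper's meta-learning framework, and every constant and event is invented for illustration.

```python
import numpy as np

# Generic R-STDP for one synapse (illustrative constants).
a_plus, a_minus = 0.01, 0.012      # STDP amplitudes (LTP / LTD)
tau_stdp = 20.0                    # ms, STDP time constant
tau_elig = 500.0                   # ms, eligibility-trace time constant
lr = 1.0                           # learning rate applied to reward * eligibility

def stdp_kernel(dt_ms):
    """Pair-based STDP: pre-before-post (dt>0) potentiates, post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_stdp)
    return -a_minus * np.exp(dt_ms / tau_stdp)

w, elig = 0.5, 0.0
events = [                          # (time ms, pre spike time, post spike time, reward)
    (10, 10, 15, 0.0),              # pre 5 ms before post -> positive tag
    (60, 70, 65, 0.0),              # post before pre -> negative tag
    (300, None, None, 1.0),         # delayed reward arrives
]
t_prev = 0.0
for t, pre, post, reward in events:
    elig *= np.exp(-(t - t_prev) / tau_elig)   # eligibility decays between events
    t_prev = t
    if pre is not None and post is not None:
        elig += stdp_kernel(post - pre)        # STDP tags the synapse; no weight change yet
    if reward:
        w += lr * reward * elig                # reward converts the tag into a weight change
print("final weight:", w)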
31. A surrogate gradient spiking baseline for speech command recognition.
- Author
-
Bittar, Alexandre and Garner, Philip N.
- Abstract
Artificial neural networks (ANNs) are the basis of recent advances in artificial intelligence (AI); they typically use real valued neuron responses. By contrast, biological neurons are known to operate using spike trains. In principle, spiking neural networks (SNNs) may have a greater representational capability than ANNs, especially for time series such as speech; however their adoption has been held back by both a lack of stable training algorithms and a lack of compatible baselines. We begin with a fairly thorough review of literature around the conjunction of ANNs and SNNs. Focusing on surrogate gradient approaches, we proceed to define a simple but relevant evaluation based on recent speech command tasks. After evaluating a representative selection of architectures, we show that a combination of adaptation, recurrence and surrogate gradients can yield light spiking architectures that are not only able to compete with ANN solutions, but also retain a high degree of compatibility with them in modern deep learning frameworks. We conclude tangibly that SNNs are appropriate for future research in AI, in particular for speech processing applications, and more speculatively that they may also assist in inference about biological function. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
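The surrogate gradient idea in entries 31 and 37 replaces the ill-defined derivative of the spike threshold with a smooth stand-in during the backward pass only. A minimal PyTorch sketch follows (fast-sigmoid-style surrogate; the slope constant is arbitrary, and this is not the authors' exact training setup):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()          # binary spike

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        slope = 10.0                                  # arbitrary sharpness
        surrogate = 1.0 / (1.0 + slope * x.abs()) ** 2  # fast-sigmoid derivative
        return grad_output * surrogate

spike = SurrogateSpike.apply

# Gradients now flow through the threshold, so upstream parameters become trainable.
v = torch.tensor([-0.3, 0.1, 0.8], requires_grad=True)
out = spike(v - 0.5)           # threshold at 0.5
out.sum().backward()
print(out)                     # tensor([0., 0., 1.])
print(v.grad)                  # nonzero everywhere, unlike the true derivative
```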
32. A neuroscience-inspired spiking neural network for EEG-based auditory spatial attention detection.
- Author
-
Faghihi, Faramarz, Cai, Siqi, and Moustafa, Ahmed A.
- Subjects
AUDITORY selective attention, ARTIFICIAL neural networks, ATTENTION testing, AUDITORY neurons, ENTORHINAL cortex, ALPHA rhythm - Abstract
Recent studies have shown that alpha oscillations (8–13 Hz) enable the decoding of auditory spatial attention. Inspired by sparse coding in cortical neurons, we propose a spiking neural network model for auditory spatial attention detection. The proposed model can extract the patterns of recorded EEG of leftward and rightward attention, independently, and uses them to train the network to detect auditory spatial attention. Specifically, our model is composed of three layers, two of which are Integrate and Fire spiking neurons. We formulate a new learning rule that is based on the firing rate of pre- and post-synaptic neurons in the first and second layers of spiking neurons. The third layer has 10 spiking neurons and the pattern of their firing rate is used in the test phase to decode the auditory spatial attention of a given test sample. Moreover, the effects of using low connectivity rates of the layers and specific range of learning parameters of the learning rule are investigated. The proposed model achieves an average accuracy of 90% with only 10% of EEG signals as training data. This study also provides new insights into the role of sparse coding in both cortical networks subserving cognitive tasks and brain-inspired machine learning. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Artificial Multisensory Neurons with Fused Haptic and Temperature Perception for Multimodal In‐Sensor Computing.
- Author
-
Duan, Qingxi, Zhang, Teng, Liu, Chang, Yuan, Rui, Li, Ge, Jun Tiw, Pek, Yang, Ke, Ge, Chen, Yang, Yuchao, and Huang, Ru
- Subjects
METAL-insulator transitions ,NEURONS ,PATTERN recognition systems ,TEMPERATURE ,OXYGEN consumption ,SENSORY neurons ,SENSE organs - Abstract
The human receives and transmits various information from the outside world through different sensory systems. The sensory neurons integrate various sensory inputs into a synthetical perception to monitor complex environments, and this fundamentally determines the way how we perceive the world. Developing multifunctional artificial sensory elements that can integrate multisensory perception plays a vital role in future intelligent perception systems, whereas prior spiking neurons reported can only handle single‐mode physical signals. Herein, a bioinspired haptic‐temperature fusion spiking neuron based upon a serial connection of piezoresistive sensor and VO2 volatile memristor is presented. The artificial sensory neuron is capable of detecting and encoding pressure and temperature inputs based on the voltage dividing effect and the intrinsic thermal sensitivity of metal–insulator transition in VO2. Recognition of Braille characters is achieved through multiple piezoresistive sensors, taking advantage of the spatial integration capabilities of such spiking neurons. Notably, the traditionally separate haptic and temperature signals can be fused physically in the sensory neuron when synchronizing the two sensory cues, which is able to recognize multimodal haptic/temperature patterns. The artificial multisensory neuron thus provides a promising approach toward e‐skin, neurorobotics, and human–machine interaction technologies. A preprint version of the article can be found at: https://www.authorea.com/doi/full/10.22541/au.164668806.60849882. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Silicon Modeling of Spiking Neurons With Diverse Dynamic Behaviors.
- Author
-
Ni, Shenglan, Chen, Houpeng, Li, Xi, Lei, Yu, Wang, Qian, Lv, Yi, Zhang, Guangming, Song, Sannian, and Song, Zhitang
- Subjects
INTEGRATED circuits, NEURAL circuitry, PHASE diagrams, SILICON, MACHINE learning, BIOLOGICAL systems - Abstract
Since spiking neural networks (SNNs) can effectively simulate the information processing mechanism of the biological cortex, they are expected to bridge the gap between neuroscience and machine learning. The hardware simulation of large-scale SNNs requires a simple and versatile silicon neuron model framework. In this article, a spiking neuron circuit as the core device of SNNs is presented. The proposed neuron circuit can mimic the dynamics of different types of biological neurons by adjusting the bias voltage. In order to facilitate the implementation of the spiking neuron circuit based on complementary metal-oxide-semiconductor (CMOS) and reduce the overhead of the circuit area, a modified Mihalas–Niebur (MN) mathematical model is adopted. The improved MN model is biologically plausible and can still successfully display all dynamic behaviors observed in biology. The function of the proposed neuron circuit has been verified by the phase diagram analysis method. The simulation results show the designed neuron circuit can successfully replicate 15 of the 20 firing patterns exhibited by the biological cortex, which proves that the neuron can act as a universal spiking neuron in very large-scale integrated circuit (VLSI) neuromorphic networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Exact low-dimensional description for fast neural oscillations with low firing rates
- Author
-
Universitat Politècnica de Catalunya. Departament de Matemàtiques, Clusella Coberó, Pau, and Montbrió Fairen, Ernest
- Abstract
Recently, low-dimensional models of neuronal activity have been exactly derived for large networks of deterministic, quadratic integrate-and-fire (QIF) neurons. Such firing rate models (FRM) describe the emergence of fast collective oscillations (>30 Hz) via the frequency locking of a subset of neurons to the global oscillation frequency. However, the suitability of such models to describe realistic neuronal states is seriously challenged by the fact that during episodes of fast collective oscillations, neuronal discharges are often very irregular and have low firing rates compared to the global oscillation frequency. Here we extend the theory to derive exact FRM for QIF neurons to include noise and show that networks of stochastic neurons displaying irregular discharges at low firing rates during episodes of fast oscillations are governed by exactly the same evolution equations as deterministic networks. Our results reconcile two traditionally confronted views on neuronal synchronization and upgrade the applicability of exact FRM to describe a broad range of biologically realistic neuronal states.
- Published
- 2024
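For context, the deterministic exact firing-rate equations that entry 35 extends to the stochastic case are usually quoted for a population of QIF neurons with Lorentzian-distributed excitability (centre $\bar{\eta}$, half-width $\Delta$, recurrent coupling $J$, external input $I(t)$). The form below follows the published literature on exact firing-rate models and is not taken from the entry itself:

```latex
\begin{aligned}
\tau \dot{r} &= \frac{\Delta}{\pi \tau} + 2 r v, \\
\tau \dot{v} &= v^{2} + \bar{\eta} + J \tau r + I(t) - (\pi \tau r)^{2},
\end{aligned}
```

where $r$ is the population firing rate and $v$ the mean membrane potential.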
36. Artificial Multisensory Neurons with Fused Haptic and Temperature Perception for Multimodal In‐Sensor Computing
- Author
-
Qingxi Duan, Teng Zhang, Chang Liu, Rui Yuan, Ge Li, Pek Jun Tiw, Ke Yang, Chen Ge, Yuchao Yang, and Ru Huang
- Subjects
haptic perceptions ,in-sensor computing ,spiking neurons ,temperature perceptions ,VO2 volatile memristors ,Computer engineering. Computer hardware ,TK7885-7895 ,Control engineering systems. Automatic machinery (General) ,TJ212-225 - Abstract
The human receives and transmits various information from the outside world through different sensory systems. The sensory neurons integrate various sensory inputs into a synthetical perception to monitor complex environments, and this fundamentally determines the way how we perceive the world. Developing multifunctional artificial sensory elements that can integrate multisensory perception plays a vital role in future intelligent perception systems, whereas prior spiking neurons reported can only handle single‐mode physical signals. Herein, a bioinspired haptic‐temperature fusion spiking neuron based upon a serial connection of piezoresistive sensor and VO2 volatile memristor is presented. The artificial sensory neuron is capable of detecting and encoding pressure and temperature inputs based on the voltage dividing effect and the intrinsic thermal sensitivity of metal–insulator transition in VO2. Recognition of Braille characters is achieved through multiple piezoresistive sensors, taking advantage of the spatial integration capabilities of such spiking neurons. Notably, the traditionally separate haptic and temperature signals can be fused physically in the sensory neuron when synchronizing the two sensory cues, which is able to recognize multimodal haptic/temperature patterns. The artificial multisensory neuron thus provides a promising approach toward e‐skin, neurorobotics, and human–machine interaction technologies. A preprint version of the article can be found at: https://www.authorea.com/doi/full/10.22541/au.164668806.60849882.
- Published
- 2022
- Full Text
- View/download PDF
37. A surrogate gradient spiking baseline for speech command recognition
- Author
-
Alexandre Bittar and Philip N. Garner
- Subjects
spiking neurons ,physiologically plausible models ,deep learning ,signal processing ,speech recognition ,surrogate gradient learning ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Artificial neural networks (ANNs) are the basis of recent advances in artificial intelligence (AI); they typically use real valued neuron responses. By contrast, biological neurons are known to operate using spike trains. In principle, spiking neural networks (SNNs) may have a greater representational capability than ANNs, especially for time series such as speech; however their adoption has been held back by both a lack of stable training algorithms and a lack of compatible baselines. We begin with a fairly thorough review of literature around the conjunction of ANNs and SNNs. Focusing on surrogate gradient approaches, we proceed to define a simple but relevant evaluation based on recent speech command tasks. After evaluating a representative selection of architectures, we show that a combination of adaptation, recurrence and surrogate gradients can yield light spiking architectures that are not only able to compete with ANN solutions, but also retain a high degree of compatibility with them in modern deep learning frameworks. We conclude tangibly that SNNs are appropriate for future research in AI, in particular for speech processing applications, and more speculatively that they may also assist in inference about biological function.
- Published
- 2022
- Full Text
- View/download PDF
38. Spike-Timing-Dependent Plasticity With Activation-Dependent Scaling for Receptive Fields Development.
- Author
-
Bialas, Marcin and Mandziuk, Jacek
- Subjects
ARTIFICIAL neural networks - Abstract
Spike-timing-dependent plasticity (STDP) is one of the most popular and deeply biologically motivated forms of unsupervised Hebbian-type learning. In this article, we propose a variant of STDP extended by an additional activation-dependent scale factor. The consequent learning rule is an efficient algorithm, which is simple to implement and applicable to spiking neural networks (SNNs). It is demonstrated that the proposed plasticity mechanism combined with competitive learning can serve as an effective mechanism for the unsupervised development of receptive fields (RFs). Furthermore, the relationship between synaptic scaling and lateral inhibition is explored in the context of the successful development of RFs. Specifically, we demonstrate that maintaining a high level of synaptic scaling followed by its rapid increase is crucial for the development of neuronal mechanisms of selectivity. The strength of the proposed solution is assessed in classification tasks performed on the Modified National Institute of Standards and Technology (MNIST) data set with an accuracy level of 94.65% (a single network) and 95.17% (a network committee)—comparable to the state-of-the-art results of single-layer SNN architectures trained in an unsupervised manner. Furthermore, the training process leads to sparse data representation and the developed RFs have the potential to serve as local feature detectors in multilayered spiking networks. We also prove theoretically that when applied to linear Poisson neurons, our rule conserves total synaptic strength, guaranteeing the convergence of the learning process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
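Entry 38 augments STDP with an activation-dependent scale factor. The sketch below shows a simplified trace-based STDP update of the kind commonly used for unsupervised receptive-field learning, multiplied by a placeholder scaling term that shrinks as the postsynaptic unit becomes more active; the specific scaling function, input statistics, and constants are inventions for illustration, not the authors' rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trace-based STDP with a placeholder activation-dependent scale factor.
n_in, T = 100, 2000                     # input channels, time steps (ms)
w = rng.uniform(0.2, 0.4, n_in)         # synaptic weights in [0, 1]
x_trace = np.zeros(n_in)                # presynaptic traces
tau_trace, a_plus, a_minus = 20.0, 0.005, 0.004
v, v_th, post_act = 0.0, 5.0, 0.0

for t in range(T):
    pre = rng.random(n_in) < 0.02       # ~20 Hz Poisson-like input spikes
    x_trace += -x_trace / tau_trace + pre
    v = 0.95 * v + w @ pre              # leaky integration of weighted input
    if v >= v_th:                       # postsynaptic spike
        v = 0.0
        post_act = 0.99 * post_act + 1.0            # running postsynaptic activity
        scale = 1.0 / (1.0 + post_act)              # placeholder activation-dependent scaling
        w += scale * (a_plus * x_trace              # LTP for recently active inputs
                      - a_minus * (1.0 - x_trace))  # LTD for the rest
        np.clip(w, 0.0, 1.0, out=w)

print("weight mean/min/max:", w.mean(), w.min(), w.max())
```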
39. A scaleable spiking neural model of action planning
- Author
-
Blouw, Peter, Eliasmith, Chris, and Tripp, Bryan P.
- Subjects
planning ,affordances ,spiking neurons ,neural en-gineering framework ,semantic pointer architecture - Abstract
Past research on action planning has shed light on the neural mechanisms underlying the selection of simple motor actions, along with the cognitive mechanisms underlying the planning of action sequences in constrained problem solving domains. We extend this research by describing a neural model that rapidly plans action sequences in relatively unconstrained domains by manipulating structured representations of objects and the actions they typically afford. We provide an analysis that indicates our model is able to reliably accomplish goals that require correctly performing a sequence of up to 5 actions in a simulated environment. We also provide an analysis of the scaling properties of our model with respect to the number of objects and affordances that constitute its knowledge of the environment. Using simplified simulations we find that our model is likely to function effectively while picking from 10,000 actions related to 25,000 objects.
- Published
- 2016
40. Neural 2D Cart and Pole Control and Forward Model
- Author
-
Horton, Paul, Huyck, Chris, Wang, Xiaochen, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, and Samsonovich, Alexei V., editor
- Published
- 2019
- Full Text
- View/download PDF
41. Spiking Neural Models and Their Application in DNA Microarrays Classification
- Author
-
Vazquez, Roberto A., Garro, Beatriz A., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Tan, Ying, editor, Shi, Yuhui, editor, and Niu, Ben, editor
- Published
- 2019
- Full Text
- View/download PDF
42. Modelling an Adaptive Learning System Using Artificial Intelligence.
- Author
-
Dakheel AL-Fayyadh, Hayder Rahm, Ganim Ali, Salam Abdulabbas, and Abood, Basim
- Subjects
- *
INSTRUCTIONAL systems , *ARTIFICIAL intelligence , *ARTIFICIAL neural networks , *ADAPTIVE computing systems , *REPRESENTATIONS of graphs , *DEEP learning - Abstract
The goal of this paper is to use artificial intelligence to build and evaluate an adaptive learning system, adopting the basic approaches of spiking neural networks as well as artificial neural networks. Spiking neural networks receive increasing attention due to their advantages over traditional artificial neural networks. They have proven to be energy efficient, biologically plausible, and up to 10^5 times faster when simulated on analogue rather than traditional learning systems. Artificial neural network libraries use computational graphs as a pervasive representation; however, spiking models remain heterogeneous and difficult to train. Using the artificial intelligence deductive method, the paper posits two hypotheses that examine whether 1) there exists a common representation for both neural network paradigms for tutorial mentoring, and whether 2) spiking and non-spiking models can learn a simple recognition task for learning activities in adaptive learning. The first hypothesis is confirmed by specifying and implementing a domain-specific language that generates semantically similar spiking and non-spiking neural networks for tutorial mentoring. Through three classification experiments, the second hypothesis is shown to hold for non-spiking models, but cannot be proven for the spiking models. The paper contributes three findings: 1) a domain-specific language for modelling neural network topologies in adaptive tutorial mentoring for students, 2) a preliminary model for generalizable learning through back-propagation in spiking neural networks for student learning activities, also presented in the results section, and 3) a method for transferring optimised non-spiking parameters to spiking neural networks for the adaptive learning system. The latter contribution is promising because the vast machine learning literature can spill over to the emerging field of spiking neural networks and adaptive learning computing. Future work includes improving the back-propagation model, exploring time-dependent models for learning, and adding support for adaptive learning systems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
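The third contribution above, transferring optimised non-spiking parameters to a spiking network, can be illustrated with a rough sketch: copy dense-layer weights trained in a conventional network into a rate-coded LIF forward pass. Everything below (layer handling, neuron parameters, the rate coding itself) is an assumption made for illustration, not the authors' procedure.

```python
import numpy as np

def lif_forward(weights, x, steps=100, threshold=1.0):
    """Run a rate-coded forward pass through LIF layers that reuse weight
    matrices trained in a non-spiking network (illustrative sketch only).
    `weights` is a list of (out, in) arrays; `x` holds input rates in [0, 1]."""
    rates = x
    for W in weights:
        v = np.zeros(W.shape[0])        # membrane potentials
        counts = np.zeros(W.shape[0])   # output spike counts
        for _ in range(steps):
            # Sample Bernoulli input spikes from the current rates
            spikes_in = (np.random.rand(*rates.shape) < rates).astype(float)
            v += W @ spikes_in          # integrate weighted input spikes
            fired = v >= threshold
            counts += fired
            v[fired] = 0.0              # reset after a spike
        rates = counts / steps          # output rate feeds the next layer
    return rates
```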
43. Extreme neural machines.
- Author
-
Boucher-Routhier, Megan, Zhang, Bill Ling Feng, and Thivierge, Jean-Philippe
- Subjects
- *
RECURRENT neural networks , *MOVIE scenes - Abstract
Recurrent neural networks can solve a variety of computational tasks and produce patterns of activity that capture key properties of brain circuits. However, learning rules designed to train these models are time-consuming and prone to inaccuracies when tuning connection weights located deep within the network. Here, we describe a rapid one-shot learning rule to train recurrent networks composed of biologically-grounded neurons. First, inputs to the model are compressed onto a smaller number of recurrent neurons. Then, a non-iterative rule adjusts the output weights of these neurons based on a target signal. The model learned to reproduce natural images, sequential patterns, as well as a high-resolution movie scene. Together, results provide a novel avenue for one-shot learning in biologically realistic recurrent networks and open a path to solving complex tasks by merging brain-inspired models with rapid optimization rules. • We developed a one-shot learning rule to train biologically-realistic recurrent networks. • The model learned to faithfully reproduce both static images and sequential patterns. • Results open the path to rapid learning in brain-like networks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
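The non-iterative output-weight rule described above is closely related to a least-squares readout fit. A minimal sketch follows, assuming the recurrent activity has already been collected into a state matrix; the ridge regularisation and naming are illustrative rather than the paper's exact rule.

```python
import numpy as np

def fit_readout(states, targets, ridge=1e-3):
    """One-shot (non-iterative) readout: solve the output weights by
    ridge-regularised least squares from recurrent states to a target.
    `states` is (T, N) recorded activity, `targets` is (T, M)."""
    n = states.shape[1]
    A = states.T @ states + ridge * np.eye(n)
    W_out = np.linalg.solve(A, states.T @ targets)
    return W_out  # shape (N, M); the prediction is states @ W_out
```

A single linear solve of this kind replaces the iterative weight tuning that the abstract identifies as slow and error-prone deep inside the network.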
44. Sequence learning in a spiking neuronal network with memristive synapses
- Author
-
Younes Bouhadjar, Sebastian Siegel, Tom Tetzlaff, Markus Diesmann, Rainer Waser, and Dirk J Wouters
- Subjects
memristive devices ,sequence learning ,neuromorphic hardware ,brain-inspired computing ,plasticity rules ,spiking neurons ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. They endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks such as movement generation, perception, and language. Understanding how the brain performs such a computation is important not only to advance neuroscience, but also to pave the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. An emerging type of hardware that may efficiently run this type of algorithm is neuromorphic hardware. It emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random access memory (ReRAM) devices stand out in many respects. They permit scalability, are energy efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural network simulator NEST. We investigate two types of ReRAM memristive devices: (i) a gradual, analog switching device, and (ii) an abrupt, binary switching device. We study the effect of different device properties on the performance characteristics of the sequence learning model, and demonstrate that, in contrast to many other artificial neural networks, this architecture is resilient with respect to changes in the on-off ratio and the conductance resolution, device variability, and device failure.
- Published
- 2023
- Full Text
- View/download PDF
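The two device classes studied above can be caricatured by simple conductance-update functions: a gradual device that steps towards its bounds and a binary device that jumps between on and off states. The functional forms, switching probability, and parameter values below are assumptions made for illustration, not the measured device models used in the paper.

```python
import numpy as np

def update_analog_device(g, pulse, g_min=1e-6, g_max=1e-4, alpha=0.05):
    """Gradual, analog switching: each programming pulse moves the
    conductance a small step towards its bound (assumed exponential form)."""
    if pulse > 0:   # potentiating pulse
        return g + alpha * (g_max - g)
    return g - alpha * (g - g_min)   # depressing pulse

def update_binary_device(g, pulse, g_on=1e-4, g_off=1e-6, p_switch=0.3):
    """Abrupt, binary switching: the device jumps between its on and off
    states with some switching probability per pulse (assumed behaviour)."""
    if np.random.rand() < p_switch:
        return g_on if pulse > 0 else g_off
    return g
```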
45. Sequence Learning in a Single Trial: A Spiking Neurons Model Based on Hippocampal Circuitry.
- Author
-
Coppolino, Simone, Giacopelli, Giuseppe, and Migliore, Michele
- Subjects
- *
HIPPOCAMPUS (Brain) , *COGNITIVE ability , *NEURAL circuitry , *COMPUTER architecture - Abstract
In contrast with our everyday experience using brain circuits, it can take a prohibitively long time to train a computational system to produce the correct sequence of outputs in the presence of a series of inputs. This suggests that something important is missing in the way in which models are trying to reproduce basic cognitive functions. In this work, we introduce a new neuronal network architecture that is able to learn, in a single trial, an arbitrary long sequence of any known objects. The key point of the model is the explicit use of mechanisms and circuitry observed in the hippocampus, which allow the model to reach a level of efficiency and accuracy that, to the best of our knowledge, is not possible with abstract network implementations. By directly following the natural system’s layout and circuitry, this type of implementation has the additional advantage that the results can be more easily compared to the experimental data, allowing a deeper and more direct understanding of the mechanisms underlying cognitive functions and dysfunctions and opening the way to a new generation of learning architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
46. FPGA-NHAP: A General FPGA-Based Neuromorphic Hardware Acceleration Platform With High Speed and Low Power.
- Author
-
Liu, Yijun, Chen, Yuehai, Ye, Wujian, and Gui, Yu
- Subjects
- *
FIELD programmable gate arrays - Abstract
Spiking neural networks (SNNs) can process discrete spikes and offer a high degree of real-time performance and an excellent energy efficiency ratio. However, most current neuromorphic hardware platforms lack efficient driven algorithms and only support a single type of neuron model, resulting in slow speed and poor scalability. This paper proposes a general FPGA-based neuromorphic hardware acceleration platform (FPGA-NHAP) that supports effective SNN inference and acceleration with low power, high speed, and good scalability. First, a neuron computing unit is designed to simulate both LIF and Izhikevich (IZH) neurons using a parallel spike caching and scheduling technique. Second, a novel integrated driven update algorithm is proposed to complete the spike encoding of external data, effectively reducing the waiting time of neuron state updates. Third, the proposed platform is implemented using a RISC-V processor and a Xilinx FPGA, simulating 16,384 neurons and 16.8 million synapses with a power consumption of 0.535 W. Finally, two different three-layer SNN networks are deployed on the proposed platform for recognition tasks on the MNIST and Fashion-MNIST datasets, achieving accuracies of 97.70% and 85.14% (LIF) and 97.81% and 83.16% (IZH), and frame rates of 208 frame/s and 128 frame/s (LIF) and 206 frame/s and 141 frame/s (IZH), respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
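For reference, the two neuron models supported by the platform can be expressed as single-step software updates. The Euler discretisation and parameter values below are generic textbook forms, not the FPGA implementation described in the paper.

```python
def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire (LIF) neuron.
    Returns the new membrane potential and 1 if a spike was emitted."""
    v += dt / tau_m * (v_rest - v) + i_syn
    if v >= v_th:
        return v_reset, 1
    return v, 0

def izh_step(v, u, i_syn, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich (IZH) neuron model with the
    standard regular-spiking parameters."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_syn)
    u += dt * a * (b * v - u)
    if v >= 30.0:
        return c, u + d, 1   # spike: reset v to c, increment recovery u
    return v, u, 0
```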
47. Event-driven contrastive divergence: neural sampling foundations
- Author
-
Neftci, Emre, Das, Srinjoy, Pedroni, Bruno, Kreutz-Delgado, Kenneth, and Cauwenberghs, Gert
- Subjects
Biological Psychology ,Biomedical and Clinical Sciences ,Neurosciences ,Psychology ,Markov chain Monte Carlo ,neural sampling ,probabilistic inference ,spiking neurons ,synaptic plasticity ,Cognitive Sciences ,Biological psychology - Published
- 2015
48. A Supervised Learning Algorithm for Spiking Neurons Using Spike Train Kernel Based on a Unit of Pair-Spike
- Author
-
Guojun Chen and Guoen Wang
- Subjects
Direct computation ,spike selection ,spike train kernel ,spiking neural networks ,spiking neurons ,supervised learning ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In recent years, neuroscientists have discovered that neural information is encoded by spike trains with precise timing. Supervised learning algorithms based on precise spike times for spiking neurons have therefore become an important research field. Although many existing algorithms have excellent learning ability, most of their mechanisms still involve complex computations and certain limitations. Moreover, the discontinuity of the spiking process also makes it very difficult to build an efficient algorithm. This paper proposes a supervised learning algorithm for spiking neurons using a kernel function of spike trains based on a unit of pair-spike. Firstly, we comprehensively divide the intervals of spike trains. Then, we construct an optimal selection and computation method for spikes based on the unit of pair-spike. This method avoids some erroneous computations and reduces the computational cost by using each effective input spike only once in every epoch. Finally, we use a kernel function defined by an inner product operator to solve the computational problem posed by the discontinuous spiking process and multiple output spikes. The proposed algorithm is successfully applied to many spike train learning tasks, where the effect of our optimal selection and computation method is verified and the influence of learning factors such as the learning kernel, learning rate, and learning epoch is analyzed. Moreover, compared with other algorithms, all experimental results show that our proposed algorithm has higher learning accuracy and good learning efficiency.
- Published
- 2020
- Full Text
- View/download PDF
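The core ingredient named above, a spike train kernel defined through an inner product, can be sketched generically as a sum of a Gaussian kernel over all spike pairs. The paper's pair-spike selection rule is not reproduced here; the kernel width and functional form are assumptions of this sketch.

```python
import numpy as np

def spike_train_kernel(s1, s2, tau=5.0):
    """Generic inner-product kernel between two spike trains: a Gaussian
    kernel summed over all pairs of spike times (in ms)."""
    s1, s2 = np.asarray(s1, dtype=float), np.asarray(s2, dtype=float)
    if s1.size == 0 or s2.size == 0:
        return 0.0
    diff = s1[:, None] - s2[None, :]          # pairwise time differences
    return float(np.sum(np.exp(-(diff ** 2) / (2.0 * tau ** 2))))

# Usage example: similarity between a desired and an actual output train
print(spike_train_kernel([10.0, 30.0, 55.0], [12.0, 29.0, 60.0]))
```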
49. Progress and Benchmark of Spiking Neuron Devices and Circuits
- Author
-
Fu-Xiang Liang, I-Ting Wang, and Tuo-Hung Hou
- Subjects
in-memory computing ,neuromorphic computing ,spiking neurons ,Computer engineering. Computer hardware ,TK7885-7895 ,Control engineering systems. Automatic machinery (General) ,TJ212-225 - Abstract
The sustainability of ever more sophisticated artificial intelligence relies on the continual development of highly energy‐efficient and compact computing hardware that mimics the biological neural networks. Recently, the neural firing properties have been widely explored in various spiking neuron devices, which could emerge as the fundamental building blocks of future neuromorphic/in‐memory computing hardware. By leveraging the intrinsic device characteristics, the device‐based spiking neuron has the potential advantage of a compact circuit area for implementing neural networks with high density and high parallelism. However, a comprehensive benchmark that considers not only the device but also the peripheral circuit necessary for realizing complete neural functions is still lacking. Herein, the recent progress of emerging spiking neuron devices and circuits is reviewed. By implementing peripheral analog circuits for supporting various spiking neuron devices in the in‐memory computing architecture, the advantages and challenges in area and energy efficiency are discussed by benchmarking various technologies. A small or even no membrane capacitor, a self‐reset property, and a high spiking frequency are highly desirable.
- Published
- 2021
- Full Text
- View/download PDF
50. Spike frequency adaptation supports network computations on temporally dispersed information
- Author
-
Darjan Salaj, Anand Subramoney, Ceca Kraisnikovic, Guillaume Bellec, Robert Legenstein, and Wolfgang Maass
- Subjects
computational neuroscience ,simulation ,working memory ,spiking neurons ,spike-frequency adaptation ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. The inclusion of this well-known property of a substantial fraction of neurons in the neocortex – especially in higher areas of the human neocortex – moves the performance of spiking neural network models for computations on network inputs that are temporally dispersed from a fairly low level up to the performance level of the human brain.
- Published
- 2021
- Full Text
- View/download PDF
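Spike frequency adaptation of the kind examined above is often modelled as a LIF neuron with an adaptive threshold that is raised by each spike and decays slowly back. The following sketch uses generic, illustrative parameters rather than the network model from the paper.

```python
def alif_step(v, a, i_syn, dt=1.0, tau_m=20.0, tau_a=200.0,
              v_th0=1.0, beta=0.2, v_reset=0.0):
    """One step of an adaptive LIF neuron: the effective threshold is
    v_th0 + beta * a, where the adaptation variable a jumps at each spike
    and decays with the slow time constant tau_a."""
    v += dt / tau_m * (-v) + i_syn
    a += -dt / tau_a * a
    if v >= v_th0 + beta * a:
        return v_reset, a + 1.0, 1   # spike: reset v, bump adaptation
    return v, a, 0
```

The slow adaptation variable gives each neuron a memory trace lasting far longer than its membrane time constant, which is the property the abstract links to computations on temporally dispersed information.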