131 results
Search Results
2. Applications of graph theory to the analysis of fNIRS data in hyperscanning paradigms.
- Author
-
Oku, Amanda Yumi Ambriola, Barreto, Candida, Bruneri, Guilherme, Brockington, Guilherme, Fujita, Andre, and Sato, João Ricardo
- Subjects
GRAPH theory, DATA analysis, TEMPOROPARIETAL junction, PREFRONTAL cortex, COINTEGRATION
- Abstract
Hyperscanning is a promising tool for investigating the neurobiological underpinnings of social interactions and affective bonds. Recently, graph theory measures, such as modularity, have been proposed for estimating the global synchronization between brains. This paper proposes the bootstrap modularity test as a way of determining whether a pair of brains is coactivated. The test is illustrated as a screening tool in an application to fNIRS data collected from the prefrontal cortex and temporoparietal junction of five teacher-preschooler dyads performing an interaction task. In this application, graph hub centrality measures identify that the dyad's synchronization is critically explained by the relation between the teacher's language and number processing and the child's phonological processing. The analysis of these metrics may provide further insights into the neurobiological underpinnings of interaction, such as in educational contexts.
- Published
- 2022
- Full Text
- View/download PDF
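The abstract above names the bootstrap modularity test but does not spell it out. As a hypothetical sketch of the underlying idea, one can compare the observed Newman modularity of an inter-brain channel graph against a null distribution obtained by resampling community labels. The function names and the label-shuffling null model below are illustrative assumptions, not the authors' implementation.

```python
import random

def newman_modularity(edges, communities):
    """Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    for an undirected, unweighted graph given as an edge list."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    label = {node: idx for idx, comm in enumerate(communities) for node in comm}
    # Each intra-community undirected edge contributes 2/(2m) to the A_ij term.
    q = sum(1.0 / m for u, v in edges if label[u] == label[v])
    # Null-model term over all ordered node pairs sharing a community.
    for u in degree:
        for v in degree:
            if label[u] == label[v]:
                q -= degree[u] * degree[v] / (4.0 * m * m)
    return q

def bootstrap_p_value(edges, communities, n_boot=500, seed=0):
    """Shuffle community labels to build a null distribution for Q."""
    rng = random.Random(seed)
    observed = newman_modularity(edges, communities)
    nodes = [n for comm in communities for n in comm]
    sizes = [len(comm) for comm in communities]
    exceed = 0
    for _ in range(n_boot):
        shuffled = nodes[:]
        rng.shuffle(shuffled)
        null_comms, start = [], 0
        for s in sizes:
            null_comms.append(set(shuffled[start:start + s]))
            start += s
        if newman_modularity(edges, null_comms) >= observed:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)  # smoothed p-value
```

On two triangles joined by one bridge, the true two-community split yields Q = 6/7 − 0.5 ≈ 0.357, and only label permutations reproducing that split match it, so the bootstrap p-value is small.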
3. Decoding the application of deep learning in neuroscience: a bibliometric analysis.
- Author
-
Yin Li and Zilong Zhong
- Subjects
DEEP learning, BIBLIOMETRICS, CONVOLUTIONAL neural networks, TECHNOLOGICAL innovations, CLASSIFICATION algorithms
- Abstract
The application of deep learning in neuroscience holds unprecedented potential for unraveling the complex dynamics of the brain. Our bibliometric analysis, spanning from 2012 to 2023, delves into the integration of deep learning in neuroscience, shedding light on the evolutionary trends and identifying pivotal research hotspots. Through the examination of 421 articles, this study unveils a significant growth in interdisciplinary research, marked by the burgeoning application of deep learning techniques in understanding neural mechanisms and addressing neurological disorders. Central to our findings is the critical role of classification algorithms, models, and neural networks in advancing neuroscience, highlighting their efficacy in interpreting complex neural data, simulating brain functions, and translating theoretical insights into practical diagnostics and therapeutic interventions. Additionally, our analysis delineates a thematic evolution, showcasing a shift from foundational methodologies toward more specialized and nuanced approaches, particularly in areas like EEG analysis and convolutional neural networks. This evolution reflects the field's maturation and its adaptation to technological advancements. The study further emphasizes the importance of interdisciplinary collaborations and the adoption of cutting-edge technologies to foster innovation in decoding the cerebral code. The current study provides a strategic roadmap for future explorations, urging the scientific community toward areas ripe for breakthrough discoveries and practical applications. This analysis not only charts the past and present landscape of deep learning in neuroscience but also illuminates pathways for future research, underscoring the transformative impact of deep learning on our understanding of the brain.
- Published
- 2024
- Full Text
- View/download PDF
4. Electroencephalogram Emotion Recognition Based on 3D Feature Fusion and Convolutional Autoencoder
- Author
-
Yanling An, Shaohai Hu, Xiaoying Duan, Ling Zhao, Caiyun Xie, and Yingying Zhao
- Subjects
Computer science, Speech recognition, Feature extraction, Neuroscience (miscellaneous), Electroencephalography, Convolutional neural network, Autoencoder, Arousal, Differential entropy, Emotion recognition, Feature fusion, Stacked autoencoder, Spatial analysis, Neuroscience
- Abstract
As one of the key technologies of affective computing, emotion recognition has received great attention. Electroencephalogram (EEG) signals are spontaneous and difficult to camouflage, so they are used for emotion recognition in both academic and industrial circles. To overcome the reliance of traditional machine-learning-based emotion recognition on manual feature extraction, we propose an EEG emotion recognition algorithm based on 3D feature fusion and a convolutional autoencoder (CAE). First, the differential entropy (DE) features of different frequency bands of the EEG signals are fused to construct 3D features that retain the spatial information between channels. The constructed 3D features are then fed into the CAE for emotion recognition. Extensive experiments on the public DEAP dataset yield recognition accuracies of 89.49% and 90.76% for the valence and arousal dimensions, respectively, indicating that the proposed method is well suited to emotion recognition tasks.
- Published
- 2021
- Full Text
- View/download PDF
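Under a Gaussian assumption, the differential entropy (DE) of a band-filtered EEG segment has the closed form DE = ½ ln(2πeσ²), which is why DE reduces each band to a single number per channel. The band labels and helper names below are illustrative assumptions, not the authors' code.

```python
import math

BANDS = ("theta", "alpha", "beta", "gamma")  # illustrative band labels

def differential_entropy(samples):
    """DE of a band-filtered segment under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * sigma^2)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * variance)

def de_feature_vector(band_segments):
    """One DE value per frequency band for a single channel. Stacking these
    per-channel vectors over the 2D electrode grid yields the kind of 3D
    feature cube described in the abstract above."""
    return [differential_entropy(band_segments[band]) for band in BANDS]
```

For a unit-variance segment, DE evaluates to 0.5·ln(2πe) ≈ 1.419, independent of the mean.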
5. Review on Emotion Recognition Based on Electroencephalography
- Author
-
Ying Zhang, Haoran Liu, Yujun Li, and Xiangyi Kong
- Subjects
SEED, DEAP, DREAMER, Computer science, Feature extraction, Feature selection, Electroencephalography, Convolutional neural network, Emotion recognition, EEG, Medical diagnosis, Cognitive psychology, Neuroscience
- Abstract
Emotions are closely related to human behavior, family, and society. Changes in emotion cause differences in electroencephalography (EEG) signals, which reflect different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, the military, and other fields. In this paper, we describe the common steps of an EEG-based emotion recognition algorithm, from data acquisition, preprocessing, feature extraction, and feature selection to classification. We then review existing EEG-based emotion recognition methods and assess their classification performance. This paper will help researchers quickly grasp the basic theory of emotion recognition and provides references for the future development of EEG-based emotion recognition. Moreover, emotion is an important aspect of safety psychology.
- Published
- 2021
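The processing chain this review walks through (acquisition → preprocessing → feature extraction → feature selection → classifier) can be sketched as a simple function composition. The stage implementations below are trivial placeholders for illustration, not any specific method from the review.

```python
def recognize_emotion(raw_eeg, preprocess, extract, select, classify):
    """Generic EEG emotion-recognition pipeline: each stage is a pluggable
    callable, mirroring the steps enumerated in the review."""
    return classify(select(extract(preprocess(raw_eeg))))

# Toy instantiation with stand-in stages:
label = recognize_emotion(
    [0.2, 0.9, 0.4],
    preprocess=lambda x: [v - sum(x) / len(x) for v in x],  # mean removal
    extract=lambda x: [max(x), min(x)],                     # crude features
    select=lambda f: f[:1],                                 # keep first feature
    classify=lambda f: "high arousal" if f[0] > 0.3 else "low arousal",
)
```

Swapping any stage (e.g., replacing the crude features with band-power or DE features) leaves the rest of the pipeline untouched, which is the practical point of structuring the steps this way.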
6. A Study on Arrhythmia via ECG Signal Classification Using the Convolutional Neural Network
- Author
-
Shen Yuong Wong, Yongdi Lu, Wenli Yang, and Mengze Wu
- Subjects
Heartbeat, Computer science, Convolutional neural network, Feature classification, Wavelet, Robustness, Anti-noise performance, ECG, Deep learning, Feature recognition, Pattern recognition, Random forest, Artificial neural network, Artificial intelligence, Neuroscience
- Abstract
Cardiovascular diseases (CVDs) are the leading cause of death today. The current method of identifying these diseases is analysis of the electrocardiogram (ECG), a medical monitoring technology that records cardiac activity. Unfortunately, relying on experts to analyze large amounts of ECG data consumes too many medical resources, so machine-learning methods for identifying ECG characteristics have gradually become prevalent. However, typical methods have drawbacks: they require manual feature recognition, complex models, and long training times. This paper proposes a robust and efficient 12-layer deep one-dimensional convolutional neural network for classifying the five micro-classes of heartbeat types in the MIT-BIH Arrhythmia database. The five types of heartbeat features are classified, and a wavelet self-adaptive threshold denoising method is used in the experiments. Compared with a BP neural network, random forest, and other CNN architectures, the results show that the proposed model performs better in accuracy, sensitivity, robustness, and anti-noise capability. Its accurate classification effectively saves medical resources, which has a positive effect on clinical practice.
- Published
- 2021
- Full Text
- View/download PDF
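Wavelet threshold denoising of the kind mentioned above typically soft-thresholds the detail coefficients. A minimal sketch, assuming the classic Donoho-Johnstone universal threshold σ√(2 ln N) with a median-based noise estimate; the paper's exact self-adaptive rule may differ.

```python
import math

def soft_threshold(coeffs, threshold):
    """Shrink each wavelet detail coefficient toward zero by the threshold,
    keeping its sign (soft thresholding)."""
    return [math.copysign(max(abs(c) - threshold, 0.0), c) for c in coeffs]

def universal_threshold(coeffs):
    """Universal threshold sigma * sqrt(2 * ln N), with sigma estimated
    robustly from the median absolute coefficient (MAD / 0.6745)."""
    n = len(coeffs)
    sigma = sorted(abs(c) for c in coeffs)[n // 2] / 0.6745
    return sigma * math.sqrt(2.0 * math.log(n))
```

In a full pipeline, the ECG signal would be wavelet-decomposed, the detail coefficients soft-thresholded as above, and the signal reconstructed before feeding beats to the CNN.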
8. Recent Trends in Non-invasive Neural Recording Based Brain-to-Brain Synchrony Analysis on Multidisciplinary Human Interactions for Understanding Brain Dynamics: A Systematic Review.
- Author
-
Nazneen, Tahnia, Islam, Iffath Binta, Sajal, Md. Sakibur Rahman, Jamal, Wasifa, Amin, M. Ashraful, Vaidyanathan, Ravi, Chau, Tom, and Mamun, Khondaker A.
- Subjects
SYNCHRONIC order, SOCIAL interaction, INTERDISCIPLINARY research, BRAIN-computer interfaces, WAVELET transforms, COGNITIVE neuroscience
- Abstract
The study of brain-to-brain synchrony has burgeoning applications in brain-computer interface (BCI) research, offering valuable insights into the neural underpinnings of interacting human brains using numerous neural recording technologies. The area allows exploring the commonality of brain dynamics by evaluating neural synchronization among a group of people performing a specified task. The growing number of publications on brain-to-brain synchrony inspired the authors to conduct a systematic review following the PRISMA protocol, so that future researchers can gain a comprehensive understanding of the paradigms, methodologies, translational algorithms, and challenges of brain-to-brain synchrony research. The review applied a systematic search with a specified search string and selected articles based on pre-specified eligibility criteria. The findings reveal that most of the articles followed a social psychology paradigm, while 36% of the selected studies have applications in cognitive neuroscience. The most widely applied approach to determining neural connectivity is a coherence measure utilizing the phase-locking value (PLV) in EEG studies, followed by wavelet transform coherence (WTC) in all of the fNIRS studies. While most of the experiments included control conditions in their setup, a small number implemented algorithmic control, and only one study used an interventional or stimulus-induced control experiment to limit spurious synchronization. To the best of the authors' knowledge, this systematic review critically evaluates the scope and technological advances of brain-to-brain synchrony so that this discipline can produce more effective research outcomes in the future.
- Published
- 2022
- Full Text
- View/download PDF
9. Revealing the Computational Meaning of Neocortical Interarea Signals
- Author
-
Hiroshi Yamakawa
- Subjects
Computer science, Neocortex, Thalamus, Corticocortical circuit, Core and matrix thalamocortical circuit, Predictive coding, Reinforcement learning, Emulation theory of representation, BDI logic, Biological constraints, Intention, Plan, Neuroscience
- Abstract
To understand the function of the neocortex, which is a hierarchical distributed network, it is useful to give computational meaning to the signals transmitted between its areas. The overall anatomical structure of the organs related to this network, including the neocortex, thalamus, and basal ganglia, has been roughly revealed, and much physiological knowledge, though often fragmentary, is being accumulated. Computational theories involving the neocortex have also been developed considerably. Introducing the assumption "the signals transmitted by interarea axonal projections of pyramidal cells in the neocortex carry different meanings for each cell type, common to all areas," derived from the neocortex's nature as a distributed network, allows us to specify the computational meanings of interarea signals. In this paper, the types of signals exchanged between neocortical areas are first investigated under biological constraints. Employing predictive coding, reinforcement learning, the emulation theory of representation, and BDI logic as theoretical starting points, two types of feedforward signals (observation and deviation) and three types of feedback signals (prediction, plan, and intention) are identified. Next, based on anatomical knowledge of the neocortex and thalamus, the pathways connecting the areas are organized and summarized as three corticocortical pathways and two thalamocortical pathways. Building on this summary, the paper proposes a hypothesis that gives meaning to each type of signal transmitted in the different pathways of the neocortex, from the viewpoint of their functions. The hypothesis posits that the feedforward corticocortical pathway transmits observation signals, the feedback corticocortical pathway transmits prediction signals, and the corticothalamic pathway mediated by core relay cells transmits deviation signals. The thalamocortical pathway mediated by matrix relay cells would be responsible for transmitting signals that activate a part of the prediction signals as intentions, because the other available feedback pathways are not suited to conveying plans and intentions as signals. The corticocortical pathway projecting from various IT cells to the first layer would be responsible for transmitting signals that activate a part of the prediction signals as plans.
- Published
- 2020
10. Data Augmentation for Brain-Tumor Segmentation: A Review
- Author
-
Jakub Nalepa, Michal Marcinkiewicz, and Michal Kawulok
- Subjects
Computer science, Machine learning, Deep learning, Deep neural networks, Data augmentation, Image segmentation, Brain tumor segmentation, MRI, Review, Neuroscience
- Abstract
Data augmentation is a popular technique which helps improve generalization capabilities of deep neural networks, and can be perceived as implicit regularization. It plays a pivotal role in scenarios in which the amount of high-quality ground-truth data is limited, and acquiring new examples is costly and time-consuming. This is a very common problem in medical image analysis, especially tumor delineation. In this paper, we review the current advances in data-augmentation techniques applied to magnetic resonance images of brain tumors. To better understand the practical aspects of such algorithms, we investigate the papers submitted to the Multimodal Brain Tumor Segmentation Challenge (BraTS 2018 edition), as the BraTS dataset became a standard benchmark for validating existing and emerging brain-tumor detection and segmentation techniques. We verify which data augmentation approaches were exploited and what their impact was on the abilities of underlying supervised learners. Finally, we highlight the most promising research directions to follow in order to synthesize high-quality artificial brain-tumor examples which can boost the generalization abilities of deep models.
- Published
- 2019
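The simplest affine augmentations applied to brain-MRI slices are flips and right-angle rotations. A framework-free sketch on a 2D slice stored as nested lists; this is illustrative only, since real pipelines operate on 3D volumes with interpolated (and often elastic) transforms.

```python
def horizontal_flip(slice2d):
    """Mirror a 2D image slice left-right."""
    return [row[::-1] for row in slice2d]

def rotate90(slice2d):
    """Rotate a 2D image slice 90 degrees clockwise."""
    return [list(row) for row in zip(*slice2d[::-1])]

def augment(slice2d):
    """Yield the identity plus simple flipped/rotated variants, the cheapest
    way to multiply a small labeled training set."""
    return [slice2d,
            horizontal_flip(slice2d),
            rotate90(slice2d),
            rotate90(rotate90(slice2d))]
```

For segmentation, the same transform must be applied to the image and its ground-truth mask in lockstep, otherwise labels and voxels go out of register.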
11. An Investigation of the Free Energy Principle for Emotion Recognition
- Author
-
Thomas Parr, Karl J. Friston, and Daphne Demekas
- Subjects
Computer science, Emotion recognition (ER), Active inference, Free energy principle, Markov blanket (MB), Theory of mind, Bayesian brain, Deep learning, Artificial intelligence, Generative model, Prosocial behavior, Lexicon, Inference, Review, Neuroscience
- Abstract
This paper offers a prospectus of what might be achievable in the development of emotion recognition devices. It provides a conceptual overview of the free energy principle, including Markov blankets, active inference, and, in particular, a discussion of selfhood and theory of mind, followed by a brief explanation of how these concepts can explain both neural and cultural models of emotional inference. The underlying hypothesis is that emotion recognition and inference devices will evolve from state-of-the-art deep learning models into active inference schemes that go beyond marketing applications and become adjuncts to psychiatric practice. Specifically, this paper proposes that a second wave of emotion recognition devices will be equipped with an emotional lexicon (or the ability to epistemically search for one), allowing the device to resolve uncertainty about emotional states by actively eliciting responses from the user and learning from these responses. Following this, a third wave of emotional devices will converge upon the user's generative model, resulting in the machine and human engaging in a reciprocal, prosocial emotional interaction, i.e., sharing a generative model of emotional states.
- Published
- 2019
12. Symmetry-Based Representations for Artificial and Biological General Intelligence.
- Author
-
Higgins, Irina, Racanière, Sébastien, and Rezende, Danilo
- Subjects
CONSERVED quantity, ARTIFICIAL intelligence, MACHINE learning, SWARM intelligence, SYMMETRY
- Abstract
Biological intelligence is remarkable in its ability to produce complex behavior in many diverse situations through data-efficient, generalizable, and transferable skill acquisition. It is believed that learning "good" sensory representations is important for enabling this; however, there is little agreement as to what a good representation should look like. In this review article we argue that symmetry transformations are a fundamental principle that can guide our search for what makes a good representation. The idea that there exist transformations (symmetries) that affect some aspects of a system but not others, and their relationship to conserved quantities, has become central in modern physics, resulting in a more unified theoretical framework and even the ability to predict the existence of new particles. Recently, symmetries have started to gain prominence in machine learning too, resulting in more data-efficient and generalizable algorithms that can mimic some of the complex behaviors produced by biological intelligence. Finally, first demonstrations of the importance of symmetry transformations for representation learning in the brain are starting to arise in neuroscience. Taken together, the overwhelmingly positive effect that symmetries bring to these disciplines suggests that they may be an important general framework that determines the structure of the universe, constrains the nature of natural tasks, and consequently shapes both biological and artificial intelligence.
- Published
- 2022
- Full Text
- View/download PDF
13. A Game-Theoretical Network Formation Model for C. elegans Neural Network
- Author
-
Mohamad Khajezade, Sama Goliaei, and Hadi Veisi
- Subjects
Game theory, Computer science, Complex network analysis, Network formation models, C. elegans neural network, C. elegans frontal neural network, Biological neural network, Artificial neural network, Computational neuroscience, Neuroscience
- Abstract
Studying and understanding the structure and function of the human brain is one of the most challenging problems in neuroscience today. However, the mammalian nervous system comprises hundreds of millions of neurons and billions of synapses, a complexity that makes reconstructing such a nervous system in the laboratory infeasible. Most researchers therefore focus on the C. elegans neural network, the only biological neural network that has been fully mapped and the simplest known nervous system; nevertheless, many fundamental behaviors, such as movement, emerge from this basic network. These features make C. elegans a convenient case for studying nervous systems. Many studies have proposed network formation models for the C. elegans neural network, but they fail to capture all of its characteristics, such as the significant factors that play a role in its formation, so new models are needed. In this paper, a new model based on game theory is proposed to identify the factors affecting the formation of nervous systems, matching the characteristics of the C. elegans frontal neural network. In this model, neurons are treated as agents whose strategies consist of making or removing links to other neurons. After choosing the basic network, a utility function is built from structural and functional factors, and linear programming is used to find the coefficients of each factor. Finally, the output network is compared with the C. elegans frontal neural network and with previous models. The results indicate that the proposed game-theoretical model better predicts the factors influencing the formation of the C. elegans neural network than previous models.
- Published
- 2019
- Full Text
- View/download PDF
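One hypothetical flavor of the utility a neuron-as-agent might optimize in such a game is the Jackson-Wolinsky "connections model," in which benefit decays with graph distance and each maintained link carries a cost. This is a generic illustration of a network-formation utility, not the structural-and-functional utility actually fitted in the paper.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph given as {node: set_of_neighbors}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def utility(adj, i, delta=0.5, cost=0.3):
    """Connections-model utility for agent i: benefit delta**d(i,j) from every
    reachable agent j, minus a per-link maintenance cost."""
    dist = shortest_paths(adj, i)
    benefit = sum(delta ** d for node, d in dist.items() if node != i)
    return benefit - cost * len(adj[i])
```

In a best-response dynamic, each agent would add or remove a link whenever doing so increases this utility, and the process is iterated until no agent wants to deviate.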
14. Compensation for Traveling Wave Delay Through Selection of Dendritic Delays Using Spike-Timing-Dependent Plasticity in a Model of the Auditory Brainstem
- Author
-
Martin J. Spencer, Hamish Meffin, Anthony N. Burkitt, and David B. Grayden
- Subjects
Computer science, Auditory brainstem, Cochlear nucleus, Octopus cells, Dendritic delay, Spike-timing-dependent plasticity, Synaptic plasticity, Learning rule, Monaural, Sensory system, Neuroscience
- Abstract
Asynchrony among synaptic inputs may prevent a neuron from responding to behaviorally relevant sensory stimuli. For example, "octopus cells" are monaural neurons in the auditory brainstem of mammals that receive input from auditory nerve fibers (ANFs) representing a broad band of sound frequencies. Octopus cells are known to respond with finely timed action potentials at the onset of sounds, despite the fact that, due to the traveling wave delay in the cochlea, synaptic input from the auditory nerve is temporally diffuse. This paper provides a proof of principle that the octopus cells' dendritic delay may compensate for this input asynchrony, and that synaptic weights may be adjusted by a spike-timing-dependent plasticity (STDP) learning rule. The study used a leaky integrate-and-fire model of an octopus cell, modified to include a "rate threshold," a property known to create the appropriate onset response in octopus cells. Repeated audio click stimuli were passed to a realistic auditory nerve model, which provided the synaptic input to the octopus cell model. A genetic algorithm was used to find the parameters of the STDP learning rule that reproduced the microscopically observed synaptic connectivity. With these parameter values, it was shown that the STDP learning rule was capable of adjusting a large number of input synaptic weights, creating a configuration that compensated for the traveling wave delay of the cochlea.
- Published
- 2018
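STDP learning rules of the kind tuned by the paper's genetic algorithm are typically built on an exponential pairing window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The parameter values below are illustrative defaults, not the ones found by the paper.

```python
import math

def stdp_delta_w(dt, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair, with dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses,
    each decaying exponentially with the spike-time difference."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

Applied over many repeated stimuli, inputs whose spikes reliably arrive just before the postsynaptic spike are strengthened, which is the mechanism by which the model selects dendritic delays that cancel the cochlear traveling wave delay.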
15. A Retinotopic Spiking Neural Network System for Accurate Recognition of Moving Objects Using NeuCube and Dynamic Vision Sensors
- Author
-
Lukas Paulun, Nikola Kasabov, and Anne Wendt
- Subjects
Computer science, NeuCube, Spiking neural networks (SNN), Dynamic vision sensor (DVS), MNIST-DVS, Retinotopy, Retinal ganglion, Visual cortex, Supervised learning, Unsupervised learning, Deep learning in SNN, Pattern recognition, Neuroscience
- Abstract
This paper introduces a new system for dynamic visual recognition that combines bio-inspired hardware with a brain-like spiking neural network. The system is designed to take data from a dynamic vision sensor (DVS), which simulates the functioning of the human retina by producing an address-event output (spike trains) based on the movement of objects. The system then convolves the spike trains and feeds them into a brain-like spiking neural network, called NeuCube, which is organized in a three-dimensional manner representing the organization of the primary visual cortex. Spatio-temporal patterns of the data are learned during a deep unsupervised learning stage using spike-timing-dependent plasticity. In a second stage, supervised learning is performed to train the network for classification tasks. The convolution algorithm and the mapping into the network mimic the function of retinal ganglion cells and the retinotopic organization of the visual cortex. The NeuCube architecture can be used to visualize the deep connectivity inside the network before, during, and after training, and thereby allows for a better understanding of the learning processes. The method was tested on the benchmark MNIST-DVS dataset and achieved a classification accuracy of 92.90%. The paper discusses advantages and limitations of the new method and concludes that it is worth exploring further on different datasets, aiming for advances in dynamic computer vision and multimodal systems that integrate visual, aural, tactile, and other kinds of information in a biologically plausible way.
- Published
- 2018
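Spiking networks like NeuCube are commonly built from leaky integrate-and-fire (LIF) units: the membrane potential leaks toward rest, integrates input, and emits a spike with a reset when it crosses threshold. A minimal discrete-time sketch with arbitrary illustrative parameters, not NeuCube's actual neuron model.

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron. Each step, v decays toward
    v_rest with time constant tau and adds the input sample directly; a spike
    time is recorded and v reset whenever v crosses v_thresh."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (v_rest - v) + i_in   # leak, then integrate input
        if v >= v_thresh:
            spike_times.append(t)
            v = v_rest                        # reset after the spike
    return spike_times
```

Driven by a constant input, the neuron fires periodically; the DVS spike trains described above would instead deliver sparse, event-driven input to thousands of such units arranged in NeuCube's 3D grid.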
16. Granular layEr Simulator: Design and Multi-GPU Simulation of the Cerebellar Granular Layer.
- Author
-
Florimbi, Giordana, Torti, Emanuele, Masoli, Stefano, D'Angelo, Egidio, and Leporati, Francesco
- Subjects
GRAPHICS processing units, HIGH performance computing, MATHEMATICAL models, CEREBELLAR cortex
- Abstract
In modern computational modeling, neuroscientists need to reproduce the long-lasting activity of large-scale networks in which neurons are described by highly complex mathematical models. These aspects greatly increase the computational load of the simulations, which can be performed efficiently by exploiting parallel systems to reduce processing times. Graphics Processing Unit (GPU) devices meet this need by providing desktop High Performance Computing. In this work, the authors describe a novel Granular layEr Simulator implemented on a multi-GPU system, capable of reconstructing the cerebellar granular layer in 3D space and reproducing its neuronal activity. The reconstruction features a high level of novelty and realism, considering axonal/dendritic field geometries oriented in 3D space and following convergence/divergence rates reported in the literature. Neurons are modeled using Hodgkin-Huxley representations. The network is validated by reproducing typical behaviors that are well documented in the literature, such as the center-surround organization. Reconstructing a network whose volume is 600 × 150 × 1,200 μm³, with 432,000 granules, 972 Golgi cells, 32,399 glomeruli, and 4,051 mossy fibers, takes 235 s on an Intel i9 processor. Reproducing 10 s of activity takes only 4.34 and 3.37 h on a single- and multi-GPU desktop system (with one or two NVIDIA RTX 2080 GPUs, respectively). Moreover, the code takes only 3.52 and 2.44 h if run on one or two NVIDIA V100 GPUs, respectively. The speedups reached (up to ~38× in the single-GPU version and ~55× in the multi-GPU version) clearly demonstrate that GPU technology is highly suitable for realistic large-network simulations.
- Published
- 2021
- Full Text
- View/download PDF
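The Hodgkin-Huxley neurons in the simulator above are naturally data-parallel: every neuron's state is advanced by the same arithmetic, which is what makes GPU (and vectorized CPU) execution effective. A minimal vectorized sketch in NumPy, using the standard Hodgkin-Huxley parameters rather than the simulator's actual code:

```python
import numpy as np

def hh_step(V, m, h, n, I, dt):
    # Standard Hodgkin-Huxley rate functions (V in mV, t in ms)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Ionic currents (uA/cm^2), classic conductances and reversal potentials
    INa = 120.0 * m**3 * h * (V - 50.0)
    IK = 36.0 * n**4 * (V + 77.0)
    IL = 0.3 * (V + 54.4)
    # Forward-Euler update, vectorized over all neurons at once (C_m = 1)
    return (V + dt * (I - INa - IK - IL),
            m + dt * (am * (1.0 - m) - bm * m),
            h + dt * (ah * (1.0 - h) - bh * h),
            n + dt * (an * (1.0 - n) - bn * n))

N = 1000                                  # neurons advanced in lock-step
V = np.full(N, -65.0)                     # resting potential
m, h, n = np.full(N, 0.05), np.full(N, 0.6), np.full(N, 0.32)
I = np.full(N, 10.0)                      # drive strong enough to elicit spikes
dt, steps = 0.01, 5000                    # 50 ms of activity
vmax = -np.inf
for _ in range(steps):
    V, m, h, n = hh_step(V, m, h, n, I, dt)
    vmax = max(vmax, float(V.max()))
```

The same update applied to every neuron with no cross-neuron data dependence is exactly the pattern a GPU kernel exploits; a real simulator would add synaptic coupling and a more careful integrator.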
17. Signal Transmission of Biological Reaction-Diffusion System by Using Synchronization
- Author
-
Lingli Zhou and Jianwei Shen
- Subjects
0301 basic medicine ,Correctness ,Computer science ,Vesicle docking ,structure adaptation ,Neuroscience (miscellaneous) ,02 engineering and technology ,reaction-diffusion system ,Synchronization ,lcsh:RC321-571 ,random walk ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0202 electrical engineering, electronic engineering, information engineering ,Diffusion (business) ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Simulation ,Original Research ,signal transmission ,020208 electrical & electronic engineering ,Process (computing) ,Random walk ,Langevin equation ,030104 developmental biology ,Transmission (telecommunications) ,diffusion coupling ,Biological system ,synchronization ,Neuroscience - Abstract
Molecular signal transmission within cells is crucial for information exchange, and understanding its mechanism has attracted many researchers. In this paper, we prove that the signal transmission problem between neural tumor molecules and drug molecules can be addressed by synchronous control. To this end, we first derive the Fokker-Planck equation from the Langevin equation and the theory of random walks; this model expresses the change in concentration of neural tumor molecules. Second, based on the biological observation that vesicles in the cell can fuse with the cell membrane to release their cargo, which serves as signal transmission, we preliminarily analyze the mechanism of tumor-drug molecular interaction. Third, we propose the view of synchronous control, namely that the process of vesicles docking with their target membrane is a synchronization process, and that precise treatment of disease can be achieved by synchronous control. We believe this synchronous control mechanism is reasonable, and two examples are given to illustrate the correctness of our results.
- Published
- 2017
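The derivation route sketched in the abstract above (a Langevin equation plus random-walk theory yielding a Fokker-Planck equation) follows a standard pattern; a minimal one-dimensional instance, not necessarily the authors' exact model, is:

```latex
% Overdamped Langevin dynamics for a concentration-like variable x(t),
% with drift f(x), diffusion coefficient D, and white noise xi(t):
\frac{dx}{dt} = f(x) + \sqrt{2D}\,\xi(t),
\qquad \langle \xi(t)\,\xi(t') \rangle = \delta(t - t')

% The probability density p(x,t) of x then obeys the Fokker-Planck equation:
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[f(x)\,p(x,t)\bigr]
  + D\,\frac{\partial^{2} p(x,t)}{\partial x^{2}}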
18. Eliminating Absence Seizures through the Deep Brain Stimulation to Thalamus Reticular Nucleus
- Author
-
Qingyun Wang and Zhihui Wang
- Subjects
0301 basic medicine ,Deep brain stimulation ,Typical absence ,medicine.medical_treatment ,Thalamus ,Neuroscience (miscellaneous) ,absence seizures ,Stimulation ,mean-field model ,GABAB receptor ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,Epilepsy ,0302 clinical medicine ,medicine ,Original Research ,dynamical transition ,medicine.disease ,deep brain stimulation ,030104 developmental biology ,medicine.anatomical_structure ,Reticular connective tissue ,spike and slow-wave discharges ,Psychology ,Neuroscience ,Nucleus ,030217 neurology & neurosurgery - Abstract
Deep brain stimulation (DBS) can play a crucial role in the modulation of absence seizures, yet the relevant biophysical mechanisms are not completely established. In this paper, on the basis of a biophysical mean-field model, we investigate typical absence epilepsy activity by introducing slow kinetics of GABAB receptors in the thalamic reticular nucleus (TRN). We find that the region of spike and slow-wave discharges (SWDs) can be reduced greatly when DBS is applied to the TRN. Furthermore, we systematically explore how the corresponding stimulation parameters, including frequency, amplitude, and positive input duration, suppress the SWDs under certain conditions. It is shown that the SWDs can be controlled when key stimulation parameters are suitably chosen. The results in this paper can help researchers understand thalamic stimulation in treating epilepsy patients, and provide a theoretical basis for future experimental and clinical studies.
- Published
- 2017
- Full Text
- View/download PDF
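The stimulation parameters the abstract varies (frequency, amplitude, and positive input duration) can be captured by a periodic rectangular input. A sketch of such a waveform generator, as an idealization rather than the authors' actual model input:

```python
import numpy as np

def dbs_waveform(t, freq_hz, amplitude, pulse_width_s):
    # Rectangular pulse train: `amplitude` during the positive phase of each
    # cycle (the "positive input duration"), zero otherwise.
    phase = np.mod(t, 1.0 / freq_hz)
    return np.where(phase < pulse_width_s, amplitude, 0.0)

t = np.arange(0.0, 1.0, 1e-4)             # 1 s of input at 0.1 ms resolution
u = dbs_waveform(t, freq_hz=130, amplitude=2.0, pulse_width_s=1e-3)
duty = u.mean() / 2.0                     # fraction of time the input is on
```

Sweeping `freq_hz`, `amplitude`, and `pulse_width_s` while feeding `u` into a mean-field model is the kind of parameter exploration the abstract describes.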
19. Emergence of Stable Synaptic Clusters on Dendrites Through Synaptic Rewiring.
- Author
-
Limbacher, Thomas and Legenstein, Robert
- Subjects
RECOLLECTION (Psychology) ,PYRAMIDAL neurons ,NEUROPLASTICITY ,COMPUTER simulation ,DENDRITES ,ENTORHINAL cortex - Abstract
The connectivity structure of neuronal networks in cortex is highly dynamic. This ongoing cortical rewiring is assumed to serve important functions for learning and memory. We analyze in this article a model for the self-organization of synaptic inputs onto dendritic branches of pyramidal cells. The model combines a generic stochastic rewiring principle with a simple synaptic plasticity rule that depends on local dendritic activity. In computer simulations, we find that this synaptic rewiring model leads to synaptic clustering, that is, temporally correlated inputs become locally clustered on dendritic branches. This empirical finding is backed up by a theoretical analysis which shows that rewiring in our model favors network configurations with synaptic clustering. We propose that synaptic clustering plays an important role in the organization of computation and memory in cortical circuits: we find that synaptic clustering through the proposed rewiring mechanism can serve as a mechanism to protect memories from subsequent modifications on a medium time scale. Rewiring of synaptic connections onto specific dendritic branches may thus counteract the general problem of catastrophic forgetting in neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
20. The Neuroscience of Spatial Navigation and the Relationship to Artificial Intelligence.
- Author
-
Bermudez-Contreras, Edgar, Clark, Benjamin J., and Wilber, Aaron
- Subjects
NEUROSCIENCES ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,REINFORCEMENT learning ,BIG data ,COMPUTER software ,AUTOMOTIVE navigation systems ,BRAIN-computer interfaces - Abstract
Recent advances in artificial intelligence (AI) and neuroscience are impressive. In AI, this includes the development of computer programs that can beat a grandmaster at GO or outperform human radiologists at cancer detection. Many of these technological developments are directly related to progress in artificial neural networks, initially inspired by our knowledge about how the brain carries out computation. In parallel, neuroscience has also experienced significant advances in understanding the brain. For example, in the field of spatial navigation, work on the mechanisms and brain regions involved in neural computations of cognitive maps (an internal representation of space) recently received the Nobel Prize in Medicine. Much of the recent progress in neuroscience has been partly due to the development of technology used to record from very large populations of neurons in multiple regions of the brain, with exquisite temporal and spatial resolution, in behaving animals. With the advent of the vast quantities of data that these techniques allow us to collect, there has been increased interest in the intersection between AI and neuroscience; many of these intersections involve using AI as a novel tool to explore and analyze these large data sets. However, given the common initial motivation (to understand the brain), these disciplines could be more strongly linked. Currently, much of this potential synergy is not being realized. We propose that spatial navigation is an excellent area in which these two disciplines can converge to help advance what we know about the brain. In this review, we first summarize progress in the neuroscience of spatial navigation and reinforcement learning. We then turn our attention to discuss how spatial navigation has been modeled using descriptive, mechanistic, and normative approaches, and the use of AI in such models.
Next, we discuss how AI can advance neuroscience, how neuroscience can advance AI, and the limitations of these approaches. We finally conclude by highlighting promising lines of research in which spatial navigation can be the point of intersection between neuroscience and AI and how this can contribute to the advancement of the understanding of intelligent behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
21. Integrating Computational and Neural Findings in Visual Object Perception
- Author
-
Hans Op de Beeck, Rainer Goebel, Judith C. Peters, Vision, RS: FPN CN 1, and Netherlands Institute for Neuroscience (NIN)
- Subjects
0301 basic medicine ,Computer science ,Computer Vision ,Neuroscience (miscellaneous) ,Machine learning ,computer.software_genre ,Field (computer science) ,object recognition ,Task (project management) ,lcsh:RC321-571 ,Visual processing ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,invariance ,Set (psychology) ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,business.industry ,fMRI ,Cognitive neuroscience of visual object recognition ,Object (computer science) ,Feature representation ,Editorial ,030104 developmental biology ,ventral visual pathway ,Artificial intelligence ,business ,Neural coding ,Heuristics ,computer ,030217 neurology & neurosurgery ,Neuroscience - Abstract
Recognizing objects despite infinite variations in their appearance is a highly challenging computational task that the visual system performs in a remarkably fast, accurate, and robust fashion. The complexity of the underlying mechanisms is reflected in the large proportion of cortical real estate dedicated to visual processing, as well as in the difficulties encountered when trying to build models whose performance matches human proficiency. The articles in this Research Topic provide an overview of recent advances in our understanding of the neural mechanisms underlying visual object perception, focusing on integrative approaches which encompass both computational and empirical work. Given the vast expanse of topics covered in the discipline of computational visual neuroscience, it is impossible to provide a comprehensive overview of the field's status quo. Instead, the presented papers highlight interesting extensions to existing models and novel insights into computational principles and their neural underpinnings. The contributions can be coarsely subdivided into three sections: Two papers focused on implementing biologically valid learning rules and heuristics in well-established neural models of the visual pathway (i.e., "VisNet" and "HMAX") to improve flexible object recognition. Three other studies investigated the role of sparseness, selectivity, and correlation in optimizing neural coding of object features. Finally, another set of contributions focused on integrating computational vision models and human brain responses to gain more insight into the computational mechanisms underlying neural object representations.
- Published
- 2016
22. Diversity priors for learning early visual features
- Author
-
Sandor Szedmak, Hanchen Xiong, Justus Piater, and Antonio Jose Rodríguez-Sánchez
- Subjects
Computer science ,Markov networks ,Population ,Neuroscience (miscellaneous) ,Boltzmann machine ,Machine learning ,computer.software_genre ,diversity prior ,lcsh:RC321-571 ,Cellular and Molecular Neuroscience ,Similarity (psychology) ,Prior probability ,education ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,V1 simple cell ,Restricted Boltzmann machine ,education.field_of_study ,Artificial neural network ,business.industry ,inhibition ,Conditional independence ,Receptive field ,Artificial intelligence ,business ,computer ,restricted Boltzmann machine ,Neuroscience - Abstract
This paper investigates how diversity priors can be used to discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of the activations of visual neurons in area V1. Most previous work on computational modeling emphasizes selectivity or sparsity independently. However, we argue that selectivity and sparsity are just two epiphenomena of the diversity of receptive fields, which has rarely been exploited in learning. To verify this hypothesis, restricted Boltzmann machines (RBMs) are employed to learn early visual features by modeling the statistics of natural images. Considering RBMs as neural networks, the receptive fields of neurons are formed by the weights between hidden and visible nodes. Due to the conditional independence in RBMs, there is no mechanism to coordinate the activations of individual neurons or of the whole population. A diversity prior is therefore introduced for training RBMs. We find that the diversity prior can indeed ensure sparsity and selectivity of neuron activations simultaneously. The learned receptive fields yield a high degree of biological similarity in comparison to physiological data, and the corresponding visual features display a good generative capability in image reconstruction.
- Published
- 2015
- Full Text
- View/download PDF
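One way to realize a diversity prior in RBM training is to add a penalty on the pairwise inner products of the receptive fields (the rows of the weight matrix) to the contrastive-divergence gradient. The NumPy sketch below is one such choice under assumptions of our own (toy binary data, CD-1, a quadratic overlap penalty); it is not the paper's exact prior:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lam, lr = 36, 16, 0.1, 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary "image patches" and a small RBM (rows of W = receptive fields)
data = (rng.random((200, n_vis)) < 0.3).astype(float)
W = 0.01 * rng.standard_normal((n_hid, n_vis))

for _ in range(50):                       # CD-1 training with a diversity prior
    h0 = sigmoid(data @ W.T)              # hidden probabilities given data
    hs = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(hs @ W)                  # one-step reconstruction
    h1 = sigmoid(v1 @ W.T)
    grad = (h0.T @ data - h1.T @ v1) / len(data)
    # Diversity penalty: subtract a term proportional to the gradient of
    # sum_{i != j} (w_i . w_j)^2, pushing receptive fields apart
    G = W @ W.T
    np.fill_diagonal(G, 0.0)
    grad -= lam * (G @ W)
    W += lr * grad
```

The penalty only discourages overlapping receptive fields; sparsity and selectivity of the hidden activations then emerge indirectly, which is the effect the abstract argues for.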
23. Optimum neural tuning curves for information efficiency with rate coding and finite-time window
- Author
-
Xiaojuan Sun, Zhijie Wang, Hong Fan, and Fang Han
- Subjects
neural tuning curve ,Quantitative Biology::Neurons and Cognition ,Computer science ,finite-time window ,Neuroscience (miscellaneous) ,Energy consumption ,Mutual information ,Stimulus (physiology) ,computer.software_genre ,logistic function ,Information efficiency ,lcsh:RC321-571 ,Normal distribution ,Cellular and Molecular Neuroscience ,Entropy (information theory) ,information efficiency ,Data mining ,Logistic function ,Neural coding ,rate coding ,Algorithm ,computer ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,Neuroscience - Abstract
An important question for neural encoding is what kind of neural system can convey more information with less energy within a finite coding time window. This paper first proposes a finite-time neural encoding system in which the neurons respond to a stimulus with a sequence of spikes assumed to be a Poisson process, and the external stimuli follow a normal distribution. A method for calculating the mutual information of the finite-time neural encoding system is proposed, and the definition of information efficiency is introduced. The values of the mutual information and the information efficiency obtained using the logistic function are compared with those obtained using other functions, and the logistic function is found to be the best. It is further found that the parameter representing the steepness of the logistic function is closely related to the full entropy, and that the parameter representing the translation of the function is tightly associated with the energy consumption and noise entropy. The optimum parameter combinations for the logistic function to maximize the information efficiency are calculated as the stimuli and the properties of the encoding system are varied, and some explanations for the results are given. The model and method proposed here could be useful for studying neural encoding systems, and the optimum neural tuning curves obtained in this paper may exhibit some characteristics of real neural systems.
- Published
- 2015
- Full Text
- View/download PDF
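The mutual information of such a finite-time Poisson encoder with a logistic tuning curve can be computed by discretizing the Gaussian stimulus and summing over spike counts. A sketch (the grid sizes, maximum rate, and window length are illustrative choices, not the paper's values):

```python
import numpy as np

def mutual_info(k, s0, rmax=50.0, T=0.2, nmax=60):
    # Gaussian stimulus discretized on a grid of values s with weights ps
    s = np.linspace(-4.0, 4.0, 401)
    ps = np.exp(-s**2 / 2.0)
    ps /= ps.sum()
    # Logistic tuning curve -> expected spike count in the time window T
    mu = rmax * T / (1.0 + np.exp(-k * (s - s0)))
    n = np.arange(nmax)
    logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, nmax)))))
    # Poisson p(n|s) on the (stimulus, count) grid
    pns = np.exp(n * np.log(mu)[:, None] - mu[:, None] - logfact)
    pn = ps @ pns                                   # marginal p(n)
    H_n = -np.sum(pn * np.log2(pn + 1e-300))        # full response entropy
    H_ns = -np.sum(ps[:, None] * pns * np.log2(pns + 1e-300))  # noise entropy
    return H_n - H_ns                               # mutual information (bits)

mi = mutual_info(k=2.0, s0=0.0)   # k = steepness, s0 = translation parameter
```

Sweeping `k` and `s0` and dividing by an energy cost per spike gives an information-efficiency surface of the kind the paper optimizes.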
24. Synchronization dynamics and evidence for a repertoire of network states in resting EEG
- Author
-
Michal Hadrava and Jaroslav Hlinka
- Subjects
Microstates ,Stationary process ,Computer science ,Neuroscience (miscellaneous) ,Context (language use) ,lcsh:RC321-571 ,Cellular and Molecular Neuroscience ,resting-state ,EEG ,Spurious relationship ,Cluster analysis ,white noise ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,General Commentary Article ,Independence (probability theory) ,nonstationary connectivity ,Stochastic process ,business.industry ,White noise ,dynamics ,Noise ,networks ,stationarity ,Artificial intelligence ,business ,Algorithm ,Neuroscience - Abstract
The general idea of nonstationarity of brain activity, or dependence of the dynamics on some potentially unobserved, temporally changing or fluctuating parameter, has long been familiar in the neuroscience community in contexts such as sleep dynamics or epileptology. Recently, however, it has been attracting increasing attention in the context of functional brain network analysis. This seems a natural development of the field: once functional connectivity as computed under the simplifying stationarity assumption has been well established, it is only logical to try to detect changes in brain functional connectivity over time. In general, detecting such nonstationarities in a reliable fashion is a methodologically challenging task, as changes in estimates of functional connectivity over time may also be due to random fluctuations rather than genuine changes of the process. There is a wide array of approaches to studying such nonstationarities documented in the literature (Hutchison et al., 2013), and an important but often neglected general methodological step is assessing the results against an appropriate null model corresponding to a stationary process. In the following, we give an illustrative example of how a typical nonstationarity analysis can generate spurious signs of nonstationary dynamics even when applied to a stationary process. To show that this is not a purely theoretical issue, we closely follow the analysis procedure used in a recently published study by Betzel et al. (2012). We note that this particular paper caught our attention by coincidence, while we believe the issue is pertinent to a substantial fraction of the literature. In their paper, Betzel et al. (2012) deal with characterizing the dynamics of brain activity measured by EEG. In particular, Betzel et al.
report the detection of rapid transitions between intermittently stable states, explicitly saying that "As predicted, fast (~100 ms) dynamics of whole-brain synchronization were observed during resting-state EEG," documenting the typical fast (~100 ms) time scale of these states in Figure 6B of their paper (see also Figures 4, 5). Their argument is based on the following data-processing scheme: First, for each time point of filtered EEG data, a functional connectivity matrix is computed using pairwise synchronization likelihood values, and the time points are clustered based on the similarity of the corresponding functional connectivity matrices. Next, contiguous stretches of time points that are members of the same cluster are interpreted as corresponding to the duration of an atomic brain state. Finally, the brain-state-representing functional connectivity matrices are pooled across subjects and clustered based on their similarity to define higher-order states. Notably, the procedure applied by Betzel et al. is principally data-driven rather than relying on model testing or assumptions, and it includes band-pass filtering and sliding-window-like analysis. We therefore conjectured that the temporal structure of the observed functional connectivity dynamics might have been crucially affected by the procedure itself (as the authors tentatively admitted in their discussion, although they unfortunately did not test the results against stationary model data). To explore the viability of this alternative explanation, we applied a processing pipeline built according to the description given in the original manuscript to model data consisting of 100 samples (each of length T = 2500 time points, representing a mock 5 s epoch of EEG data) of a multivariate (N = 20) white noise process.
The applied processing steps included frequency filtering (using elliptic filters corresponding to the four specified frequency bands; we applied zero-phase digital filtering by processing the input data in both the forward and reverse directions) and subsequent computation of the synchronization likelihood (Stam and van Dijk, 2002). The parameters of the synchronization likelihood l, m, w1, w2, nrec were set for each frequency band as in Betzel et al. (2012). The resulting functional connectivity matrices were clustered using the standard k-means clustering method (Lloyd, 1982). Figure 1 shows that the typical duration of the detected states closely corresponds to the distributions observed in the original paper (compare with Figures 6B, 4A,B in Betzel et al., 2012). In particular, the typical timescale is on the order of tens to hundreds of ms. This time scale also depends on the selected filtering in the same way as in the original work, with the time scales of the beta and theta bands markedly shorter and longer, respectively, than those of the broadband and alpha bands, the latter two being relatively close to each other.
Figure 1: Temporal dynamics of synchronization likelihood (SL) networks generated from realizations of stationary processes: white noise (A,C,E) and correlated noise [linear stationary (FFT) surrogates from EEG data] (B,D,F).
Even though the spatially and temporally independent (white) noise model used here is clearly not a realistic model for EEG data, such a simplistic stationary model reproduces the clustering time scales of the original paper with surprising accuracy. Of course, due to the spatial independence of the processes, it does not reproduce the spatiotemporal patterns corresponding to Figures 4A,B in Betzel et al. (2012).
We further repeated the procedure using multivariate Fourier transform surrogates generated from a single segment of EEG data (for more details on the data, see Horacek et al., 2010; such surrogates correspond to realizations of a linear stationary process with conserved auto- and cross-correlation structure, see Prichard and Theiler, 1994). The results are shown in the right column of Figure 1. Moving from white noise to EEG surrogates, the time scales of the observed clustering hardly changed. However, as expected due to the introduced spatial dependence, the EEG surrogates now show a patchy spatiotemporal pattern (Figures 1D,F) more closely corresponding to those in the original paper. The similarity of the spatiotemporal patterns is of course only qualitative; a range of differences may have arisen from a combination of different acquisition parameters as well as intra- and inter-individual variability. Note that we applied the basic k-means clustering method instead of the evolutionary-clustering algorithm from the original paper; the procedure is described in insufficient detail in the original paper to allow reproducing it. The value k = 3 was chosen for displaying the clustering results; however, the results proved quite insensitive to the choice of k. Our numerical simulation above focused particularly on the observed time scales of the network states as obtained with the described analysis approach. One could indeed ask further what evidence regarding a "repertoire of states" can be provided by the detection of clusters per se, and whether the detection of (some) clusters could be merely a consequence of running a clustering algorithm. For k-means clustering, the answer is obvious. Even for more complex approaches without a fixed number of clusters, such as the approach of Betzel et al.
(2012), we conjecture that a repertoire could be observed even for a stationary process; however, this depends on the details of the applied analysis approach. In summary, we aimed to illustrate the proposition that spurious nonstationarity, manifesting itself as alternation of network states, may appear due to methodological issues even in stationary processes such as white noise. In our example, we showed that, for instance, the observation of clustering of time points (more precisely, temporal windows) into consecutive clusters ("states") of duration on the order of several hundred milliseconds (the time scale of putative brain microstates) can be reproduced by white noise in remarkable detail. Of course, this does not disprove the existence of such states; it just suggests the evidence may not be sufficient. From a wider perspective, one could see a parallel here with other examples of data-analysis approaches that may lead to spurious observation of intriguing structures due to intrinsic bias of the methods, such as apparent signs of chaos in stochastic processes with power-law spectra (Osborne and Provenzale, 1989) or small-world properties of functional connectivity graphs (Hlinka et al., 2012). Or, from an experimental point of view, with the role that measurement artifacts, such as those due to head motion, might play in observed network properties (Hlinka et al., 2010; van Dijk et al., 2012).
- Published
- 2015
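The stationary null-model test advocated in the commentary above can be sketched end to end: generate FFT surrogates, compute sliding-window connectivity, and cluster the windows with plain k-means. Correlation is used below in place of synchronization likelihood, purely for brevity, and the window and step sizes are illustrative:

```python
import numpy as np

def fft_surrogate(x, rng):
    # Phase-randomized (FFT) surrogate: preserves the power spectrum (hence
    # the autocorrelation) while being a realization of a stationary linear
    # process (Prichard and Theiler, 1994).
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0
    if x.size % 2 == 0:
        phases[-1] = 0.0              # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(1)
T, N, win, k = 2500, 20, 100, 3       # mock 5 s, 20-channel epoch
data = np.vstack([fft_surrogate(rng.standard_normal(T), rng) for _ in range(N)])

# One "connectivity" matrix per sliding window, vectorized as upper triangle
feats = np.array([np.corrcoef(data[:, t0:t0 + win])[np.triu_indices(N, 1)]
                  for t0 in range(0, T - win, win // 2)])

# Plain Lloyd k-means on the window-wise matrices
cent = feats[rng.choice(len(feats), k, replace=False)]
for _ in range(25):
    lab = ((feats[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
    cent = np.array([feats[lab == j].mean(0) if np.any(lab == j) else cent[j]
                     for j in range(k)])

# Contiguous runs of one label mimic "states" even for this stationary input
runs = np.diff(np.flatnonzero(np.r_[True, np.diff(lab) != 0, True]))
```

The run-length distribution of `runs` is the quantity whose time scale the commentary compares against the original paper's reported state durations.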
25. Sparsey™: event recognition via deep hierarchical sparse distributed codes
- Author
-
Gerard J. Rinkus
- Subjects
FOS: Computer and information sciences ,Theoretical computer science ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,sparse distributed codes ,Computer Science - Computer Vision and Pattern Recognition ,Neuroscience (miscellaneous) ,Inference ,time warp invariance ,cortical hierarchy ,Hierarchical database model ,Cellular and Molecular Neuroscience ,medicine ,Neural and Evolutionary Computing (cs.NE) ,Original Research Article ,Invariant (mathematics) ,critical periods ,Computer Science - Neural and Evolutionary Computing ,event recognition ,deep learning ,Content-addressable memory ,sequence recognition ,Visual cortex ,medicine.anatomical_structure ,Quantitative Biology - Neurons and Cognition ,FOS: Biological sciences ,Spare part ,Scalability ,Neurons and Cognition (q-bio.NC) ,Coding (social sciences) ,Neuroscience - Abstract
Visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale, more complex spatiotemporal features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field, which we equate with the cortical macrocolumn (mac), at each level. In localism, each represented feature/event (item) is coded by a single unit. Our model, Sparsey, is also hierarchical but crucially, uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. SDCs of different items can overlap and the size of overlap between items can represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge datasets. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. 
Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles such as progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, and a novel method of time-warp-invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
- Comment: This is a manuscript form of a paper published in Frontiers in Computational Neuroscience in 2014 (http://dx.doi.org/10.3389/fncom.2014.00160). 65 pages, 28 figures, 8 tables
- Published
- 2014
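The key property of sparse distributed codes described above, namely that the overlap between codes represents item similarity, can be illustrated directly; the mac size and code size below are arbitrary illustrative values, not Sparsey's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 200, 12            # a "mac" with M units; each code activates K of them

def sdc(rng):
    # A sparse distributed code: a small random subset of the mac's units
    return rng.choice(M, K, replace=False)

def similarity(a, b):
    # Normalized overlap: 1.0 = identical item, near 0 = unrelated items
    return len(np.intersect1d(a, b)) / K

a = sdc(rng)
b = a.copy()
b[:3] = [u for u in range(M) if u not in a][:3]   # a slightly perturbed item
c = sdc(rng)                                      # an unrelated item
sim_ab, sim_ac = similarity(a, b), similarity(a, c)
```

Because every stored item is a K-subset of the same M units, comparing a new input against all stored codes is a fixed amount of work per mac regardless of how many items have been stored, which is the fixed-time storage/retrieval property the abstract emphasizes.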
26. Deep Learning for Autism Diagnosis and Facial Analysis in Children
- Author
-
Mohammad-Parsa Hosseini, Madison Beary, Alex Hadsell, Ryan Messersmith, and Hamid Soltanian-Zadeh
- Subjects
Cellular and Molecular Neuroscience ,children ,diagnosis ,Neuroscience (miscellaneous) ,deep learning ,autism ,facial image analysis ,Neurosciences. Biological psychiatry. Neuropsychiatry ,Neuroscience ,Original Research ,RC321-571 - Abstract
In this paper, we introduce a deep learning model that classifies children as either healthy or potentially having autism with 94.6% accuracy. Patients with autism struggle with social skills, repetitive behaviors, and both verbal and non-verbal communication. Although the disease is considered to be genetic, the highest rates of accurate diagnosis occur when the child is tested on behavioral characteristics and facial features. Patients have a common pattern of distinct facial deformities, allowing researchers to analyze only an image of the child to determine whether the child has the disease. While there are other techniques and models used for facial analysis and autism classification on their own, our proposal bridges these two ideas, allowing classification in a cheaper, more efficient way. Our deep learning model uses MobileNet and two dense layers to perform feature extraction and image classification. The model is trained and tested using 3,014 images, evenly split between children with autism and children without it; 90% of the data is used for training and 10% for testing. Based on our accuracy, we propose that autism can be diagnosed effectively using only a picture. Additionally, there may be other diseases that are similarly diagnosable.
- Published
- 2022
27. Image Quality Evaluation of Light Field Image Based on Macro-Pixels and Focus Stack
- Author
-
Chunli Meng, Ping An, Xinpeng Huang, Chao Yang, and Yilei Chen
- Subjects
macro-pixels ,Cellular and Molecular Neuroscience ,Computer Science::Computer Vision and Pattern Recognition ,corner ,Neuroscience (miscellaneous) ,light field ,objective image quality assessment ,focus stack ,Neurosciences. Biological psychiatry. Neuropsychiatry ,Neuroscience ,Original Research ,RC321-571 - Abstract
Due to its complex angular-spatial structure, light field (LF) image processing faces more opportunities and challenges than ordinary image processing. The angular-spatial structure loss of LF images can be reflected in their various representations. The angular and spatial information penetrate each other, so it is necessary to extract appropriate features to analyze the angular-spatial structure loss of distorted LF images. In this paper, a LF image quality evaluation model, namely MPFS, is proposed based on the prediction of global angular-spatial distortion of macro-pixels and the evaluation of local angular-spatial quality of the focus stack. Specifically, the angular distortion of the LF image is first evaluated through the luminance and chrominance of macro-pixels. Then, we use the saliency of the spatial texture structure to pool an array of predicted values of angular distortion into a predicted value of global distortion. Second, the local angular-spatial quality of the LF image is analyzed through the principal components of the focus stack. The focalizing structure damage caused by the angular-spatial distortion is calculated using the features of corner and texture structures. Finally, the global and local angular-spatial quality evaluation models are combined to evaluate the overall quality of the LF image. Extensive comparative experiments show that the proposed method has high efficiency and precision.
- Published
- 2022
28. Free-Energy Model of Emotion Potential: Modeling Arousal Potential as Information Content Induced by Complexity and Novelty
- Author
-
Hideyoshi Yanagisawa
- Subjects
Computer science ,Neuroscience (miscellaneous) ,emotion ,Neurosciences. Biological psychiatry. Neuropsychiatry ,Stimulus (physiology) ,Machine learning ,computer.software_genre ,Arousal ,Cellular and Molecular Neuroscience ,Bayes' theorem ,arousal ,Hypothesis and Theory ,gaussian generative models ,uncertainty ,bayes ,business.industry ,Work (physics) ,Novelty ,Variance (accounting) ,free energy ,Generative model ,Artificial intelligence ,business ,computer ,Energy (signal processing) ,Neuroscience ,RC321-571 - Abstract
Appropriate levels of arousal potential induce hedonic responses (i.e., emotional valence). However, the relationship between arousal potential and its factors (e.g., novelty, complexity, and uncertainty) has not been formalized. This paper proposes a mathematical model that explains emotional arousal using minimized free energy to represent the information content processed in the brain after sensory stimuli are perceived and recognized (i.e., sensory surprisal). This work mathematically demonstrates that sensory surprisal represents the summation of information from novelty and uncertainty, and that the uncertainty converges to perceived complexity with sufficient sampling from a stimulus source. Novelty, uncertainty, and complexity all act as collative properties that form arousal potential. Analysis using a Gaussian generative model shows that the free energy forms a quadratic function of the prediction error, i.e., the difference between the prior expectation and the peak of the likelihood. The model predicts two interaction effects on free energy: between prediction error and prior uncertainty (i.e., prior variance), and between prediction error and sensory variance. The potential of free energy as a mathematical principle for explaining emotion initiators is discussed. The model provides a general mathematical framework for understanding and predicting the emotions caused by novelty, uncertainty, and complexity. The mathematical model of arousal can help predict acceptable novelty and complexity for a target population under different uncertainty levels mitigated by prior knowledge and experience.
- Published
- 2021
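The quadratic dependence of free energy on prediction error under a Gaussian generative model can be illustrated with the marginal surprisal -log p(x); this is a simplified stand-in for the paper's free-energy derivation, with all names ours:

```python
import math

def sensory_surprisal(x, prior_mean, prior_var, sensory_var):
    """Information content -log p(x) of a stimulus x under a Gaussian
    generative model: a prior N(prior_mean, prior_var) over the hidden
    cause plus Gaussian sensory noise with variance sensory_var.
    Quadratic in the prediction error (x - prior_mean), and modulated by
    both prior and sensory variance, mirroring the paper's two
    interaction effects."""
    total_var = prior_var + sensory_var   # predictive variance
    err = x - prior_mean                  # prediction error
    return 0.5 * math.log(2 * math.pi * total_var) + err ** 2 / (2 * total_var)
```

A larger prediction error raises surprisal (arousal potential), while larger prior uncertainty flattens the quadratic, so the same error is less surprising under a vaguer prior.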
29. Discriminative Dictionary Learning for Autism Spectrum Disorder Identification
- Author
-
Wenbo Liu, Ming Li, Xiaobing Zou, and Bhiksha Raj
- Subjects
Computer science ,Neuroscience (miscellaneous) ,autism spectrum disorder ,Neurosciences. Biological psychiatry. Neuropsychiatry ,mode seeking ,Interpersonal communication ,computer.software_genre ,Cellular and Molecular Neuroscience ,Discriminative model ,mental disorders ,Methods ,medicine ,business.industry ,medicine.disease ,Class (biology) ,Identification (information) ,machine learning ,Autism spectrum disorder ,Eye tracking ,Autism ,discriminative dictionary learning ,Artificial intelligence ,eye gaze ,Abnormality ,business ,computer ,Natural language processing ,Neuroscience ,RC321-571 - Abstract
Autism Spectrum Disorder (ASD) is a group of lifelong neurodevelopmental disorders with complicated causes. A key symptom of ASD patients is impaired interpersonal communication. Recent studies show that the face scanning patterns of individuals with ASD often differ from those of typically developing (TD) individuals. This abnormality motivates us to study the feasibility of identifying ASD children from their face scanning patterns with machine learning methods. In this paper, we consider using the bag-of-words (BoW) model to encode face scanning patterns, and propose a novel dictionary learning method based on dual mode seeking for better BoW representation. Unlike k-means, which is broadly used in conventional BoW models to learn dictionaries, the proposed method captures discriminative information by finding atoms that maximize both the purity and the coverage of their belonging samples within one class. Compared to the rich literature of ASD studies in psychology and neuroscience, our work marks one of the relatively few attempts to directly identify high-functioning ASD children with machine learning methods. Experiments demonstrate the superior performance of our method, with considerable gains over several baselines. Although the proposed work is still too preliminary to directly replace existing autism diagnostic observation schedules in clinical practice, it sheds light on future applications of machine learning methods in the early screening of ASD.
- Published
- 2021
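The bag-of-words encoding of gaze features that this paper builds on can be sketched as follows; the dictionary here could come from k-means or from the proposed dual-mode-seeking method, and all names are ours:

```python
import numpy as np

def bow_encode(features, dictionary):
    """Encode a variable-length set of gaze features as a fixed-length
    bag-of-words histogram via nearest-atom assignment. `dictionary` is
    any (n_atoms, dim) array, e.g. k-means centroids; the paper replaces
    k-means with dual-mode-seeking atoms."""
    features = np.asarray(features, float)      # (n_points, dim)
    dictionary = np.asarray(dictionary, float)  # (n_atoms, dim)
    d = ((features[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    assign = d.argmin(axis=1)                   # nearest atom per feature
    hist = np.bincount(assign, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()                    # normalized BoW vector

# Two features near atom 0, one near atom 1 -> histogram [2/3, 1/3].
h = bow_encode([[0, 0], [0.1, 0], [5, 5]], [[0, 0], [5, 5]])
```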
30. Effective Plug-Ins for Reducing Inference-Latency of Spiking Convolutional Neural Networks During Inference Phase
- Author
-
Xiaopeng Yuan, Gaoming Fu, Hongbing Pan, Yan Feng, Tao Yue, Yuanyong Luo, Yuxuan Wang, and Chen Xuan
- Subjects
Normalization (statistics) ,spiking network conversion ,Computer science ,Neuroscience (miscellaneous) ,Inference ,Neurosciences. Biological psychiatry. Neuropsychiatry ,Convolutional neural network ,spiking neural network ,Cellular and Molecular Neuroscience ,Original Research ,Spiking neural network ,Network architecture ,Artificial neural network ,business.industry ,Deep learning ,deep networks ,deep learning ,Pattern recognition ,object classification ,Artificial intelligence ,business ,MNIST database ,artificial neural network ,Neuroscience ,inference-latency ,RC321-571 - Abstract
Convolutional Neural Networks (CNNs) are effective and mature in the field of classification, while Spiking Neural Networks (SNNs) are energy-saving owing to the sparsity of their data flow and their event-driven working mechanism. Previous work demonstrated that CNNs can be converted into equivalent Spiking Convolutional Neural Networks (SCNNs) without obvious accuracy loss, including different functional layers such as Convolutional (Conv), Fully Connected (FC), Avg-pooling, Max-pooling, and Batch-Normalization (BN) layers. To reduce inference latency, existing research has mainly concentrated on normalizing weights to increase the firing rate of neurons; other approaches modify the training phase or alter the network architecture. However, little attention has been paid to the end of the inference phase. From this new perspective, this paper presents four stopping criteria as low-cost plug-ins to reduce the inference latency of SCNNs. The proposed methods are validated using MATLAB and PyTorch platforms with Spiking-AlexNet on the CIFAR-10 dataset and Spiking-LeNet-5 on the MNIST dataset. Simulation results reveal that, compared to state-of-the-art methods, the proposed method can shorten the average inference latency of Spiking-AlexNet from 892 to 267 time steps (almost 3.34 times faster) with an accuracy decline from 87.95 to 87.72%. With our methods, four types of Spiking-LeNet-5 need only 24–70 time steps per image with an accuracy decline of no more than 0.1%, while models without our methods require 52–138 time steps, almost 1.92 to 3.21 times slower than ours.
- Published
- 2021
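One plausible stopping plug-in of the kind described, a margin criterion on accumulated output spikes, can be sketched as follows; the abstract does not specify the four criteria, so this is a hypothetical example of the general idea:

```python
import numpy as np

def stop_time(spike_train, margin=5):
    """Scan accumulated output-layer spike counts over time steps and
    stop as soon as the leading class exceeds the runner-up by `margin`
    spikes. spike_train: (T, n_classes) binary array.
    Returns (t_stop, predicted_class). A hypothetical margin criterion
    illustrating a low-cost stopping plug-in; the paper's four criteria
    may differ."""
    counts = np.zeros(spike_train.shape[1])
    for t, spikes in enumerate(spike_train, start=1):
        counts += spikes
        top2 = np.sort(counts)[-2:]
        if top2[1] - top2[0] >= margin:
            return t, int(counts.argmax())   # early exit
    return len(spike_train), int(counts.argmax())
```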
31. Toward an Integration of Deep Learning and Neuroscience.
- Author
-
Marblestone, Adam H., Wayne, Greg, and Kording, Konrad P.
- Subjects
NEUROSCIENCES ,MACHINE learning ,NEURAL circuitry ,COMPUTER network architectures ,MATHEMATICAL optimization - Abstract
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain’s specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
32. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas.
- Author
-
Lescroart, Mark D., Stansbury, Dustin E., Gallant, Jack L., Baker, Chris I., and Ko Sakai
- Subjects
VISUAL perception ,HIPPOCAMPUS (Brain) ,BRAIN function localization ,FUNCTIONAL magnetic resonance imaging ,COMPUTATIONAL neuroscience - Abstract
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
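The voxel-wise modeling procedure above, fitting a linear encoding model per voxel and scoring it by variance explained on withheld data, can be sketched as follows (ridge-regularized least squares as a stand-in for the authors' exact fitting procedure; the real models use the three feature spaces described in the abstract):

```python
import numpy as np

def fit_encoding_model(X_train, y_train, X_test, y_test, alpha=1.0):
    """Fit a linear (ridge) encoding model from stimulus features X to
    one voxel's BOLD response y, then score it by variance explained
    (R^2) on a withheld portion of the data."""
    X, y = np.asarray(X_train, float), np.asarray(y_train, float)
    # closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    pred = np.asarray(X_test, float) @ w
    resid = np.asarray(y_test, float) - pred
    return 1.0 - resid.var() / np.asarray(y_test, float).var()
```

Competing hypotheses are then compared by fitting one such model per feature space and asking which predicts the most withheld variance per voxel.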
33. Cerebral Microbleed Detection
- Author
-
Siyuan, Lu, Shuaiqi, Liu, Shui-Hua, Wang, and Yu-Dong, Zhang
- Subjects
extreme learning machine ,deep learning ,convolutional neural network ,computer-aided diagnosis ,bat algorithm ,Neuroscience ,Original Research - Abstract
Aim: Cerebral microbleeds (CMBs) are small, round, dot-like lesions distributed across the brain that contribute to stroke, dementia, and death. Early diagnosis is important for treatment. Method: In this paper, a new CMB detection approach was put forward for brain magnetic resonance images. We leveraged a sliding window to obtain training and testing samples from the input brain images. Then, a 13-layer convolutional neural network (CNN) was designed and trained. Finally, we proposed to utilize an extreme learning machine (ELM) to substitute for the last several layers of the CNN for detection. We carried out an experiment to decide the optimal number of layers to be substituted. The parameters of the ELM were optimized by a heuristic method named the bat algorithm. The evaluation of our approach was based on hold-out validation, and the final results were generated by averaging the performance of five runs. Results: Through the experiments, we found that replacing the last five layers with the ELM gives the optimal results. Conclusion: A comparison with state-of-the-art algorithms reveals that our method is accurate in CMB detection.
- Published
- 2021
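The ELM component that replaces the CNN's last layers, a fixed random hidden layer with output weights solved in closed form by least squares, can be sketched as follows (without the bat-algorithm parameter optimization; all names are ours):

```python
import numpy as np

def train_elm(X, Y, n_hidden=50, seed=0):
    """Extreme learning machine: a random, fixed hidden layer followed
    by output weights solved by least squares, which is the role the
    ELM plays when substituted for the CNN's final layers."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None) # closed-form readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Training is a single linear solve, which is why swapping the CNN's last layers for an ELM is cheap.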
34. Sparse Granger Causality Analysis Model Based on Sensors Correlation for Emotion Recognition Classification in Electroencephalography
- Author
-
Dongwei Chen, Rui Miao, Zhaoyong Deng, Na Han, and Chunjian Deng
- Subjects
Computer science ,granger causality analysis ,Feature extraction ,SC-SGA ,Neuroscience (miscellaneous) ,EEG sensors ,Neurosciences. Biological psychiatry. Neuropsychiatry ,LASSO ,050105 experimental psychology ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Lasso (statistics) ,Granger causality ,Classifier (linguistics) ,Prior probability ,L1/2-based sparse granger causality analysis ,0501 psychology and cognitive sciences ,Affective computing ,Original Research ,business.industry ,05 social sciences ,Pattern recognition ,Causality ,L2 norm logistic regression ,Noise (video) ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Neuroscience ,RC321-571 - Abstract
In recent years, affective computing based on electroencephalogram (EEG) data has attracted increased attention. As a classic EEG feature extraction model, Granger causality analysis has been widely used in emotion classification models, which construct a brain network by calculating the causal relationships between EEG sensors and select the key EEG features. Traditional EEG Granger causality analysis uses the L2 norm to extract features from the data, and so the results are susceptible to EEG artifacts. Recently, several researchers have proposed Granger causality analysis models based on the least absolute shrinkage and selection operator (LASSO) and the L1/2 norm to solve this problem. However, the conventional sparse Granger causality analysis model assumes that the connections between each sensor have the same prior probability. This paper shows that if the correlation between the EEG data from each sensor can be added to the Granger causality network as prior knowledge, the EEG feature selection ability and emotional classification ability of the sparse Granger causality model can be enhanced. Based on this idea, we propose a new emotional computing model, named the sparse Granger causality analysis model based on sensor correlation (SC-SGA). SC-SGA integrates the correlation between sensors as prior knowledge into the Granger causality analysis based on the L1/2 norm framework for feature extraction, and uses L2 norm logistic regression as the emotional classification algorithm. We report the results of experiments using two real EEG emotion datasets. These results demonstrate that the emotion classification accuracy of the SC-SGA model is better than that of existing models by 2.46–21.81%.
- Published
- 2021
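The classic L2 Granger-causality computation that SC-SGA improves upon can be sketched as the error reduction from adding one sensor's past to another sensor's autoregression (the paper replaces this least-squares fit with LASSO / L1/2 penalties plus sensor-correlation priors; this baseline and its names are ours):

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Classic (L2) pairwise Granger causality: relative reduction in
    squared error when the past of x is added to a model predicting y
    from its own past. Values near 1 indicate strong x -> y causality."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rows_own, rows_full, target = [], [], []
    for t in range(lag, len(y)):
        rows_own.append(y[t - lag:t])                           # y past only
        rows_full.append(np.concatenate([y[t - lag:t], x[t - lag:t]]))
        target.append(y[t])
    def sse(A, b):
        w, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        r = np.asarray(b) - np.asarray(A) @ w
        return float(r @ r)
    e_own, e_full = sse(rows_own, target), sse(rows_full, target)
    return (e_own - e_full) / e_own if e_own > 0 else 0.0
```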
35. HTsort: Enabling Fast and Accurate Spike Sorting on Multi-Electrode Arrays
- Author
-
Nenggan Zheng, Keming Chen, Hui Hong, Yangtao Jiang, Zhanxiong Wu, and Haochuan Wang
- Subjects
0301 basic medicine ,Computer science ,Pipeline (computing) ,Neuroscience (miscellaneous) ,Neurosciences. Biological psychiatry. Neuropsychiatry ,spike sorting ,Background noise ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Robustness (computer science) ,Cluster analysis ,Original Research ,business.industry ,multi-electrode arrays ,Template matching ,Pattern recognition ,template match ,Noise ,030104 developmental biology ,Spike sorting ,Spike (software development) ,Artificial intelligence ,overlapping spikes ,business ,030217 neurology & neurosurgery ,Neuroscience ,clustering ,HDBSCAN ,RC321-571 - Abstract
Spike sorting classifies spikes (action potentials acquired by physiological electrodes) with the aim of identifying their respective firing units. With improvements in micro-electrode technology, it has been extended to spikes recorded by multi-electrode arrays (MEAs). However, improving classification accuracy while simultaneously maintaining low time complexity is difficult. This paper proposes HTsort, a fast and accurate spike sorting approach for high-density multi-electrode arrays. Several improvements are introduced to the traditional pipeline of threshold detection followed by clustering. First, a divide-and-conquer method exploits the spatial information of the electrodes to achieve pre-clustering. Second, the clustering method HDBSCAN (hierarchical density-based spatial clustering of applications with noise) is used to classify spikes and detect overlapping events (multiple spikes firing simultaneously). Third, a template merging method merges redundant exported templates according to template similarity and the spatial distribution of electrodes. Finally, a template matching method resolves overlapping events. Our approach is validated on simulation data constructed by ourselves as well as publicly available data, and compared to other state-of-the-art spike sorters. We find that HTsort offers a more favorable trade-off between accuracy and time consumption. Compared with MountainSort and SpykingCircus, time consumption is reduced by at least 40% when the number of electrodes is 64 or fewer. Compared with HerdingSpikes, classification accuracy typically improves by more than 10%. Meanwhile, HTsort exhibits stronger robustness against background noise than the other sorters. Our spike sorter should help neurophysiologists complete spike sorting more quickly and accurately.
- Published
- 2021
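The template-merging step of the pipeline can be sketched as greedy merging by cosine similarity (a simplification and our own naming; HTsort additionally weighs the spatial distribution of electrodes):

```python
import numpy as np

def merge_templates(templates, threshold=0.95):
    """Greedily fold together spike templates whose cosine similarity
    exceeds `threshold`, keeping one averaged representative per
    putative unit."""
    merged = []
    for t in map(np.asarray, templates):
        for i, m in enumerate(merged):
            sim = float(t @ m / (np.linalg.norm(t) * np.linalg.norm(m)))
            if sim > threshold:
                merged[i] = (m + t) / 2.0     # fold into existing unit
                break
        else:
            merged.append(t.astype(float))    # start a new unit
    return merged
```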
36. Predicting Brain Regions Related to Alzheimer's Disease Based on Global Feature
- Author
-
Qi Wang, Siwei Chen, He Wang, Luzeng Chen, Yongan Sun, and Guiying Yan
- Subjects
Computer science ,Neuroscience (miscellaneous) ,Neurosciences. Biological psychiatry. Neuropsychiatry ,brain structural network ,Machine learning ,computer.software_genre ,Correlation ,Cellular and Molecular Neuroscience ,differential network analysis ,Betweenness centrality ,Outpatient clinic ,Original Research ,global featurescore ,business.industry ,Montreal Cognitive Assessment ,Graph theory ,Pattern recognition ,Alzheimer's disease ,diffusion tensor imaging ,Graph ,2hop-connectivity ,Feature (computer vision) ,Graph (abstract data type) ,Artificial intelligence ,Canonical correlation ,Centrality ,business ,computer ,Diffusion MRI ,Neuroscience ,RC321-571 - Abstract
Alzheimer's disease (AD) is a neurodegenerative disease that commonly affects the elderly; early diagnosis and timely treatment are very important to delay the course of the disease. In the past, most brain regions related to AD were identified with imaging methods, and only some atrophic brain regions could be identified. In this work, the authors used mathematical models to identify potential brain regions related to AD. In this study, 20 patients with AD and 13 healthy controls (non-AD) were recruited by the neurology outpatient department or the neurology ward of Peking University First Hospital from September 2017 to March 2019. First, diffusion tensor imaging (DTI) was used to construct the brain structural network. Next, the authors defined a new local feature index, 2hop-connectivity, to measure the correlation between different regions. Compared with traditional graph theory indices, 2hop-connectivity exploits higher-order information in the graph structure. For this purpose, the authors proposed a novel algorithm called 2hopRWR to compute 2hop-connectivity. Then, a new index, the global feature score (GFS), was proposed by combining five local features, namely degree centrality, betweenness centrality, closeness centrality, the number of maximal cliques, and 2hop-connectivity, to judge which brain regions are related to AD. As a result, the top ten brain regions identified by the GFS score difference between the AD and non-AD groups were associated with AD through literature verification. The literature validation comparing the GFS with the local features showed that the GFS was superior to the individual local features. Finally, canonical correlation analysis showed that the GFS was significantly correlated with the scores of the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) scales.
Therefore, the authors believe the GFS can also be used as a new biomarker to assist diagnosis and objectively monitor disease progression. In addition, the method proposed in this paper can serve as a differential network analysis method in other domains.
- Published
- 2021
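The idea of combining several local graph features into one global score per region can be sketched as follows (a simplified stand-in for the paper's GFS construction, which may normalize and weight the five features differently):

```python
import numpy as np

def global_feature_score(features):
    """Combine local node features (e.g. degree, betweenness, closeness,
    maximal-clique count, 2hop-connectivity) into one score per brain
    region: z-score each feature across regions, then average, so that
    no single feature's scale dominates."""
    F = np.asarray(features, float)    # (n_regions, n_features)
    Z = (F - F.mean(0)) / F.std(0)     # normalize each feature column
    return Z.mean(axis=1)              # one global score per region
```

Group differences in this score (AD vs. non-AD) would then rank candidate regions, as in the paper's top-ten analysis.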
37. Collective Dynamics of Neural Networks With Sleep-Related Biological Drives in Drosophila
- Author
-
Shuihan Qiu, Kaijia Sun, and Zengru Di
- Subjects
0301 basic medicine ,genetic structures ,Neuroscience (miscellaneous) ,Neurosciences. Biological psychiatry. Neuropsychiatry ,LFP ,Local field potential ,Wake ,coupled neural network ,Synchronization ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,Bursting ,0302 clinical medicine ,Wavelet ,network structure ,Original Research ,Physics ,Artificial neural network ,Electrophysiology ,030104 developmental biology ,nervous system ,Spectrogram ,duration of sleep and wake ,Biological system ,synchronization ,030217 neurology & neurosurgery ,RC321-571 ,Neuroscience - Abstract
The collective electrophysiological dynamics of the brain resulting from sleep-related biological drives in Drosophila are investigated in this paper. Based on the Huber-Braun thermoreceptor model, the conductance-based neuron model is extended to a coupled neural network to analyze the local field potential (LFP). The LFP is calculated using two different metrics: the mean value and a distance-dependent LFP. The neurons around the electrodes are assumed to have a circular or grid distribution on a two-dimensional plane. Regardless of which method is used, qualitatively similar results are obtained that are roughly consistent with the experimental data. During wake, the LFP shows irregular or regular spiking, whereas during sleep it becomes regular bursting. To further analyze the results, wavelet analysis and raster plots are used to examine how the LFP frequencies change. The synchronization of neurons under different network structures is also studied. The results demonstrate obvious oscillations at approximately 8 Hz during sleep that are absent during wake. Different LFP time series can be obtained under different network structures, and the density of the network also affects the magnitude of the potential. As the number of coupled neurons increases, the neural network becomes easier to synchronize, but the sleep and wake times described by the LFP spectrogram do not change. Moreover, the parameters that affect the durations of sleep and wake are analyzed.
- Published
- 2021
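The distance-dependent LFP metric can be sketched with a point-source inverse-distance weighting (an assumption on our part; the abstract does not give the exact weighting function, and the plain mean is the other metric compared):

```python
import numpy as np

def lfp(currents, positions, electrode, eps=1e-6):
    """Distance-dependent local field potential: sum each neuron's
    contribution weighted by the inverse of its distance to the
    electrode on the two-dimensional plane (point-source
    approximation)."""
    pos = np.asarray(positions, float)   # (n_neurons, 2) plane coords
    d = np.linalg.norm(pos - np.asarray(electrode, float), axis=1)
    return float(np.sum(np.asarray(currents, float) / (d + eps)))
```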
38. Discrete Dynamics of Dynamic Neural Fields
- Author
-
Eddy Kwessi
- Subjects
dynamic neural fields ,Discretization ,Computer science ,Stability (learning theory) ,Neuroscience (miscellaneous) ,neurons ,Neurosciences. Biological psychiatry. Neuropsychiatry ,01 natural sciences ,Field (computer science) ,010104 statistics & probability ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Statistical physics ,0101 mathematics ,Original Research ,Partial differential equation ,Artificial neural network ,Mathematical model ,Quantitative Biology::Neurons and Cognition ,Neuroinformatics ,stability ,discrete ,simulations ,030217 neurology & neurosurgery ,RC321-571 ,Network analysis ,Neuroscience - Abstract
Large and small cortices of the brain are known to contain vast numbers of neurons that interact with one another. They thus form a continuum of active neural networks whose dynamics are yet to be fully understood. One way to model these activities is to use dynamic neural fields, which are mathematical models that approximately describe the behavior of these congregations of neurons. These models have been used in neuroinformatics, neuroscience, robotics, and network analysis to understand not only brain functions and brain diseases, but also learning and brain plasticity. In their theoretical forms, they are given as ordinary or partial differential equations with or without diffusion. Many of their mathematical properties are still under-studied. In this paper, we propose to analyze discrete versions of dynamic neural fields based on nearly exact discretization schemes. In particular, we discuss conditions for the stability of nontrivial solutions of these models for various types of kernels and corresponding parameters. Monte Carlo simulations are given for illustration.
- Published
- 2021
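A plain explicit-Euler discretization of an Amari-type neural field illustrates the kind of discrete dynamics studied; the paper's nearly exact schemes are more refined than this sketch, and the kernel and nonlinearity below are arbitrary choices of ours:

```python
import numpy as np

def step(u, kernel, h=-0.1, dt=0.1, tau=1.0):
    """One explicit-Euler step of a discretized Amari-type neural field
    tau * du/dt = -u + K * f(u) + h, where K is a lateral-interaction
    kernel applied by discrete convolution and f is a sigmoidal
    firing-rate nonlinearity."""
    f = 1.0 / (1.0 + np.exp(-10 * u))              # firing rate
    lateral = np.convolve(f, kernel, mode="same")  # kernel interaction
    return u + dt / tau * (-u + lateral + h)
```

Iterating `step` and checking whether activity decays, settles into a bump, or blows up is the discrete analogue of the stability analysis the paper carries out.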
39. CRBA: A Competitive Rate-Based Algorithm Based on Competitive Spiking Neural Networks
- Author
-
Krzysztof J. Cios, Sebastián Ventura, and Paolo Gabriel Cachi
- Subjects
Spiking neural network ,Computer science ,Competitive learning ,Neuroscience (miscellaneous) ,unsupervised image classification ,Dot product ,Neurosciences. Biological psychiatry. Neuropsychiatry ,competitive learning ,MNIST ,rate-based algorithm ,Cellular and Molecular Neuroscience ,Ranking ,Expectation–maximization algorithm ,Spike (software development) ,competitive spiking neural networks ,Algorithm ,MNIST database ,RC321-571 ,Neuroscience ,Original Research - Abstract
In this paper we present a Competitive Rate-Based Algorithm (CRBA) that approximates the operation of a Competitive Spiking Neural Network (CSNN). CRBA is based on modeling the competition between neurons during a sample presentation, which can be reduced to ranking the neurons by a dot-product operation together with a discrete Expectation Maximization algorithm; the latter is equivalent to the spike time-dependent plasticity rule. CRBA's performance is compared with that of CSNN on the MNIST and Fashion-MNIST datasets. The results show that CRBA performs on par with CSNN while using three orders of magnitude less computational time. Importantly, we show that the weights and firing thresholds learned by CRBA can be used to initialize the CSNN's parameters, resulting in much more efficient operation.
- Published
- 2021
- Full Text
- View/download PDF
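The dot-product competition at CRBA's core can be sketched as follows (threshold-adjusted scores stand in for first-to-spike timing; the EM-based learning step is omitted and the names are ours):

```python
import numpy as np

def compete(sample, weights, thresholds):
    """CRBA-style competition for one sample presentation: rank neurons
    by the dot product of their weight vector with the input, minus a
    learned firing threshold, and declare the top-ranked neuron the
    winner (the rate-based surrogate for the first neuron to spike)."""
    scores = np.asarray(weights, float) @ np.asarray(sample, float)
    scores = scores - np.asarray(thresholds, float)
    return int(scores.argmax()), scores
```

Raising a neuron's threshold handicaps it in the ranking, which is how the learned thresholds shape the competition.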
40. Computational Efficiency of a Modular Reservoir Network for Image Recognition
- Author
-
Yifan Dai, Hideaki Yamamoto, Masao Sakuraba, and Shigeo Sato
- Subjects
Computational complexity theory ,Computer science ,Liquid state machine ,pattern recognition ,Neuroscience (miscellaneous) ,Reservoir computing ,robustness ,reservoir computing ,Hough transform ,law.invention ,lcsh:RC321-571 ,Cellular and Molecular Neuroscience ,liquid state machine ,Robustness (computer science) ,law ,Digital image processing ,Network performance ,Algorithm ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,MNIST database ,Neuroscience ,Original Research - Abstract
The liquid state machine (LSM) is a type of recurrent spiking network with a strong relationship to neurophysiology and has achieved great success in time series processing. However, the computational cost of simulations and the complex, time-dependent dynamics limit the size and functionality of LSMs. This paper presents a large-scale bioinspired LSM with modular topology. We integrate findings on the visual cortex showing that specifically designed input synapses can fit the activation of the real cortex and perform the Hough transform, a feature extraction algorithm used in digital image processing, without additional cost. We experimentally verify that this combination can significantly improve the network's functionality. Network performance is evaluated on the MNIST dataset, where the image data are encoded into spike series by Poisson coding. We show that the proposed structure not only significantly reduces the computational complexity but also achieves higher performance than previously reported networks of similar size. We also show that the proposed structure is more robust against system damage than small-world and random structures. We believe that the proposed computationally efficient method can greatly contribute to future applications of reservoir computing.
- Published
- 2021
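The Poisson coding used to feed images into the reservoir can be sketched as follows (the `max_rate` parameter and the [0, 1] intensity convention are our assumptions, not values from the paper):

```python
import numpy as np

def poisson_encode(image, n_steps, max_rate=0.5, seed=0):
    """Poisson (rate) coding of an image into a spike train: each
    pixel's intensity in [0, 1] sets its spike probability per time
    step, yielding a (n_steps, n_pixels) binary array."""
    rng = np.random.default_rng(seed)
    p = np.clip(np.asarray(image, float).ravel() * max_rate, 0.0, 1.0)
    return (rng.random((n_steps, p.size)) < p).astype(np.uint8)
```

Brighter pixels fire more often on average, so the spike-count statistics over time carry the image's intensity pattern into the reservoir.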
41. BCNNM: A Framework for in silico Neural Tissue Development Modeling
- Author
-
Dmitrii V. Bozhko, Georgii K. Galumov, Aleksandr I. Polovian, Sofiia M. Kolchanova, Vladislav O. Myrov, Viktoriia A. Stelmakh, Helgi B. Schiöth, Neuroscience Center, Helsinki Institute of Life Science HiLIFE, and University of Helsinki
- Subjects
0301 basic medicine ,Computer science ,Cell- och molekylärbiologi ,In silico ,Neuroscience (miscellaneous) ,Synaptogenesis ,neuronal connectivity ,3124 Neurology and psychiatry ,brain organoid ,lcsh:RC321-571 ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Cellular neural network ,medicine ,Axon ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Artificial neural network ,axon guidance ,Neurogenesis ,3112 Neurosciences ,Neurosciences ,simulation ,neurogenesis ,030104 developmental biology ,medicine.anatomical_structure ,Axon guidance ,Stem cell ,tissue development ,Neuroscience ,Neurovetenskaper ,Cell and Molecular Biology ,030217 neurology & neurosurgery - Abstract
Cerebral (“brain”) organoids are high-fidelity in vitro cellular models of the developing brain, which makes them one of the go-to methods for studying isolated processes of tissue organization and its electrophysiological properties, allowing researchers to collect invaluable data for in silico modeling of neurodevelopmental processes. Complex computer models of biological systems supplement in vivo and in vitro experimentation and allow researchers to examine things that no laboratory study has access to, due to either technological or ethical limitations. In this paper, we present the Biological Cellular Neural Network Modeling (BCNNM) framework, designed for building dynamic spatial models of neural tissue organization and basic stimulus dynamics. The BCNNM uses a convenient predicate description of sequences of biochemical reactions and can be used to run complex models of multi-layer neural network formation from a single initial stem cell. It covers processes such as the proliferation of precursor cells and their differentiation into mature cell types, cell migration, axon and dendritic tree formation, axon pathfinding, and synaptogenesis. The experiment described in this article demonstrates the creation of an in silico cerebral organoid-like structure, constituted of up to 1 million cells, which differentiate and self-organize into an interconnected system with four layers, where the spatial arrangement of layers and cells is consistent with the values of analogous parameters obtained from research on living tissues. Our in silico organoid contains axons and millions of synapses within and between the layers, and it comprises neurons with a high density of connections (more than 10). In sum, the BCNNM is an easy-to-use and powerful framework for simulating neural tissue development that provides a convenient way to design a variety of tractable in silico experiments.
- Published
- 2021
42. ASD-SAENet: A Sparse Autoencoder, and Deep-Neural Network Model for Detecting Autism Spectrum Disorder (ASD) Using fMRI Data
- Author
-
Fahad Saeed and Fahad Almuqhim
- Subjects
ABIDE ,Computer science ,diagnosis ,Neuroscience (miscellaneous) ,ASD ,deep-learning ,lcsh:RC321-571 ,Cellular and Molecular Neuroscience ,Neurodevelopmental disorder ,Neuroimaging ,Classifier (linguistics) ,medicine ,Generalizability theory ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,autoencoder ,Artificial neural network ,business.industry ,Deep learning ,fMRI ,Pattern recognition ,medicine.disease ,Autoencoder ,sparse autoencoder ,classification ,Autism spectrum disorder ,Artificial intelligence ,business ,Neuroscience - Abstract
Autism spectrum disorder (ASD) is a heterogeneous neurodevelopmental disorder characterized by impaired communication and limited social interactions. The shortcomings of current clinical approaches, which are based exclusively on behavioral observation of symptomology, and the poor understanding of the neurological mechanisms underlying ASD necessitate the identification of new biomarkers that can aid in the study of brain development and functioning and can lead to accurate and early detection of ASD. In this paper, we developed a deep-learning model called ASD-SAENet for classifying patients with ASD versus typical control subjects using fMRI data. We designed and implemented a sparse autoencoder (SAE) that yields an optimized extraction of features usable for classification. These features are then fed into a deep neural network (DNN), which classifies fMRI brain scans as ASD or typical control. Our proposed model is trained to optimize the classifier while improving the extracted features based on both the reconstruction error and the classifier error. We evaluated our proposed deep-learning model using the publicly available Autism Brain Imaging Data Exchange (ABIDE) dataset, which was collected from 17 different research centers and includes more than 1,035 subjects. Our extensive experimentation demonstrates that ASD-SAENet exhibits comparable accuracy (70.8%) and superior specificity (79.1%) on the whole dataset compared to other methods. Further, our experiments show superior results compared to other state-of-the-art methods on 12 of the 17 imaging centers, exhibiting better generalizability across different data acquisition sites and protocols. The implemented code is available on the GitHub portal of our lab at: https://github.com/pcdslab/ASD-SAENet.
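The joint objective described above, reconstruction error plus classifier error, with the sparsity constraint that defines an SAE, can be sketched in plain Python. The loss names, weights, and clamping are assumptions for illustration, not the authors' implementation:

```python
import math

def kl_sparsity(rho, rho_hat):
    """KL divergence between a target activation rate rho and the observed
    mean hidden activation rho_hat -- the standard SAE sparsity penalty."""
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

def joint_loss(x, x_recon, p_true, hidden_means, rho=0.05, beta=0.1, lam=1.0):
    """Combined objective: reconstruction MSE + classification cross-entropy
    + sparsity penalty, mirroring 'optimize the classifier while improving
    extracted features'. Weights beta and lam are illustrative."""
    mse = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    ce = -math.log(max(p_true, 1e-12))   # cross-entropy for the true class
    sparsity = sum(kl_sparsity(rho, max(min(h, 1 - 1e-6), 1e-6))
                   for h in hidden_means)
    return mse + lam * ce + beta * sparsity
```

In the actual model the three terms would be backpropagated through the DNN and SAE jointly; here they are only evaluated.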
- Published
- 2021
43. A Connectome-Based, Corticothalamic Model of State- and Stimulation-Dependent Modulation of Rhythmic Neural Activity and Connectivity
- Author
-
John D. Griffiths, Anthony Randal McIntosh, and Jeremie Lefebvre
- Subjects
alpha rhythm ,Brain activity and meditation ,Neuroscience (miscellaneous) ,brain stimulation ,Sensory system ,Stimulation ,Electroencephalography ,lcsh:RC321-571 ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Rhythm ,medicine ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,neural mass and field models ,Original Research ,030304 developmental biology ,MEG magnetoencephalography ,Physics ,0303 health sciences ,medicine.diagnostic_test ,connectome ,Disinhibition ,Brain stimulation ,Connectome ,medicine.symptom ,EEG electroencephalography ,Neuroscience ,030217 neurology & neurosurgery - Abstract
Rhythmic activity in the brain fluctuates with behaviour and cognitive state, through a combination of coexisting and interacting frequencies. At large spatial scales such as those studied in human M/EEG, measured oscillatory dynamics are believed to arise primarily from a combination of cortical (intracolumnar) and corticothalamic rhythmogenic mechanisms. Whilst considerable progress has been made in characterizing these two types of neural circuit separately, relatively little work has been done that attempts to unify them into a single consistent picture. This is the aim of the present paper. We present and examine a whole-brain, connectome-based neural mass model with detailed long-range cortico-cortical connectivity and strong, recurrent corticothalamic circuitry. This system reproduces a variety of known features of human M/EEG recordings, including spectral peaks at canonical frequencies, and functional connectivity structure that is shaped by the underlying anatomical connectivity. Importantly, our model is able to capture state- (e.g., idling/active) dependent fluctuations in oscillatory activity and the coexistence of multiple oscillatory phenomena, as well as frequency-specific modulation of functional connectivity. We find that increasing the level of sensory drive to the thalamus triggers a suppression of the dominant low frequency rhythms generated by corticothalamic loops, and subsequent disinhibition of higher frequency endogenous rhythmic behaviour of intracolumnar microcircuits. These combine to yield simultaneous decreases in lower frequency and increases in higher frequency components of the M/EEG power spectrum during states of high sensory or cognitive drive. Building on this, we also explored the effect of pulsatile brain stimulation on ongoing oscillatory activity, and evaluated the impact of coexistent frequencies and state-dependent fluctuations on the response of cortical networks. Our results provide new insight into the role played by cortical and corticothalamic circuits in shaping intrinsic brain rhythms, and suggest new directions for brain stimulation therapies aimed at state- and frequency-specific control of oscillatory brain activity.
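The basic modelling ingredient at work here, a rate-based excitatory-inhibitory neural mass driven by external input, can be Euler-integrated in a few lines. The parameters below are illustrative, not the paper's calibrated corticothalamic circuit:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(drive=0.0, steps=2000, dt=0.001):
    """Euler-integrate a two-population (excitatory/inhibitory) rate model
    with an external drive term. Weights and time constants are illustrative."""
    E, I = 0.1, 0.1
    tau_e, tau_i = 0.010, 0.020                    # time constants (s)
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0  # coupling weights
    traj = []
    for _ in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + drive)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
        E += dt * dE
        I += dt * dI
        traj.append(E)
    return traj
```

Sweeping `drive` in such a model is the single-node analogue of the paper's manipulation of sensory drive to the thalamus; the full model couples many such nodes through connectome-derived weights.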
- Published
- 2020
44. Learning Generative State Space Models for Active Inference
- Author
-
Samuel Wauthier, Cedric De Boom, Bart Dhoedt, Tim Verbelen, and Ozan Çatal
- Subjects
0301 basic medicine ,Technology and Engineering ,Computer science ,media_common.quotation_subject ,Neuroscience (miscellaneous) ,Inference ,lcsh:RC321-571 ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,active inference ,State space ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,Free energy principle ,media_common ,robotics ,Artificial neural network ,business.industry ,generative modeling ,Deep learning ,deep learning ,Ambiguity ,free energy ,Generative model ,030104 developmental biology ,Artificial intelligence ,FREE-ENERGY PRINCIPLE ,business ,030217 neurology & neurosurgery ,Generative grammar ,Neuroscience - Abstract
In this paper we investigate the active inference framework as a means to enable autonomous behavior in artificial agents. Active inference is a theoretical framework underpinning the way organisms act and observe in the real world. In active inference, agents act in order to minimize their so-called free energy, or prediction error. Besides being biologically plausible, active inference has been shown to solve hard exploration problems in various simulated environments. However, these simulations typically require handcrafting a generative model for the agent. We therefore propose to use recent advances in deep artificial neural networks to learn generative state space models from scratch, using only observation-action sequences. This way we are able to scale active inference to new and challenging problem domains, whilst still building on the theoretical backing of the free energy principle. We validate our approach on the mountain car problem to illustrate that our learnt models can indeed trade off instrumental value and ambiguity. Furthermore, we show that generative models can also be learnt using high-dimensional pixel observations, both in the OpenAI Gym car racing environment and in a real-world robotic navigation task. Finally, we show that active inference based policies are an order of magnitude more sample efficient than Deep Q Networks on RL tasks.
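For a one-dimensional Gaussian generative model, the "minimize free energy / prediction error" idea reduces to gradient descent on precision-weighted prediction errors. The following is a toy sketch of that core loop, not the paper's learned deep state space model:

```python
def free_energy(mu, obs, prior, pi_obs=1.0, pi_prior=1.0):
    """Gaussian free energy up to a constant: precision-weighted prediction
    errors on the observation and on the prior belief."""
    return 0.5 * (pi_obs * (obs - mu) ** 2 + pi_prior * (mu - prior) ** 2)

def infer(obs, prior, lr=0.1, steps=100):
    """Gradient descent on free energy w.r.t. the belief mu -- the
    'perception as inference' loop of active inference (unit precisions)."""
    mu = prior
    for _ in range(steps):
        grad = -(obs - mu) + (mu - prior)   # dF/dmu
        mu -= lr * grad
    return mu
```

With unit precisions the belief converges to the average of observation and prior; the deep models in the paper replace these hand-written Gaussians with learned networks.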
- Published
- 2020
- Full Text
- View/download PDF
45. Fano Factor: A Potentially Useful Information
- Author
-
Kamil Rajdl, Petr Lansky, and Lubomir Kostal
- Subjects
0301 basic medicine ,Fano factor ,Computer science ,Spike train ,Neuroscience (miscellaneous) ,Measure (mathematics) ,lcsh:RC321-571 ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Time windows ,Renewal theory ,Statistical physics ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,spike trains ,Markov chain ,Quantitative Biology::Neurons and Cognition ,renewal process ,030104 developmental biology ,Spike (software development) ,intensity ,030217 neurology & neurosurgery ,Neuroscience ,variability measure - Abstract
The Fano factor, defined as the variance-to-mean ratio of spike counts in a time window, is often used to measure the variability of neuronal spike trains. However, despite its transparent definition, careless use of the Fano factor can easily lead to distorted or even wrong results. One of the problems is the unclear dependence of the Fano factor on the spiking rate, which is often neglected or handled insufficiently. In this paper we aim to explore this problem in more detail and to study the possible solution, which is to evaluate the Fano factor in the operational time. We use equilibrium renewal and Markov renewal processes as spike train models to describe the method in detail, and we provide an illustration on experimental data.
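The definition in the first sentence is directly computable; a minimal sketch using population statistics over consecutive, non-overlapping windows:

```python
def fano_factor(spike_times, window, t_end):
    """Variance-to-mean ratio of spike counts in consecutive windows of
    length `window` covering [0, t_end)."""
    n_win = int(t_end / window)
    counts = [0] * n_win
    for t in spike_times:
        i = int(t / window)
        if 0 <= i < n_win:
            counts[i] += 1
    mean = sum(counts) / n_win
    var = sum((c - mean) ** 2 for c in counts) / n_win
    return var / mean if mean > 0 else float("nan")
```

A perfectly regular train gives 0 and a Poisson process gives about 1; the paper's point is that for rate-modulated trains this naive calendar-time estimate is biased, which motivates evaluating it in operational time instead.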
- Published
- 2020
46. Spiking Correlation Analysis of Synchronous Spikes Evoked by Acupuncture Mechanical Stimulus
- Author
-
Yanqiu Che, Ya Jiao Liu, Jiang Wang, Chunxiao Han, Ying Mei Qin, Bo Nan Shan, and Qing Qin
- Subjects
0301 basic medicine ,Dorsum ,state-space model ,Computer science ,Neuroscience (miscellaneous) ,Sensory system ,Stimulus (physiology) ,lcsh:RC321-571 ,Correlation ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Acupuncture ,log-linear model ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,spike correlation ,bayesian theory ,synchrony ,population signals ,030104 developmental biology ,Correlation analysis ,Neuroscience ,030217 neurology & neurosurgery ,acupuncture - Abstract
Acupuncturing the ST36 acupoint can evoke the response of the sensory nervous system, which is translated into output electrical signals in the spinal dorsal root. Neural response activities, especially synchronous spike events, evoked by different acupuncture manipulations have remarkable differences. In order to identify these network collaborative activities, we analyze the underlying spike correlation in the synchronous spike event. In this paper, we adopt a log-linear model to describe network response activities evoked by different acupuncture manipulations. Then the state-space model and Bayesian theory are used to estimate network spike correlations. Two sets of simulation data are used to test the effectiveness of the estimation algorithm and the model goodness-of-fit. In addition, simulation data are also used to analyze the relationship between spike correlations and synchronous spike events. Finally, we use this method to identify network spike correlations evoked by four different acupuncture manipulations. Results show that reinforcing manipulations (twirling reinforcing and lifting-thrusting reinforcing) can evoke the third-order spike correlation but reducing manipulations (twirling reducing and lifting-thrusting reducing) do not. This is the main reason why synchronous spikes evoked by reinforcing manipulations are more abundant than those evoked by reducing manipulations.
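For three binary neurons, the third-order log-linear interaction the authors estimate has a closed form as an alternating sum of log pattern probabilities. A sketch on exact probabilities follows (the paper estimates the parameter from data with a state-space model rather than computing it directly):

```python
import math
from itertools import product

def third_order_theta(p):
    """Third-order interaction parameter of a log-linear model over three
    binary neurons. `p` maps each pattern (x1, x2, x3) to its probability.
    theta_123 is zero exactly when no triple-wise interaction is present."""
    theta = 0.0
    for x in product((0, 1), repeat=3):
        sign = (-1) ** (3 - sum(x))   # + for an even number of zeros
        theta += sign * math.log(p[x])
    return theta
```

For independent neurons the alternating signs cancel every single-neuron log term, so theta is 0; any excess probability concentrated on the all-ones pattern shows up directly as a positive theta.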
- Published
- 2020
47. Unsupervised Few-Shot Feature Learning via Self-Supervised Training
- Author
-
Xiaolong Zou, Zilong Ji, Tiejun Huang, and Si Wu
- Subjects
0301 basic medicine ,Computer science ,media_common.quotation_subject ,Neuroscience (miscellaneous) ,Machine learning ,computer.software_genre ,lcsh:RC321-571 ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,Generalization (learning) ,Feature (machine learning) ,unsupervised ,Quality (business) ,few-shot learning ,Cluster analysis ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,media_common ,Original Research ,business.industry ,pseudo labels ,Cognition ,Construct (python library) ,030104 developmental biology ,Unsupervised learning ,Artificial intelligence ,business ,computer ,Feature learning ,030217 neurology & neurosurgery ,episodic learning ,Neuroscience ,clustering - Abstract
Learning from limited exemplars (few-shot learning) is a fundamental, unsolved problem that has been laboriously explored in the machine learning community. However, current few-shot learners are mostly supervised and rely heavily on a large amount of labeled examples. Unsupervised learning is a more natural procedure for cognitive mammals and has produced promising results in many machine learning tasks. In this paper, we propose an unsupervised feature learning method for few-shot learning. The proposed model consists of two alternating processes, progressive clustering and episodic training. The former generates pseudo-labeled training examples for constructing episodic tasks; the latter trains the few-shot learner on the generated episodic tasks, which further optimizes the feature representations of the data. The two processes facilitate each other, and eventually produce a high-quality few-shot learner. In our experiments, our model achieves good generalization performance in a variety of downstream few-shot learning tasks on Omniglot and MiniImageNet. We also construct a new few-shot person re-identification dataset, FS-Market1501, to demonstrate the feasibility of applying our model to a real-world application.
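The progressive-clustering step that supplies pseudo-labels can be illustrated with plain k-means on scalar features; the paper clusters learned embeddings, so this is only a stand-in for that stage:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on scalars; the returned labels play the role of the
    pseudo-labels used to assemble episodic few-shot tasks."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = [min(range(k), key=lambda j: (p - centers[j]) ** 2)
                  for p in points]
        # move each center to the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels
```

In the full method these labels would define N-way K-shot episodes for the episodic-training half of the loop, and the improved features would then be re-clustered.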
- Published
- 2020
48. Proto-Object Based Saliency Model With Texture Detection Channel
- Author
-
Ralph Etienne-Cummings, Takeshi Uejima, and Ernst Niebur
- Subjects
0301 basic medicine ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Neuroscience (miscellaneous) ,Texture (music) ,lcsh:RC321-571 ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,proto-object ,image texture analysis ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Original Research ,Computational neuroscience ,saliency ,business.industry ,Object based ,Information processing ,Pattern recognition ,Filter (signal processing) ,Benchmarking ,030104 developmental biology ,visual attention ,Neuromorphic engineering ,Artificial intelligence ,neuromorphic engineering ,business ,030217 neurology & neurosurgery ,Neuroscience ,Communication channel - Abstract
The amount of visual information projected from the retina to the brain exceeds the information processing capacity of the latter. Attention, therefore, functions as a filter to highlight important information at multiple stages of the visual pathway that requires further and more detailed analysis. Among other functions, this determines where to fixate, since only the fovea allows for high-resolution imaging. Visual saliency modeling, i.e., understanding how the brain selects important information to analyze further and determines where to fixate next, is an important research topic in computational neuroscience and computer vision. Most existing bottom-up saliency models use low-level features such as intensity and color, while some models employ high-level features, like faces. However, little consideration has been given to mid-level features, such as texture, for visual saliency models. In this paper, we extend a biologically plausible proto-object based saliency model by adding simple texture channels which employ nonlinear operations that mimic the processing performed by primate visual cortex. The extended model shows statistically significant improved performance in predicting human fixations compared to the previous model. Comparing the performance of our model with others on publicly available benchmarking datasets, we find that our biologically plausible model matches the performance of other models, even though those were designed entirely for maximal performance with little regard to biological realism.
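A crude mid-level texture channel can be built from local variance, a simple nonlinearity in the spirit of (though far simpler than) the cortex-mimicking operations described:

```python
def texture_energy(img, r=1):
    """Local variance in a (2r+1)x(2r+1) neighborhood as a toy texture
    channel; `img` is a 2-D list of intensities. Illustrative only --
    the paper's channels model V1/V2-style nonlinear filtering."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out
```

Flat regions yield zero energy while textured regions yield positive energy, which is the minimal property any texture channel feeding a saliency map must have.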
- Published
- 2020
49. Predicting Grating Orientations With Cross-Frequency Coupling and Least Absolute Shrinkage and Selection Operator in V1 and V4 of Rhesus Monkeys
- Author
-
Zhaohui Li, Yue Du, Youben Xiao, and Liyong Yin
- Subjects
Property (programming) ,Neuroscience (miscellaneous) ,cross-frequency coupling ,Local field potential ,LASSO ,Grating ,orientation selectivity ,lcsh:RC321-571 ,Cellular and Molecular Neuroscience ,Lasso (statistics) ,medicine ,visual cortex ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Mathematics ,Shrinkage ,Original Research ,Coupling ,local field potential ,Orientation (computer vision) ,business.industry ,Pattern recognition ,Visual cortex ,medicine.anatomical_structure ,Artificial intelligence ,business ,Neuroscience - Abstract
Orientation selectivity, as an emergent property of neurons in the visual cortex, is of critical importance in the processing of visual information. Characterizing the orientation selectivity based on neuronal firing activities or local field potentials (LFPs) is a hot topic of current research. In this paper, we used cross-frequency coupling and least absolute shrinkage and selection operator (LASSO) to predict the grating orientations in V1 and V4 of two rhesus monkeys. The experimental data were recorded by utilizing two chronically implanted multi-electrode arrays, which were placed, respectively, in V1 and V4 of two rhesus monkeys performing a selective visual attention task. The phase–amplitude coupling (PAC) and amplitude–amplitude coupling (AAC) were employed to characterize the cross-frequency coupling of LFPs under sinusoidal grating stimuli with different orientations. Then, a LASSO logistic regression model was constructed to predict the grating orientation based on the strength of PAC and AAC. Moreover, the cross-validation method was used to evaluate the performance of the model. It was found that the average accuracy of the prediction based on the combination of PAC and AAC was 73.9%, which was higher than the predicting accuracy with PAC or AAC separately. In conclusion, a LASSO logistic regression model was introduced in this study, which can predict the grating orientations with relatively high accuracy by using PAC and AAC together. Our results suggest that the principle behind the LASSO model is probably an alternative direction to explore the mechanism for generating orientation selectivity.
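The phase-amplitude coupling strength at the heart of this feature set is often summarized by the mean-vector-length modulation index. The sketch below assumes the low-frequency phase series and high-frequency amplitude envelope have already been extracted upstream (e.g., via band-pass filtering and the Hilbert transform):

```python
import cmath

def pac_mvl(phases, amps):
    """Mean-vector-length phase-amplitude coupling: project the amplitude
    envelope onto the unit circle at the corresponding phases and measure
    how far the resulting mean vector is from the origin (0 = no coupling)."""
    z = sum(a * cmath.exp(1j * ph) for a, ph in zip(amps, phases))
    return abs(z) / sum(amps)
```

A vector of such coupling strengths across channel and frequency pairs is exactly the kind of high-dimensional, partly redundant feature set for which LASSO's sparsity-inducing selection is well suited.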
- Published
- 2020
50. A Computational Model of the Cholinergic Modulation of CA1 Pyramidal Cell Activity
- Author
-
Jean-Marie C. Bouteiller, Adam Mergenthal, Gene J. Yu, and Theodore W. Berger
- Subjects
0301 basic medicine ,hippocampus ,muscarinic ,Neuroscience (miscellaneous) ,chemistry.chemical_element ,Calcium ,Inhibitory postsynaptic potential ,CA1 ,Calcium in biology ,lcsh:RC321-571 ,03 medical and health sciences ,chemistry.chemical_compound ,Cellular and Molecular Neuroscience ,0302 clinical medicine ,medicine ,Neurotransmitter ,pyramidal ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Chemistry ,Endoplasmic reticulum ,acetylcholine ,030104 developmental biology ,Excitatory postsynaptic potential ,compartmental model ,Neuroscience ,030217 neurology & neurosurgery ,Acetylcholine ,Intracellular ,medicine.drug - Abstract
Dysfunction in cholinergic modulation has been linked to a variety of cognitive disorders including Alzheimer's disease. The important role of this neurotransmitter has been explored in a variety of experiments, yet many questions remain unanswered about the contribution of cholinergic modulation to healthy hippocampal function. To address these questions, we have developed a model of a CA1 pyramidal neuron that takes into consideration muscarinic receptor activation in response to changes in the extracellular concentration of acetylcholine and its effects on cellular excitability and downstream intracellular calcium dynamics. This model incorporates a variety of molecular agents to accurately simulate several processes heretofore ignored in computational modeling of CA1 pyramidal neurons. These processes include the inhibition of ionic channels by phospholipid depletion along with the release of calcium from intracellular stores (i.e., the endoplasmic reticulum). This paper describes the model and the methods used to calibrate its behavior to match experimental results. The result of this work is a compartmental model with calibrated mechanisms for simulating the intracellular calcium dynamics of CA1 pyramidal cells, with a focus on those related to release from calcium stores in the endoplasmic reticulum. From this model we also make various predictions for how the inhibitory and excitatory responses to cholinergic modulation vary with agonist concentration. This model expands the capabilities of CA1 pyramidal cell models through the explicit modeling of molecular interactions involved in healthy cognitive function and disease. Through this expanded model we come closer to simulating these diseases and gaining the knowledge required to develop novel treatments.
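The receptor-level dose dependence that drives such concentration-response predictions is commonly caricatured with a Hill equation; the parameters below are illustrative placeholders, not the paper's calibrated values:

```python
def receptor_occupancy(ach, kd=0.5, n=1.0):
    """Hill equation for muscarinic receptor activation as a function of
    extracellular acetylcholine concentration (kd and n are illustrative)."""
    return ach ** n / (kd ** n + ach ** n)

def m_conductance(ach, g_max=1.0):
    """K+ (M-type) conductance suppressed in proportion to receptor
    occupancy -- a common simplification of how muscarinic activation
    raises excitability by closing M-current channels."""
    return g_max * (1.0 - receptor_occupancy(ach))
```

Dropping this conductance into a compartmental model is what links agonist concentration to the excitability and calcium-dynamics predictions the abstract mentions.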
- Published
- 2020
- Full Text
- View/download PDF