59 results for "Neuroscience: Neural Modelling"
Search Results
2. Neurocognitive Informatics Manifesto.
- Author
Duch, Wlodzislaw, Wang, H-F, Neace, M.B, and Zhu, Y
- Subjects
Neuroscience: Computational Neuroscience, Neuroscience: Neurolinguistics, Neuroscience: Neural Modelling
- Abstract
Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper examples of neurocognitive inspirations and promising directions in this area are given.
- Published
- 2009
3. Processing of analogy in the thalamocortical circuit
- Author
Choe, Yoonsuck, Wunsch, Don, and Hasselmo, Michael
- Subjects
Neuroscience: Neural Modelling, Neuroscience: Computational Neuroscience, Neuroscience: Neurophysiology, Computer Science: Neural Nets, Computer Science: Artificial Intelligence, Psychology: Perceptual Cognitive Psychology
- Abstract
The corticothalamic feedback and the thalamic reticular nucleus have gained much attention lately because of their integrative and modulatory functions. A previous study by the author suggested that this circuitry can process analogies (i.e., the "analogy hypothesis"). In this paper, the proposed model was implemented as a network of leaky integrate-and-fire neurons to test the analogy hypothesis. The previous proposal required specific delay and temporal dynamics, and the implemented network, tuned accordingly, functioned as predicted. Furthermore, these specific conditions turn out to be consistent with experimental data, suggesting that further investigation of the thalamocortical circuit within the analogical framework may be worthwhile.
- Published
- 2003
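The model above is implemented as a network of leaky integrate-and-fire (LIF) neurons. As a reference point for that unit, a minimal single-neuron LIF simulation in Python; parameters are illustrative and are not taken from the paper:

    import numpy as np

    def simulate_lif(I, dt=1e-4, tau=0.02, v_rest=-0.07, v_thresh=-0.05,
                     v_reset=-0.07, R=1e7):
        """Single leaky integrate-and-fire neuron driven by a current trace I (A)."""
        v = np.full(len(I), v_rest)
        spike_times = []
        for t in range(1, len(I)):
            # leaky integration: dv/dt = (-(v - v_rest) + R*I) / tau
            v[t] = v[t-1] + (-(v[t-1] - v_rest) + R * I[t-1]) * dt / tau
            if v[t] >= v_thresh:        # threshold crossing: emit a spike
                spike_times.append(t * dt)
                v[t] = v_reset          # reset the membrane potential
        return v, spike_times

    # constant suprathreshold current gives regular firing
    v, spikes = simulate_lif(np.full(5000, 3e-9))
    print(len(spikes), "spikes in 0.5 s")

The analogy-processing network itself additionally requires the specific delays and temporal dynamics described in the abstract.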
5. Applying Slow Feature Analysis to Image Sequences Yields a Rich Repertoire of Complex Cell Properties
- Author
Berkes, Pietro, Wiskott, Laurenz, and Dorronsoro, José R.
- Subjects
Neuroscience: Neural Modelling, Neuroscience: Computational Neuroscience, Computer Science: Machine Vision, Biology: Theoretical Biology
- Abstract
We apply Slow Feature Analysis (SFA) to image sequences generated from natural images using a range of spatial transformations. An analysis of the resulting receptive fields shows that they have a rich spectrum of invariances and share many properties with complex and hypercomplex cells of the primary visual cortex. Furthermore, the dependence of the solutions on the statistics of the transformations is investigated.
- Published
- 2002
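In its linear form, Slow Feature Analysis reduces to two eigenproblems: whiten the signal, then keep the directions in which the temporal derivative has the least variance. A minimal linear sketch (the paper applies SFA to nonlinearly expanded image sequences, which this toy omits):

    import numpy as np

    def linear_sfa(X, n_components=1):
        """Linear SFA on a (T, d) signal: slowest unit-variance projections."""
        X = X - X.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
        W_white = eigvec / np.sqrt(eigval)        # whitening transform
        Z = X @ W_white
        # slowness objective: minimal variance of the temporal derivative
        dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
        return W_white @ dvec[:, :n_components]   # smallest eigenvalue = slowest

    # toy test: a slow sine mixed with fast noise is recovered as the slow feature
    t = np.linspace(0, 2 * np.pi, 2000)
    X = np.column_stack([np.sin(t) + 0.1 * np.random.randn(2000),
                         np.random.randn(2000)])
    print(linear_sfa(X).ravel())   # weight vector dominated by the slow channel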
6. Second order isomorphism: A reinterpretation and its implications in brain and cognitive sciences
- Author
Choe, Yoonsuck, Gray, Wayne D., and Schunn, Christian D.
- Subjects
Neuroscience: Neural Modelling, Neuroscience: Computational Neuroscience, Psychology: Cognitive Psychology, Computer Science: Neural Nets
- Abstract
Shepard and Chipman's second order isomorphism describes how the brain may represent the relations in the world. However, a common interpretation of the theory can cause difficulties. The problem originates from the static nature of representations. In an alternative interpretation, I propose that we assign an active role to the internal representations and relations. It turns out that a collection of such active units can perform analogical tasks. The new interpretation is supported by the existence of neural circuits that may be implementing such a function. Within this framework, perception, cognition, and motor function can be understood under a unifying principle of analogy.
- Published
- 2002
7. Learning in the Cerebellum with Sparse Conjunctions and Linear Separator Algorithms
- Author
Harris, Harlan, Reichler, Jesse, Marko, Kenneth, and Werbos, Paul
- Subjects
Neuroscience: Computational Neuroscience, Neuroscience: Neural Modelling
- Abstract
This paper investigates potential learning rules in the cerebellum. We review evidence that input to the cerebellum is sparsely expanded by granule cells into a very wide basis vector, and that Purkinje cells learn to compute a linear separation using that basis. We review learning rules employed by existing cerebellar models, and show that recent results from Computational Learning Theory suggest that the standard delta rule would not be efficient. We suggest that alternative, attribute-efficient learning rules, such as Winnow or Incremental Delta-Bar-Delta, are more appropriate for cerebellar modeling, and support this position with results from a computational model.
- Published
- 2001
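Winnow, one of the attribute-efficient alternatives the paper advocates, updates weights multiplicatively on mistakes, so its mistake bound grows only logarithmically with the number of irrelevant attributes. A minimal sketch with illustrative parameters (the cerebellar model itself is not reproduced here):

    import numpy as np

    def winnow_train(X, y, alpha=2.0, epochs=20):
        """Winnow: linear separator over binary features X (n, d), labels y in {0,1}."""
        n, d = X.shape
        w, theta = np.ones(d), d / 2.0            # weights and fixed threshold
        for _ in range(epochs):
            mistakes = 0
            for x, target in zip(X, y):
                pred = 1 if w @ x >= theta else 0
                if pred != target:
                    mistakes += 1
                    if target == 1:
                        w[x == 1] *= alpha        # promote active attributes
                    else:
                        w[x == 1] /= alpha        # demote active attributes
            if mistakes == 0:
                break
        return w, theta

    # target concept: x0 OR x3 among 100 mostly irrelevant binary attributes
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 100))
    y = X[:, 0] | X[:, 3]
    w, theta = winnow_train(X, y)
    print("largest weights at features:", np.argsort(w)[-2:])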
8. RatCog: A GUI maze simulation tool with plugin "rat brains."
- Author
Prince, C. G., Talton, J., Berkeley, I. S. N., and Gunay, C.
- Subjects
Biology: Animal Behavior, Biology: Animal Cognition, Neuroscience: Computational Neuroscience, Psychology: Comparative Psychology, Computer Science: Artificial Intelligence, Computer Science: Neural Nets, Neuroscience: Neural Modelling
- Abstract
We have implemented RatCog, a Graphical User Interface (GUI) radial-maze simulation tool providing various computational models of rats. Rat models are loaded as runtime plugin files, and an Application Programming Interface (API) enables additional plugins to be created. One implemented plugin is a back-propagation trained connectionist model. GUI features include maze graphics and performance statistics. The GUI makes it easier to use these computational models, while the plugins make the models widely available.
- Published
- 2000
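The abstract names a plugin API without giving its signature. A hypothetical Python sketch of what such a "rat brain" plugin interface could look like; the class and method names here are invented for illustration and are not RatCog's actual API:

    import random
    from typing import Protocol

    class RatBrain(Protocol):
        """A 'rat brain' maps maze state to arm choices and gets reward feedback."""
        def choose_arm(self, visited: list[int], n_arms: int) -> int: ...
        def reward(self, arm: int, rewarded: bool) -> None: ...

    class RandomRat:
        """Trivial plugin: choose uniformly among not-yet-visited arms."""
        def choose_arm(self, visited, n_arms):
            options = [a for a in range(n_arms) if a not in visited]
            return random.choice(options or list(range(n_arms)))
        def reward(self, arm, rewarded):
            pass                        # this plugin does not learn

    def run_trial(brain: RatBrain, n_arms: int = 8) -> int:
        """One radial-maze trial: count choices until every arm is visited."""
        visited, choices = set(), 0
        while len(visited) < n_arms:
            arm = brain.choose_arm(sorted(visited), n_arms)
            brain.reward(arm, rewarded=arm not in visited)
            visited.add(arm)
            choices += 1
        return choices

    print("choices to clear an 8-arm maze:", run_trial(RandomRat()))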
9. Categorical Ontology of Complex Systems, Meta-Systems and Theory of Levels: The Emergence of Life, Human Consciousness and Society
- Author
Baianu, Prof. Dr. I.C., Glazebrook, Prof. Dr. James F., Iantovics, Dr. Barna, Rădoiu, Dr. D., and Dehmer, Dr. M.
- Subjects
Biology: Theoretical Biology, Neuroscience: Computational Neuroscience, Computer Science: Artificial Intelligence, Computer Science: Complexity Theory, Computer Science: Dynamical Systems, Neuroscience: Neurogenetics, Neuroscience: Neuroanatomy, Neuroscience: Neurochemistry, Neuroscience: Neural Modelling
- Abstract
Single cell interactomics in simpler organisms, as well as somatic cell interactomics in multicellular organisms, involve biomolecular interactions in complex signalling pathways that were recently represented in modular terms by quantum automata with 'reversible behavior' representing normal cell cycling and division. Other implications of such quantum automata, modular modeling of signaling pathways and cell differentiation during development are in the fields of neural plasticity and brain development, leading to quantum-weave dynamic patterns and specific molecular processes underlying extensive memory, learning, anticipation mechanisms and the emergence of human consciousness during early brain development in children. Cell interactomics is here represented for the first time as a mixture of 'classical' states that determine molecular dynamics subject to Boltzmann statistics and 'steady-state', metabolic (multi-stable) manifolds, together with 'configuration' spaces of metastable quantum states emerging from complex quantum dynamics of interacting networks of biomolecules, such as proteins and nucleic acids, that are now collectively defined as quantum interactomics. On the other hand, the time-dependent evolution over several generations of cancer cells -- which are generally known to undergo frequent and extensive genetic mutations and, indeed, suffer genomic transformations at the chromosome level (such as the extensive chromosomal aberrations found in many colon cancers) -- cannot be correctly represented in the 'standard' terms of quantum automaton modules, as the normal somatic cells can. This significant difference at the cancer cell genomic level is therefore reflected in major changes in cancer cell interactomics, often from one cancer cell 'cycle' to the next, and thus it requires substantial changes in the modeling strategies, mathematical tools and experimental designs aimed at understanding cancer mechanisms. Novel solutions to this important problem in carcinogenesis are proposed and experimental validation procedures are suggested. From a medical research and clinical standpoint, this approach has important consequences for addressing and preventing the development of cancer resistance to medical therapy in ongoing clinical trials involving stage III cancer patients, as well as for improving the designs of future clinical trials for cancer treatments.
KEYWORDS: Emergence of life and human consciousness; proteomics; artificial intelligence; complex systems dynamics; quantum automata models and quantum interactomics; quantum-weave dynamic patterns underlying human consciousness; specific molecular processes underlying extensive memory, learning, anticipation mechanisms and human consciousness; emergence of human consciousness during early brain development in children; cancer cell 'cycling'; interacting networks of proteins and nucleic acids; genetic mutations and chromosomal aberrations in cancers, such as colon cancer; development of cancer resistance to therapy; ongoing clinical trials involving stage III cancer patients; possible improvements of the designs for future clinical trials and cancer treatments.
- Published
- 2010
10. Łukasiewicz-Moisil Many-Valued Logic Algebra of Highly-Complex Systems
- Author
Baianu, Professor I.C., Georgescu, Professor George, Glazebrook, Professor James F., and Iantovics, Dr. Barna
- Subjects
Biology: Theoretical Biology, Neuroscience: Computational Neuroscience, Computer Science: Complexity Theory, Computer Science: Dynamical Systems, Computer Science: Neural Nets, Neuroscience: Neurogenetics, Neuroscience: Neural Modelling, Neuroscience: Neurophysiology
- Abstract
A novel approach to self-organizing, highly complex systems (HCS), such as living organisms and artificial intelligent systems (AIs), is presented which is relevant to cognition, medical bioinformatics and computational neuroscience. Quantum automata (QAs) were defined in our previous work as generalized, probabilistic automata with quantum state spaces (Baianu, 1971). Their next-state functions operate through transitions between quantum states defined by the quantum equations of motion in the Schroedinger representation, with both initial and boundary conditions in space-time. Such quantum automata operate with a quantum logic, or Q-logic, significantly different from either Boolean or Łukasiewicz many-valued logic. A new theorem is proposed which states that the category of quantum automata and automata homomorphisms has both limits and colimits. Therefore, both categories of quantum automata and classical automata (sequential machines) are bicomplete. A second new theorem establishes that the standard automata category is a subcategory of the quantum automata category. The quantum automata category has a faithful representation in the category of generalized (M,R)-systems, which are open, dynamic biosystem networks with defined biological relations that represent physiological functions of primordial organisms, single cells and higher organisms.
- Published
- 2010
11. How Consciousness Emerges from Ions
- Author
Liu, Mr. Peilei and Wang, Professor Ting
- Subjects
Neuroscience: Computational Neuroscience, Computer Science: Artificial Intelligence, Computer Science: Statistical Models, Neuroscience: Neural Modelling
- Abstract
As Francis Crick said, neuroscience is a data-rich but theory-poor field, missing a broad framework of the kind found in physics. We wish to put forward such a unified framework based on existing evidence. Unexpectedly, it is a very simple statistical model. Specifically, we find that neural mechanisms in the spatial and temporal dimensions, usually called neural coding and memory respectively, follow similar statistical laws. Moreover, memory can be divided into two types: long-term and short-term (or instantaneous). Instantaneous memory is the foundation of consciousness according to Crick. We then indicate the physical and biological mechanisms behind these statistical laws: in general, they reflect random processes of particles such as ions. A detailed model and supporting evidence can be found in our previous work. This simple model is powerful in explaining most psychological phenomena and advanced intelligence such as language.
- Published
- 2014
12. Neural Mechanism of Language
- Author
Liu, Dr. Peilei and Wang, Professor Ting
- Subjects
Neuroscience: Behavioral Neuroscience, Biology: Cognitive Archeology, Neuroscience: Computational Neuroscience, Computer Science: Language, Neuroscience: Neurolinguistics, Neuroscience: Neural Modelling
- Abstract
This paper is based on our previous work on neural coding, a self-organized model supported by existing evidence. We first briefly introduce this model, and then use it to explain the neural mechanism of language and reasoning. Moreover, we find that the position of an area determines its importance: language-relevant areas occupy the capital position of the cortical kingdom, and are therefore closely related to autonomous consciousness and working memory. In essence, language is a miniature of the real world. Briefly, this paper aims to bridge the gap between the molecular mechanisms of neurons and advanced functions such as language and reasoning.
- Published
- 2014
13. Motor Learning Mechanism on the Neuron Scale
- Author
Liu, Mr. Peilei and Wang, Prof. Ting
- Subjects
Neuroscience: Behavioral Neuroscience, Biology: Animal Behavior, Neuroscience: Biophysics, Biology: Cognitive Archeology, Neuroscience: Computational Neuroscience, Computer Science: Artificial Intelligence, Computer Science: Dynamical Systems, Computer Science: Machine Learning, Neuroscience: Neural Modelling
- Abstract
Based on existing data, we put forward a biological model of the motor system on the neuron scale and indicate its implications for statistics and learning. Specifically, a neuron's firing frequency and synaptic strength are in essence probability estimates, and lateral inhibition also has statistical implications. From the standpoint of learning, dendritic competition through retrograde messengers is the foundation of the conditioned reflex and of "grandmother cell" coding, which are the kernel mechanisms of motor learning and sensory-motor integration respectively. Finally, we compare the motor system with the sensory system. In short, we would like to bridge the gap between molecular evidence and computational models.
- Published
- 2014
14. A Quantitative Neural Coding Model of Sensory Memory
- Author
Liu, PhD Peilei and Wang, Professor Ting
- Subjects
Psychology: Cognitive Psychology, Neuroscience: Computational Neuroscience, Computer Science: Dynamical Systems, Computer Science: Machine Learning, Computer Science: Neural Nets, Computer Science: Statistical Models, Neuroscience: Neural Modelling, Philosophy: Logic, Philosophy: Philosophy of Mind
- Abstract
The coding mechanism of sensory memory on the neuron scale is one of the most important questions in neuroscience. We have put forward a quantitative neural network model, which is self-organized, self-similar, and self-adaptive, just like an ecosystem following Darwin's theory. According to this model, neural coding is a "many-to-one" mapping from objects to neurons, and the whole cerebrum is a real-time statistical Turing machine with powerful representing and learning ability. This model can reconcile some important disputes, such as temporal coding versus rate-based coding, grandmother-cell versus population coding, and decay theory versus interference theory. It also provides explanations for key questions such as memory consolidation, episodic memory, consciousness, and sentiment. Philosophical significance is indicated at the end.
- Published
- 2014
15. A Unified Quantitative Model of Vision and Audition
- Author
Liu, Mr. Peilei and Wang, Professor Ting
- Subjects
Neuroscience: Computational Neuroscience, Computer Science: Machine Vision, Computer Science: Statistical Models, Neuroscience: Neural Modelling
- Abstract
We have put forward a unified quantitative framework of vision and audition based on existing data and theories. According to this model, the retina is a feedforward network self-adaptive to inputs during a specific developmental period. After it is fully grown, cells become specialized detectors based on the statistics of stimulus history. The model provides explanations for the perception of colour, shape, depth and motion. Moreover, on this ground we put forward a bold conjecture: a single ear can detect a sound's direction. This is complementary to existing theories and provides better explanations for sound localization.
- Published
- 2014
16. Pattern-Generator-Driven Development in Self-Organizing Models
- Author
Bednar, James A., Miikkulainen, Risto, and Bower, James M.
- Subjects
Neuroscience: Computational Neuroscience, Computer Science: Artificial Intelligence, Computer Science: Complexity Theory, Computer Science: Machine Learning, Computer Science: Neural Nets, Psychology: Developmental Psychology, Neuroscience: Neural Modelling
- Abstract
Self-organizing models develop realistic cortical structures when given approximations of the visual environment as input. Recently it has been proposed that internally generated input patterns, such as those found in the developing retina and in PGO waves during REM sleep, may have the same effect. Internal pattern generators would constitute an efficient way to specify, develop, and maintain functionally appropriate perceptual organization. They may help express complex structures from minimal genetic information, and retain this genetic structure within a highly plastic system. Simulations with the RF-LISSOM orientation map model indicate that such preorganization is possible, providing a computational framework for examining how genetic influences interact with visual experience.
- Published
- 1998
17. Mixing Memory and Desire: Want and Will in Neural Modeling
- Author
MacLennan, Bruce J. and Pribram, Karl H.
- Subjects
Neuroscience: Behavioral Neuroscience, Biology: Animal Cognition, Biology: Theoretical Biology, Psychology: Cognitive Psychology, Neuroscience: Computational Neuroscience, Computer Science: Artificial Intelligence, Computer Science: Neural Nets, Computer Science: Robotics, Neuroscience: Neural Modelling, Philosophy: Epistemology
- Abstract
Values are critical for intelligent behavior, since values determine interests, and interests determine relevance. Therefore we address relevance and its role in intelligent behavior in animals and machines. Animals avoid exhaustive enumeration of possibilities by focusing on relevant aspects of the environment, which emerge into the (cognitive) foreground, while suppressing irrelevant aspects, which submerge into the background. Nevertheless, the background is not invisible, and aspects of it can pop into the foreground if background processing deems them potentially relevant. Essential to these ideas are questions of how contexts are switched, which defines cognitive/behavioral episodes, and how new contexts are created, which allows the efficiency of foreground/background processing to be extended to new behaviors and cognitive domains. Next we consider mathematical characterizations of the foreground/background distinction, which we treat as a dynamic separation of the concrete space into (approximately) orthogonal subspaces, which are processed differently. Background processing is characterized by large receptive fields which project into a space of relatively low dimension to accomplish rough categorization of a novel stimulus and its approximate location. Such background processing is partly innate and partly learned, and we discuss possible correlational (Hebbian) learning mechanisms. Foreground processing is characterized by small receptive fields which project into a space of comparatively high dimension to accomplish precise categorization and localization of the stimuli relevant to the context. We also consider mathematical models of valences and affordances, which are an aspect of the foreground. Cells processing foreground information have no fixed meaning (i.e., their meaning is contextual), so it is necessary to explain how the processing accomplished by foreground neurons can be made relative to the context. Thus we consider the properties of several simple mathematical models of how the contextual representation controls foreground processing. We show how simple correlational processes accomplish the contextual separation of foreground from background on the basis of differential reinforcement. That is, these processes account for the contextual separation of the concrete space into disjoint subspaces corresponding to the foreground and background. Since an episode may comprise the activation of several contexts (at varying levels of activity) we consider models, suggested by quantum mechanics, of foreground processing in superposition. That is, the contextual state may be a weighted superposition of several pure contexts, with a corresponding superposition of the foreground representations and the processes operating on them. This leads us to a consideration of the nature and origin of contexts. Although some contexts are innate, many are learned. We discuss a mathematical model of contexts which allows a context to split into several contexts, agglutinate from several contexts, or to constellate out of relatively acontextual processing. Finally, we consider the acontextual processing which occurs when the current context is no longer relevant, and may trigger the switch to another context or the formation of a new context. We relate this to the situation known as "breakdown" in phenomenology.
- Published
- 1998
18. Predictive Coding as a Model of Biased Competition in Visual Attention
- Author
Spratling, Michael W
- Subjects
Neuroscience: Neural Modelling, Neuroscience: Computational Neuroscience, Psychology: Perceptual Cognitive Psychology, Computer Science: Neural Nets
- Abstract
Attention acts, through cortical feedback pathways, to enhance the response of cells encoding expected or predicted information. Such observations are inconsistent with the predictive coding theory of cortical function which proposes that feedback acts to suppress information predicted by higher-level cortical regions. Despite this discrepancy, this article demonstrates that the predictive coding model can be used to simulate a number of the effects of attention. This is achieved via a simple mathematical rearrangement of the predictive coding model, which allows it to be interpreted as a form of biased competition model. Nonlinear extensions to the model are proposed that enable it to explain a wider range of data.
- Published
- 2008
19. Reconciling Predictive Coding and Biased Competition Models of Cortical Function
- Author
Spratling, Michael W
- Subjects
Neuroscience: Neural Modelling, Neuroscience: Computational Neuroscience, Psychology: Perceptual Cognitive Psychology, Computer Science: Neural Nets
- Abstract
A simple variation of the standard biased competition model is shown, via some trivial mathematical manipulations, to be identical to predictive coding. Specifically, it is shown that a particular implementation of the biased competition model, in which nodes compete via inhibition that targets the inputs to a cortical region, is mathematically equivalent to the linear predictive coding model. This observation demonstrates that these two important and influential rival theories of cortical function are minor variations on the same underlying mathematical model.
- Published
- 2008
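The linear predictive coding model referred to in both items can be stated in two coupled updates: error units compute the residual between the input and the top-down prediction, and representation units integrate that residual. A minimal sketch in this generic form, not Spratling's specific parameterization:

    import numpy as np

    def predictive_coding(x, W, n_iter=2000, lr=0.01):
        """Infer causes y of input x under the linear generative model x ~ W y.

        Iterates  e = x - W y          (prediction error)
                  y <- y + lr * W.T e  (causes move to cancel the error)
        """
        y = np.zeros(W.shape[1])
        for _ in range(n_iter):
            e = x - W @ y              # feedback suppresses predicted input
            y += lr * (W.T @ e)
        return y, e

    rng = np.random.default_rng(1)
    W = rng.standard_normal((20, 5))
    y_true = rng.standard_normal(5)
    y_hat, residual = predictive_coding(W @ y_true, W)
    print(np.allclose(y_hat, y_true, atol=1e-2))   # causes recovered, error -> 0

The biased-competition reading rearranges the same equations so that the subtraction acts as inhibition targeting the inputs to competing nodes.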
20. Corner detection in color images by multiscale combination of end-stopped cortical cells.
- Author
Würtz, R.P., Lourens, T., Germond, A., Hasler, M., and Nicoud, J.
- Subjects
Biology: Animal Cognition, Neuroscience: Computational Neuroscience, Computer Science: Artificial Intelligence, Computer Science: Machine Vision, Neuroscience: Neural Modelling
- Abstract
We present a corner-detection algorithm based on a model for end-stopping cells in the visual cortex. Shortcomings of this model are overcome by a combination over several scales. The notion of an end-stopped cell and the resulting corner detector is generalized to color channels in a biologically plausible way. The resulting corner detection method yields good results in the presence of high frequency texture, noise, varying contrast, and rounded corners. This compares favorably with known corner detectors.
- Published
- 1997
21. How training and testing histories affect generalization: a test of simple neural networks
- Author
-
Ghirlanda, Stefano and Enquist, Magnus
- Subjects
Biology: Animal Cognition ,Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Biology: Ethology ,Biology: Animal Behavior ,Psychology: Comparative Psychology ,Animal Cognition ,Computational Neuroscience ,Neural Modelling ,Ethology ,Animal Behavior ,Comparative Psychology - Abstract
We show that a simple network model of associative learning can reproduce three findings that arise from particular training and testing procedures in generalization experiments: the effects of 1) "errorless learning" and 2) extinction testing on peak shift, and 3) the central tendency effect. These findings provide a true test of the network model, which was developed to account for other phenomena, and highlight the potential of neural networks for studying phenomena that depend on sequences of experiences with many stimuli. Our results suggest that at least some such phenomena, e.g., stimulus range effects, may derive from basic mechanisms of associative memory rather than from more complex memory processes.
- Published
- 2007
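Peak shift of the kind tested here falls out of a delta-rule network over overlapping stimulus representations: extinguishing S- carves an inhibitory dip that pushes the maximal response beyond S+. A minimal sketch with illustrative parameters, not the authors' exact network:

    import numpy as np

    def stimulus(pos, n_units=100, width=5.0):
        """A stimulus is a Gaussian bump of activity over input units."""
        return np.exp(-((np.arange(n_units) - pos) ** 2) / (2 * width ** 2))

    # delta-rule training: reinforce S+ at 50, extinguish S- at 45
    w = np.zeros(100)
    for _ in range(500):
        for pos, target in [(50, 1.0), (45, 0.0)]:
            x = stimulus(pos)
            w += 0.05 * (target - w @ x) * x

    # generalization gradient: the peak is displaced away from S-
    responses = [w @ stimulus(p) for p in range(30, 71)]
    print("peak at", 30 + int(np.argmax(responses)), "(S+ = 50, S- = 45)")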
22. Binding - a proposed experiment and a model
- Author
-
Triesch, Jochen, von der Malsburg, Christoph, von der Malsburg, Christoph, von Seelen, Werner, Vorbrueggen, Jan C., and Sendhoff, Bernhard
- Subjects
Neuroscience: Behavioral Neuroscience ,Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Psychology: Psychophysics ,Behavioral Neuroscience ,Computational Neuroscience ,Neural Modelling ,Psychophysics - Abstract
The binding problem is regarded as one of today's key questions about brain function. Several solutions have been proposed, yet the issue is still controversial. The goal of this article is twofold. Firstly, we propose a new experimental paradigm requiring feature binding, the delayed binding response task. Secondly, we propose a binding mechanism employing fast reversible synaptic plasticity to express the binding between concepts. We discuss the experimental predictions of our model for the delayed binding response task.
- Published
- 1996
23. The Missing Link between Morphemic Assemblies and Behavioral Responses: a Bayesian Information-Theoretical model of lexical processing
- Author
Moscoso del Prado Martin, Dr Fermin, Kostic, Prof Aleksandar, and Filipovic-Djurdjevic, Dusica
- Subjects
Neuroscience: Neurolinguistics, Computer Science: Statistical Models, Computer Science: Language, Neuroscience: Neural Modelling, Linguistics: Computational Linguistics, Neuroscience: Computational Neuroscience, Linguistics: Semantics, Linguistics: Morphology, Computer Science: Machine Learning, Psychology: Psycholinguistics, Psychology: Cognitive Psychology, Computer Science: Neural Nets, Computer Science: Artificial Intelligence
- Abstract
We present the Bayesian Information-Theoretical (BIT) model of lexical processing: a mathematical model illustrating a novel approach to the modelling of language processes. The model shows how a neurophysiological theory of lexical processing relying on Hebbian association and neural assemblies can directly account for a variety of effects previously observed in behavioural experiments. We develop two information-theoretical measures of the distribution of usages of a morpheme or word, and use them to predict responses in three visual lexical decision datasets investigating inflectional morphology and polysemy. Our model offers a neurophysiological basis for the effects of morpho-semantic neighbourhoods. These results demonstrate how distributed patterns of activation naturally give rise to symbolic structures. We conclude by arguing that the modelling framework exemplified here is a powerful tool for integrating behavioural and neurophysiological results.
- Published
- 2006
24. The Missing Link between Morphemic Assemblies and Behavioral Responses: a Bayesian Information-Theoretical model of lexical processing
- Author
Moscoso del Prado Martin, Fermin, Kostic, Aleksandar, and Filipovic-Djurdjevic, Dusica
- Subjects
Neuroscience: Neurolinguistics, Computer Science: Statistical Models, Computer Science: Language, Neuroscience: Neural Modelling, Linguistics: Computational Linguistics, Neuroscience: Computational Neuroscience, Linguistics: Semantics, Linguistics: Morphology, Computer Science: Machine Learning, Psychology: Psycholinguistics, Psychology: Cognitive Psychology, Computer Science: Neural Nets, Computer Science: Artificial Intelligence
- Abstract
We present the Bayesian Information-Theoretical (BIT) model of lexical processing: a mathematical model illustrating a novel approach to the modelling of language processes. The model shows how a neurophysiological theory of lexical processing relying on Hebbian association and neural assemblies can directly account for a variety of effects previously observed in behavioral experiments. We develop two information-theoretical measures of the distribution of usages of a word or morpheme. These measures are calculated through unsupervised means from corpora. We show that our measures successfully predict responses in three visual lexical decision datasets investigating the processing of inflectional morphology in Serbian and English, and the effects of polysemy and homonymy in English. We discuss how our model provides a neurophysiological grounding for the facilitatory and inhibitory effects of different types of lexical neighborhoods. In addition, our results show how, under a model based on neural assemblies, distributed patterns of activation naturally result in the emergence of discrete symbol-like structures. Therefore, the BIT model offers a point of reconciliation in the debate between distributed connectionist and discrete localist models. Finally, we argue that the modelling framework exemplified by the BIT model is a powerful tool for integrating the different levels of description of the human language processing system.
- Published
- 2006
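The two usage measures are not spelled out in either abstract. As hypothetical stand-ins, two standard information-theoretic quantities over a usage distribution, its entropy and its divergence from a class norm, can be computed directly from corpus counts:

    import numpy as np

    def usage_entropy(counts):
        """Entropy (bits) of a word's usage distribution, e.g. the corpus
        frequencies of its inflectional variants."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log2(p)).sum())

    def usage_divergence(counts, class_counts):
        """Kullback-Leibler divergence (bits) of a word's usage distribution
        from the average distribution of its class (assumed nonzero)."""
        p = np.asarray(counts, dtype=float); p /= p.sum()
        q = np.asarray(class_counts, dtype=float); q /= q.sum()
        m = p > 0
        return float((p[m] * np.log2(p[m] / q[m])).sum())

    # toy paradigm: (singular, plural) counts for one noun vs. the class norm
    print(usage_entropy([900, 100]))                 # skewed usage: low entropy
    print(usage_divergence([900, 100], [600, 400]))  # atypical: high divergence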
25. Symbols are not uniquely human
- Author
-
Ribeiro, Sidarta, Loula, Angelo, Araújo, Ivan, Gudwin, Ricardo, and Queiroz, Joao
- Subjects
Computer Science: Language ,Biology: Ethology ,Neuroscience: Computational Neuroscience ,Neuroscience: Behavioral Neuroscience ,Biology: Theoretical Biology ,Biology: Animal Behavior ,Neuroscience: Neurolinguistics ,Philosophy: Philosophy of Language ,Neuroscience: Neural Modelling ,Biology: Animal Cognition ,Psychology: Comparative Psychology ,Computer Science: Artificial Intelligence ,Linguistics: Learnability ,Philosophy: Epistemology ,Language ,Ethology ,Computational Neuroscience ,Behavioral Neuroscience ,Theoretical Biology ,Animal Behavior ,Neurolinguistics ,Philosophy of Language ,Neural Modelling ,Animal Cognition ,Comparative Psychology ,Artificial Intelligence ,Learnability ,Epistemology - Abstract
Modern semiotics is a branch of logic that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being stably and correctly associated with a predator call. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of "lying" tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
- Published
- 2006
26. A feedback model of visual attention
- Author
-
Spratling, M W and Johnson, M H
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Psychology: Perceptual Cognitive Psychology ,Neural Modelling ,Computational Neuroscience ,Neural Nets ,Perceptual Cognitive Psychology - Abstract
Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
- Published
- 2004
27. Exploring the functional significance of dendritic inhibition in cortical pyramidal cells
- Author
-
Spratling, M W and Johnson, M H
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neural Modelling ,Computational Neuroscience ,Neural Nets - Abstract
Inhibitory synapses contacting the soma and axon initial segment are commonly presumed to participate in shaping the response properties of cortical pyramidal cells. Such an inhibitory mechanism has been explored in numerous computational models. However, the majority of inhibitory synapses target the dendrites of pyramidal cells, and recent physiological data suggests that this dendritic inhibition affects tuning properties. We describe a model that can be used to investigate the role of dendritic inhibition in the competition between neurons. With this model we demonstrate that dendritic inhibition significantly enhances the computational and representational properties of neural networks.
- Published
- 2003
28. Adaptation in the Corticothalamic Loop: Computational Prospects of Tuning the Senses
- Author
-
Hillenbrand, Dr. Ulrich and van Hemmen, Prof. Dr. J. Leo
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Biology: Theoretical Biology ,Neural Modelling ,Computational Neuroscience ,Theoretical Biology - Abstract
The present article discusses computational hypotheses on corticothalamic feedback and modulation of cortical response properties. We have recently proposed the two phenomena to be related, hypothesizing that neuronal velocity preference in the visual cortex is altered by feedback to the lateral geniculate nucleus. We now contrast the common view that response adaptation to stimuli subserves a function of redundancy reduction with the idea that it may enhance cortical representation of objects. Our arguments lead to the concept that the corticothalamic loop is involved in reducing sensory input to behaviorally relevant aspects, a pre-attentive gating.
- Published
- 2002
29. Cortical region interactions and the functional role of apical dendrites
- Author
-
Spratling, M. W.
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neural Modelling ,Computational Neuroscience ,Neural Nets - Abstract
The basal and distal apical dendrites of pyramidal cells occupy distinct cortical layers and are targeted by axons originating in different cortical regions. Hence, apical and basal dendrites receive information from distinct sources. Physiological evidence suggests that this anatomically observed segregation of input sources may have functional significance. This possibility has been explored in various connectionist models that employ neurons with functionally distinct apical and basal compartments. A neuron in which separate sets of inputs can be integrated independently has the potential to operate in a variety of ways which are not possible for the conventional model of a neuron in which all inputs are treated equally. This article thus considers how functionally distinct apical and basal dendrites can contribute to the information processing capacities of single neurons and, in particular, how information from different cortical regions could have disparate effects on neural activity and learning.
- Published
- 2002
30. Pre-integration lateral inhibition enhances unsupervised learning
- Author
-
Spratling, M. W. and Johnson, M. H.
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neural Modelling ,Computational Neuroscience ,Neural Nets - Abstract
A large and influential class of neural network architectures use post-integration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through pre-integration lateral inhibition, does provide appropriate coding properties and can be used to efficiently learn such representations. Furthermore, this architecture is consistent with both neuro-anatomical and neuro-physiological data. We thus argue that pre-integration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
- Published
- 2002
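The contrast drawn here: with post-integration inhibition, nodes suppress one another's outputs after summing their inputs; with pre-integration inhibition, an active node suppresses its rivals' input lines. A generic sketch of the latter, not the authors' exact update rule:

    import numpy as np

    def pre_integration_inhibition(x, W, n_iter=50):
        """Nodes compete for inputs: each input line to node j is suppressed
        in proportion to how strongly more active nodes already claim it."""
        y = np.zeros(W.shape[0])
        for _ in range(n_iter):
            for j in range(len(y)):
                others = np.delete(np.arange(len(y)), j)
                inhibition = np.max(W[others] * y[others, None], axis=0)
                y[j] = W[j] @ np.clip(x - inhibition, 0, None)
        return y

    W = np.array([[0.5, 0.5, 0.0],      # node 0 codes inputs {0, 1}
                  [0.0, 0.5, 0.5]])     # node 1 codes inputs {1, 2}
    x = np.array([1.0, 1.0, 0.0])       # matches node 0's pattern only
    print(pre_integration_inhibition(x, W))  # node 0 wins, node 1 suppressed
    print(W @ x)                             # plain feedforward: both respond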
31. Dopaminergic Regulation of Neuronal Circuits in Prefrontal Cortex
- Author
-
Scheler, Gabriele
- Subjects
Psychology: Cognitive Psychology ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neuroscience: Neural Modelling ,Neuroscience: Neuropsychology ,Neuroscience: Neuropsychiatry ,Cognitive Psychology ,Computational Neuroscience ,Neural Nets ,Neural Modelling ,Neuropsychology ,Neuropsychiatry - Abstract
Neuromodulators, like dopamine, have considerable influence on the processing capabilities of neural networks. This has for instance been shown in the working memory functions of prefrontal cortex, which may be regulated by altering the dopamine level. Experimental work provides evidence on the biochemical and electrophysiological actions of dopamine receptors, but there are few theories concerning their significance for computational properties (ServanPrintzCohen90, Hasselmo94). We point to experimental data on neuromodulatory regulation of temporal properties of excitatory neurons and depolarization of inhibitory neurons, and suggest computational models employing these effects. Changes in membrane potential may be modelled by the firing threshold, and temporal properties by a parameterization of neuronal responsiveness according to the preceding spike interval. We apply these concepts to two examples using spiking neural networks. In the first case, there is a change in the input synchronization of neuronal groups, which leads to changes in the formation of synchronized neuronal ensembles. In the second case, the threshold of interneurons influences lateral inhibition, and the switch from a winner-take-all network to a parallel feedforward mode of processing. Both concepts are interesting for the modeling of cognitive functions and may have explanatory power for behavioral changes associated with dopamine regulation.
- Published
- 2001
32. Does Corticothalamic Feedback Control Cortical Velocity Tuning?
- Author
-
Hillenbrand, Dr. Ulrich and van Hemmen, Prof. Dr. J. Leo
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Biology: Theoretical Biology ,Neural Modelling ,Computational Neuroscience ,Theoretical Biology - Abstract
The thalamus is the major gate to the cortex and its contribution to cortical receptive field properties is well established. Cortical feedback to the thalamus is, in turn, the anatomically dominant input to relay cells, yet its influence on thalamic processing has been difficult to interpret. For an understanding of complex sensory processing, detailed concepts of the corticothalamic interplay need yet to be established. To study corticogeniculate processing in a model, we draw on various physiological and anatomical data concerning the intrinsic dynamics of geniculate relay neurons, the cortical influence on relay modes, lagged and nonlagged neurons, and the structure of visual cortical receptive fields. In extensive computer simulations we elaborate the novel hypothesis that the visual cortex controls via feedback the temporal response properties of geniculate relay cells in a way that alters the tuning of cortical cells for speed.
- Published
- 2001
33. Dendritic inhibition enhances neural coding properties.
- Author
-
Spratling, M. W and Johnson, M. H.
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neural Modelling ,Computational Neuroscience ,Neural Nets - Abstract
The presence of a large number of inhibitory contacts at the soma and axon initial segment of cortical pyramidal cells has inspired a large and influential class of neural network model which use post-integration lateral inhibition as a mechanism for competition between nodes. However, inhibitory synapses also target the dendrites of pyramidal cells. The role of this dendritic inhibition in competition between neurons has not previously been addressed. We demonstrate, using a simple computational model, that such pre-integration lateral inhibition provides networks of neurons with useful representational and computational properties which are not provided by post-integration inhibition.
- Published
- 2001
34. Tilt Aftereffects in a Self-Organizing Model of the Primary Visual Cortex
- Author
-
Bednar, James A. and Miikkulainen, Risto
- Subjects
Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neuroscience: Neural Modelling ,Psychology: Psychophysics ,Computational Neuroscience ,Neural Nets ,Neural Modelling ,Psychophysics - Abstract
RF-LISSOM, a self-organizing model of laterally connected orientation maps in the primary visual cortex, was used to study the psychological phenomenon known as the tilt aftereffect. The same self-organizing processes that are responsible for the long-term development of the map are shown to result in tilt aftereffects over short time scales in the adult. The model permits simultaneous observation of large numbers of neurons and connections, making it possible to relate high-level phenomena to low-level events, which is difficult to do experimentally. The results give detailed computational support for the long-standing conjecture that the direct tilt aftereffect arises from adaptive lateral interactions between feature detectors. They also make a new prediction that the indirect effect results from the normalization of synaptic efficacies during this process. The model thus provides a unified computational explanation of self-organization and both the direct and indirect tilt aftereffect in the primary visual cortex.
- Published
- 2000
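A toy caricature of the proposed mechanism, adaptive lateral inhibition between co-active orientation detectors followed by population decoding, reproduces the direct (repulsive) aftereffect. This is an illustration with invented parameters, not the RF-LISSOM model:

    import numpy as np

    prefs = np.linspace(-90, 90, 181)          # preferred orientations (deg)

    def tuning(theta, width=15.0):
        d = (prefs - theta + 90) % 180 - 90    # circular orientation distance
        return np.exp(-d ** 2 / (2 * width ** 2))

    def perceived(theta, L):
        t = tuning(theta)
        r = np.clip(t - L @ t, 0, None)        # response after lateral inhibition
        ang = np.deg2rad(2 * prefs)            # decode on the double-angle circle
        return np.rad2deg(np.arctan2(r @ np.sin(ang), r @ np.cos(ang))) / 2

    # Hebbian strengthening of lateral inhibition while adapting at 0 degrees
    r_adapt = tuning(0.0)
    L = 0.002 * np.outer(r_adapt, r_adapt)

    print("test at +10 deg, before: %.2f" % perceived(10.0, 0 * L))
    print("test at +10 deg, after:  %.2f" % perceived(10.0, L))   # repelled from 0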
35. Spatiotemporal adaptation through corticothalamic loops: A hypothesis
- Author
-
Hillenbrand, Dr. Ulrich and van Hemmen, Prof. Dr. J. Leo
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Biology: Theoretical Biology ,Neural Modelling ,Computational Neuroscience ,Theoretical Biology - Abstract
The thalamus is the major gate to the cortex and its control over cortical responses is well established. Cortical feedback to the thalamus is, in turn, the anatomically dominant input to relay cells, yet its influence on thalamic processing has been difficult to interpret. For an understanding of complex sensory processing, detailed concepts of the corticothalamic interplay need yet to be established. Drawing on various physiological and anatomical data, we elaborate the novel hypothesis that the visual cortex controls the spatiotemporal structure of cortical receptive fields via feedback to the lateral geniculate nucleus. Furthermore, we present and analyze a model of corticogeniculate loops that implements this control, and exhibit its ability of object segmentation by statistical motion analysis in the visual field.
- Published
- 2000
36. The What and Why of Binding: The Modeler's Perspective
- Author
-
von der Malsburg, Christoph
- Subjects
Neuroscience: Computational Neuroscience ,Computer Science: Dynamical Systems ,Computer Science: Neural Nets ,Neuroscience: Neural Modelling ,Philosophy: Philosophy of Mind ,Computational Neuroscience ,Dynamical Systems ,Neural Nets ,Neural Modelling ,Philosophy of Mind - Abstract
In attempts to formulate a computational understanding of brain function, one of the fundamental concerns is the data structure by which the brain represents information. For many decades, a conceptual framework has dominated the thinking of both brain modelers and neurobiologists. That framework is referred to here as "classical neural networks." It is well supported by experimental data, although it may be incomplete. A characterization of this framework will be offered in the next section. Difficulties in modeling important functional aspects of the brain on the basis of classical neural networks alone have led to the recognition that another, general mechanism must be invoked to explain brain function. That mechanism I call "binding." Binding by neural signal synchrony had been mentioned several times in the literature (Legéndy, 1970; Milner, 1974) before it was fully formulated as a general phenomenon (von der Malsburg, 1981). Although experimental evidence for neural synchrony was soon found, the idea was largely ignored for many years. Only recently has it become a topic of animated discussion. In what follows, I will summarize the nature and the roots of the idea of binding, especially of temporal binding, and will discuss some of the objections raised against it.
- Published
- 1999
37. The role of terminators and occlusion cues in motion integration and segmentation: a neural network model
- Author
-
Liden, Lars H. and Pack, Christopher C.
- Subjects
Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Computational Neuroscience ,Neural Modelling - Abstract
The perceptual interaction of terminators and occlusion cues with the functional processes of motion integration and segmentation is examined using a computational model. Integration is necessary to overcome noise and the inherent ambiguity in locally measured motion direction (the aperture problem). Segmentation is required to detect the presence of motion discontinuities and to prevent spurious integration of motion signals between objects with different trajectories. Terminators are used for motion disambiguation, while occlusion cues are used to suppress motion noise at points where objects intersect. The model illustrates how competitive and cooperative interactions among cells carrying out these functions can account for a number of perceptual effects, including the chopsticks illusion and the occluded diamond illusion. Possible links to the neurophysiology of the middle temporal visual area (MT) are suggested.
- Published
- 1999
38. Correlations and the encoding of information in the nervous system
- Author
-
Panzeri, S., Schultz, S. R., Treves, A., and Rolls, E. T.
- Subjects
Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Neuroscience: Neurophysiology ,Computational Neuroscience ,Neural Modelling ,Neurophysiology - Abstract
Is the information transmitted by an ensemble of neurons determined solely by the number of spikes fired by each cell, or do correlations in the emission of action potentials also play a significant role? We derive a simple formula which enables this question to be answered rigorously for short timescales. The formula quantifies the corrections to the instantaneous information rate which result from correlations in spike emission between pairs of neurons. The mutual information that the ensemble of neurons conveys about external stimuli can thus be broken down into firing rate and correlation components. This analysis provides fundamental constraints upon the nature of information coding - showing that over short timescales, correlations cannot dominate information representation, that stimulus-independent correlations may lead to synergy (where the neurons together convey more information than they would considered independently), but that only certain combinations of the different sources of correlation result in significant synergy rather than in redundancy or in negligible effects. This analysis leads to a new quantification procedure which is directly applicable to simultaneous multiple neuron recordings.
- Published
- 1999
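The synergy case mentioned here, where stimulus-dependent correlations carry information that neither cell does alone, can be checked numerically on a toy ensemble (a direct computation, not the paper's short-timescale series expansion):

    import numpy as np

    def mutual_info(p_joint):
        """Mutual information (bits) from a joint distribution P(s, r),
        given as a 2-D array indexed by [stimulus, response]."""
        ps = p_joint.sum(axis=1, keepdims=True)
        pr = p_joint.sum(axis=0, keepdims=True)
        nz = p_joint > 0
        return float((p_joint[nz] * np.log2(p_joint[nz] / (ps @ pr)[nz])).sum())

    # Two binary neurons, two equiprobable stimuli. Each neuron alone fires
    # with p = 0.5 under both stimuli (uninformative), but the correlation is
    # stimulus-dependent: same response under s=0, opposite under s=1.
    p = np.array([[0.25, 0.00, 0.00, 0.25],    # s=0, columns (r1,r2)=00,01,10,11
                  [0.00, 0.25, 0.25, 0.00]])   # s=1
    p1 = np.stack([p[:, [0, 1]].sum(1), p[:, [2, 3]].sum(1)], axis=1)  # P(s, r1)
    p2 = np.stack([p[:, [0, 2]].sum(1), p[:, [1, 3]].sum(1)], axis=1)  # P(s, r2)
    print(mutual_info(p1), mutual_info(p2))   # 0.0 and 0.0 bits
    print(mutual_info(p))                     # 1.0 bit: pure synergy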
39. From Neurons to Brain: Adaptive Self-Wiring of Neurons
- Author
-
Segev, Ronen and Ben-Jacob, Eshel
- Subjects
Neuroscience: Biophysics ,Biology: Theoretical Biology ,Neuroscience: Computational Neuroscience ,Computer Science: Dynamical Systems ,Neuroscience: Neuroanatomy ,Neuroscience: Neural Modelling ,Neuroscience: Neurophysiology ,Biophysics ,Theoretical Biology ,Computational Neuroscience ,Dynamical Systems ,Neuroanatomy ,Neural Modelling ,Neurophysiology - Abstract
During embryonic morphogenesis, a collection of individual neurons turns into a functioning network with unique capabilities. Only recently has this most staggering example of an emergent process in the natural world begun to be studied. Here we propose a navigational strategy for neurite growth cones, based on sophisticated chemical signaling. We further propose that the embryonic environment (the neurons and the glia cells) acts as an excitable medium in which concentric and spiral chemical waves are formed. Together with the navigation strategy, the chemical waves provide a mechanism for the communication, regulation, and control required for the adaptive self-wiring of neurons.
- Published
- 1998
40. Learning attractors in an asynchronous, stochastic electronic neural network
- Author
-
Del Giudice, P., Fusi, S., Badoni, D., Dante, V., and Amit, Daniel J.
- Subjects
Neuroscience: Computational Neuroscience ,Computer Science: Machine Learning ,Neuroscience: Neural Modelling ,Computational Neuroscience ,Machine Learning ,Neural Modelling - Abstract
LANN27 is an electronic device implementing in discrete electronics a fully connected (full feedback) network of 27 neurons and 351 plastic synapses with stochastic Hebbian learning. Both neurons and synapses are dynamic elements, with two time constants: fast for neurons and slow for synapses. Learning (the synaptic dynamics) is analogue and is driven in a Hebbian way by neural activities. Long-term memorization takes place on a discrete set of synaptic efficacies and is effected in a stochastic manner. The intense feedback between the nonlinear neural elements, via the learned synaptic structure, creates in an organic way a set of attractors for the collective retrieval dynamics of the neural system, akin to Hebbian learned reverberations. The resulting structure of the attractors is a record of the large-scale statistics in the uncontrolled, incoming flow of stimuli. As the statistics in the stimulus flow change significantly, the attractors slowly follow, and the network behaves as a palimpsest: the old is gradually replaced by the new. Moreover, the slow learning creates attractors which render the network a prototype extractor: entire clouds of stimuli, noisy versions of a prototype used in training, all retrieve the attractor corresponding to the prototype. Here we describe the process of studying the collective dynamics of the network before, during and following learning, which is rendered complex by the richness of the possible stimulus streams and the large dimensionality of the space of states of the network. We propose sampling techniques and modes of representation for the outcome.
- Published
- 1998
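A minimal sketch of the mechanism class described above: an analogue internal synaptic variable driven in a Hebbian way, with long-term memory stored on a discrete set of efficacies. The update rule and all parameters below are invented for illustration; stochasticity enters through the noisy stimulus stream, as in the prototype-extraction behaviour the abstract describes.

```python
# Illustrative sketch (invented parameters): stochastic Hebbian learning
# with an analogue internal variable and discrete long-term efficacies.
import numpy as np

rng = np.random.default_rng(0)
N = 27                                  # neurons, as in LANN27
u = rng.random((N, N))                  # analogue internal synaptic variable in [0, 1]
J = (u > 0.5).astype(int)               # discrete long-term efficacies: 0 (weak) / 1 (strong)

def present(pattern, lr=0.05):
    """One Hebbian presentation: potentiate where pre and post are both
    active, depress where pre fires onto a silent post, then threshold
    the analogue variable onto the discrete set of efficacies."""
    global J, u
    pre, post = pattern[None, :], pattern[:, None]
    u = np.clip(u + lr * pre * (2 * post - 1), 0.0, 1.0)
    J = (u > 0.5).astype(int)

proto = (np.arange(N) < N // 2).astype(int)     # one prototype stimulus
for _ in range(200):                            # a noisy stimulus stream
    noisy = np.where(rng.random(N) < 0.1, 1 - proto, proto)
    present(noisy)
print("fraction of strong synapses:", J.mean())
```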
41. SIMULATION IN NEUROBIOLOGY -- THEORY OR EXPERIMENT?
- Author
-
Amit, Daniel J.
- Subjects
Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Neuroscience: Neurophysiology ,Computational Neuroscience ,Neural Modelling ,Neurophysiology - Abstract
Investigation in neurophysiology usually involves measurements of large population-average signals or small-sample recordings. There is an underlying assumption that the observations express the activity of large groups of similarly acting neurons, the result of a bottom-up scenario in which individual cells, via their synaptic interactions, give rise to the large-scale phenomena. The connection between the levels must be provided by theory, which must also identify the relevant variables to observe. It is suggested that between the experiment and the full theory there is a creative, mixed role for simulation: both experimental and theoretical. A simulation exhibits complex dynamics and hence serves as an empirical testing ground for theoretical tools, yet, because its behavior is fully controlled, it can generate predictions about the biological system.
- Published
- 1998
42. Towards a unified model of cortical computation II: From control architecture to a model of consciousness
- Author
-
Lorincz, Andras
- Subjects
Computer Science: Dynamical Systems ,Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Machine Learning ,Philosophy: Philosophy of Mind ,Computer Science: Neural Nets ,Computer Science: Artificial Intelligence ,Dynamical Systems ,Neural Modelling ,Computational Neuroscience ,Machine Learning ,Philosophy of Mind ,Neural Nets ,Artificial Intelligence - Abstract
The recently introduced Static and Dynamic State (SDS) Feedback control scheme together with its modified form, the Data Compression and Reconstruction (DCR) architecture that performs pseudoinverse computation, suggests a unified model of cortical processing including consciousness. The constraints of the model are outlined here, and the features of the cortical architecture that are suggested, and sometimes dictated, by these constraints are listed. Constraints are imposed on cortical layers, e.g., (1) the model prescribes a connectivity substructure that is shown to fit the main properties of the `basic neural circuit' of the cerebral cortex (Shepherd and Koch 1990; Douglas and Martin 1990; in: The Synaptic Organization of the Brain, Oxford University Press, 1990), and (2) the stability requirements of the pseudoinverse method offer an explanation for the columnar organization of the cortex. Constraints are also imposed on the hierarchy of cortical areas, e.g., the proposed control architecture requires computation of the control variables belonging to both the `desired' and the `experienced' moves, as well as a `sign-proper' separation of feedback channels, which fits known properties of the basal ganglia -- thalamocortical loops (Lorincz, 1997). An outline is given as to how the DCR scheme can be extended towards a model of consciousness that can deal with the `homunculus fallacy' by resolving the fallacy and saving the homunculus as an inherited and learnt partially ordered list of preferences.
- Published
- 1997
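The pseudoinverse computation at the heart of the DCR architecture can be illustrated with a minimal reconstruction-error feedback loop (an assumed textbook formulation, not Lorincz's exact scheme): the internal representation is driven by the mismatch between the input and its top-down reconstruction, and settles on the least-squares, i.e. pseudoinverse, solution.

```python
# Sketch: pseudoinverse computation via reconstruction-error feedback
# (assumed textbook formulation; weights and sizes are invented).
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((20, 5))        # top-down ("reconstruction") weights
x = rng.standard_normal(20)             # input to be encoded

h = np.zeros(5)                         # internal representation
eta = 0.02                              # small step size, chosen for stability
for _ in range(3000):
    error = x - W @ h                   # mismatch between input and reconstruction
    h += eta * (W.T @ error)            # error fed back to update the representation

print("max |h - pinv(W) @ x| =", np.abs(h - np.linalg.pinv(W) @ x).max())
```

The fixed point satisfies W.T @ (x - W @ h) = 0, which is exactly the normal equation solved by the pseudoinverse when W has full column rank.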
43. A paradigmatic working memory (attractor) cell in IT cortex
- Author
-
Amit, Daniel J., Fusi, Stefano, and Yakovlev, Volodya
- Subjects
Neuroscience: Behavioral Neuroscience ,Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Neuroscience: Neurophysiology ,Behavioral Neuroscience ,Computational Neuroscience ,Neural Modelling ,Neurophysiology - Abstract
We discuss paradigmatic properties of activity of single cells comprising an attractor -- a developed stable delay activity distribution. To demonstrate these properties and a methodology for measuring their values, we present a detailed account of the spike activity recorded from one single cell in infero-temporal (IT) cortex of a monkey performing a delayed match-to-sample (DMS) task of visual images. In particular, we discuss and exemplify: 1. the relation between spontaneous activity and activity immediately preceding the first stimulus in each trial during a series of DMS trials; 2. the effect on the visual response (i.e. activity {\bf during} stimulation) of stimulus degradation (moving in the space of IT afferents); 3. the behaviour of the delay activity (i.e. activity {\bf following} visual stimulation) under stimulus degradation (attractor dynamics, and the basin of attraction); and, 4. the propagation of information between trials -- the vehicle for the formation of (contextual) correlations by learning a fixed stimulus sequence, as found in Miyashita 1988. In the process of the discussion and demonstration, we expose effective tools for the identification and characterisation of attractor dynamics.
- Published
- 1997
44. DYNAMICS OF A RECURRENT NETWORK OF SPIKING NEURONS BEFORE AND FOLLOWING LEARNING
- Author
-
Amit, Daniel J. and Brunel, Nicolas
- Subjects
Neuroscience: Computational Neuroscience ,Computer Science: Statistical Models ,Neuroscience: Neural Modelling ,Neuroscience: Neuropsychology ,Computational Neuroscience ,Statistical Models ,Neural Modelling ,Neuropsychology - Abstract
Extensive simulations of large recurrent networks of integrate-and-fire excitatory and inhibitory neurons in realistic cortical conditions (before and after Hebbian unsupervised learning of uncorrelated stimuli) exhibit a rich phenomenology of stochastic neural spike dynamics, and in particular, coexistence between two types of stable states: spontaneous activity, upon stimulation by an unlearned stimulus; and `working memory' states strongly correlated with learned stimuli. Firing rates have very wide distributions, due to the variability in the connectivity from neuron to neuron. ISI histograms are exponential, except for small intervals. Thus the spike emission processes are well approximated by a Poisson process. The variability of the spike emission process is effectively controlled by the magnitude of the post-spike reset potential relative to the mean depolarization of the cell. Cross-correlations (CC) exhibit a central peak near zero delay, flanked by damped oscillations. The magnitude of the central peak in the CCs depends both on the probability that a spike emitted by a neuron affects another randomly chosen neuron and on firing rates. It increases when average rates decrease. Individual CCs depend very weakly on the synaptic interactions between the pairs of neurons. The dependence of individual CCs on the rates of the pair of neurons is in agreement with experimental data. The distribution of firing rates among neurons is in very good agreement with a simple theory, indicating that correlations between spike emission processes in the network are effectively small.
- Published
- 1997
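The following toy script indicates the shape of such a simulation (sizes, weights, and drive are invented and far smaller than the realistic cortical conditions of the paper): sparsely connected excitatory and inhibitory integrate-and-fire neurons, with the post-spike reset that the abstract identifies as controlling spike-train variability.

```python
# Toy sketch of a sparse excitatory/inhibitory integrate-and-fire network
# (all sizes and parameters invented; far smaller than the paper's networks).
import numpy as np

rng = np.random.default_rng(2)
N_E, N_I = 400, 100                     # excitatory / inhibitory counts
N = N_E + N_I
dt, tau = 0.1e-3, 20e-3                 # time step and membrane constant (s)
theta, V_reset = 20.0, 10.0             # threshold and post-spike reset (mV)

conn = rng.random((N, N)) < 0.1         # sparse 10% connectivity
J = np.where(conn, 0.2, 0.0)            # excitatory PSP amplitude (mV)
J[:, N_E:] *= -5.0                      # inhibitory synapses, relative strength 5

V = rng.uniform(0.0, theta, N)
spikes = np.zeros(N, dtype=bool)
spike_count = np.zeros(N)

T = 0.5                                 # simulate 500 ms
for _ in range(int(T / dt)):
    ext = rng.poisson(0.18, N)          # external Poisson-like drive per step
    V += dt / tau * (-V) + J @ spikes + 0.4 * ext
    spikes = V >= theta
    V[spikes] = V_reset                 # the reset level controls ISI variability
    spike_count += spikes
print("mean firing rate (Hz):", spike_count.mean() / T)
```

With inhibition outweighing recurrent excitation, as here, the network sits in the fluctuation-driven regime of low, irregular firing that the abstract describes for spontaneous activity.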
45. Slowness: An Objective for Spike-Timing-Dependent Plasticity?
- Author
-
Sprekeler, Henning, Michaelis, Christian, and Wiskott, Laurenz
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Biology: Theoretical Biology ,Neural Modelling ,Computational Neuroscience ,Theoretical Biology - Abstract
Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation-, rotation-, and other invariances in a model of the visual system, to the learning of complex cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus. In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window. The shape as well as the timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation for the functional role of spike-timing-dependent plasticity in physiological neurons.
- Published
- 2006
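For reference, the algorithmic core of linear SFA in its standard textbook form (the spiking and STDP analysis is the paper's contribution and is not reproduced here): whiten the signal, then take the unit-variance direction whose temporal derivative has minimal variance.

```python
# Sketch of linear Slow Feature Analysis (standard textbook formulation).
import numpy as np

def linear_sfa(x):
    """x: (T, n) signal. Returns the weight vector of the slowest feature."""
    x = x - x.mean(axis=0)
    # Whitening: rotate and scale so the covariance becomes the identity.
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    S = E / np.sqrt(d)                  # whitening matrix, shape (n, n)
    z = x @ S
    # Slowness objective: minimize the variance of the difference signal.
    dd, F = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return S @ F[:, 0]                  # slowest direction, back in input space

# Example: a slow sine hidden in two noisy-looking fast mixtures.
t = np.linspace(0, 20 * np.pi, 5000)
slow, fast = np.sin(0.05 * t), np.sin(7.0 * t)
x = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
print("slow-feature weights:", linear_sfa(x))
```

The recovered weights isolate the slow component (here, the equal-weight mixture), which is the input-output behaviour the paper shows a suitably windowed STDP rule can learn in spiking neurons.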
46. Dynamics of the brain at global and microscopic scales: Neural networks and the EEG
- Author
-
Wright, J.J. and Liley, D.T.L
- Subjects
Neuroscience: Brain Imaging ,Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Brain Imaging ,Computational Neuroscience ,Neural Modelling - Abstract
There is some complementarity between models for the origin of the electroencephalogram (EEG) and neural network models for information storage in brainlike systems. From the EEG models of Freeman, of Nunez, and of the authors' group we argue that the wavelike processes revealed in the EEG exhibit linear and near-equilibrium dynamics at macroscopic scale, despite extremely nonlinear - probably chaotic - dynamics at microscopic scale. Simulations of cortical neuronal interactions at global and microscopic scales are then presented. The simulations depend on anatomical and physiological estimates of synaptic densities, coupling symmetries, synaptic gain, dendritic time constants, and axonal delays. It is shown that the frequency content, wave velocities, frequency/wavenumber spectra, and response to cortical activation of the electrocorticogram (ECoG) can be reproduced by a "lumped" simulation treating small cortical areas as single-function units. The corresponding cellular neural network simulation has properties that include those of attractor neural networks proposed by Amit and by Parisi. Within the simulations at both scales, sharp transitions occur between low and high cell firing rates. These transitions may form a basis for neural interactions across scale. To maintain overall cortical dynamics in the normal low firing-rate range, interactions between the cortex and the subcortical systems are required to prevent runaway global excitation. Thus, the interaction of cortex and subcortex via corticostriatal and related pathways may partly regulate global dynamics by a principle analogous to adiabatic control of artificial neural networks.
- Published
- 1996
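A minimal caricature of the "lumped" level of description (emphatically not the authors' model; every parameter below is invented): a ring of cortical patches, each an excitatory/inhibitory pair with sigmoid activation and weak nearest-neighbour coupling, in which a brief local activation propagates as a damped wave.

```python
# Caricature of a lumped cortical-patch simulation (invented parameters).
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

n, dt, tau = 60, 1e-3, 10e-3            # patches, step (s), time constant (s)
Ve = np.zeros(n)                        # excitatory state of each patch
Vi = np.zeros(n)                        # inhibitory state of each patch
trace = []
for step in range(2000):                # 2 s of simulated time
    drive = np.zeros(n)
    drive[0] = 2.0 if step < 50 else 0.0        # brief activation of one patch
    nbr = np.roll(Ve, 1) + np.roll(Ve, -1)      # lateral spread along the ring
    dVe = (-Ve + 1.5 * sigmoid(Ve) - 2.0 * sigmoid(Vi) + 0.2 * nbr + drive) / tau
    dVi = (-Vi + 2.0 * sigmoid(Ve)) / tau
    Ve, Vi = Ve + dt * dVe, Vi + dt * dVi
    trace.append(Ve.copy())
print("peak response at the opposite patch:", max(row[n // 2] for row in trace))
```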
47. Binding in Models of Perception and Brain Function
- Author
-
von der Malsburg, Christoph
- Subjects
Biology: Animal Cognition ,Biology: Theoretical Biology ,Neuroscience: Computational Neuroscience ,Computer Science: Dynamical Systems ,Neuroscience: Neural Modelling ,Animal Cognition ,Theoretical Biology ,Computational Neuroscience ,Dynamical Systems ,Neural Modelling - Abstract
The development of the issue of binding as fundamental to neural dynamics has made possible recent advances in the modeling of difficult problems of perception and brain function. Among them are perceptual segmentation, invariant pattern recognition, and one-shot learning. The longer-term conceptual developments that have led to this success are also reviewed.
- Published
- 1995
48. Temporal fluctuations in coherence of brain waves
- Author
-
Bullock, T.H., McClune, M.C., Achimowicz, J.Z., Iragui-Madoz, V.J., Duckrow, R.B., and Spencer, S.S.
- Subjects
Neuroscience: Brain Imaging ,Neuroscience: Computational Neuroscience ,Neuroscience: Neural Modelling ,Neuroscience: Neurology ,Brain Imaging ,Computational Neuroscience ,Neural Modelling ,Neurology - Abstract
As a measure of dynamical structure, short-term fluctuations of coherence between 0.3 and 100 Hz in the electroencephalogram (EEG) of humans were studied from recordings made by chronic subdural macroelectrodes 5-10 mm apart, on temporal, frontal and parietal lobes, and from intracranial probes deep in the temporal lobe, including the hippocampus, during sleep, alert and seizure states. The time series of coherence between adjacent sites, calculated every second or less often, varies widely in stability over time; sometimes it is stable for half a minute or more. Within two-minute samples, coherence commonly fluctuates by a factor of up to 2 or 3, in all bands, on the time scale of seconds to tens of seconds. The power spectrum of the time series of these fluctuations is broad, extending to 0.02 Hz or slower, and is weighted toward the slower frequencies; little power is faster than 0.5 Hz. Some records show conspicuous swings with a preferred duration of 5-15 s, either irregularly or quasi-rhythmically with a broad peak around 0.1 Hz. Periodicity is not statistically significant in most records. We have not found a consistent difference between lobes of the brain, subdural and depth electrodes, or sleeping and waking states in our sampling. Seizures generally raise the mean coherence in all frequencies and may reduce the fluctuations by a ceiling effect. The coherence time series of different bands are positively correlated (0.45 overall); significant non-independence extends for at least two octaves. Coherence fluctuations are quite local; the time series of adjacent electrodes is correlated with that of the nearest-neighbor pairs (10 mm) with a coefficient averaging ca. 0.4, falling to ca. 0.2 for neighbors-but-one (20 mm) and to < 0.1 for neighbors-but-two (30 mm). The evidence indicates fine structure in time and space: a dynamic and local determination of this measure of cooperativity. Widely separated frequencies tending to fluctuate together exclude independent oscillators as the general or usual basis of the EEG, although a few rhythms are well known under special conditions. Broadband events may be the more usual generators. Loci only a few mm apart can fluctuate widely in seconds, either in parallel or independently. Scalp EEG coherence cannot be predicted from subdural or deep recordings, or vice versa, and intracortical microelectrodes show still greater coherence fluctuation in space and time [1]. Widely used computations of chaos and dimensionality, made upon data from scalp or even subdural or depth electrodes, even when reproducible in successive samples, cannot be considered representative of the brain or the given structure or brain state, but only of the scale or view (receptive field) of the electrodes used. Relevant to the evolution of more complex brains, which is an outstanding fact of animal evolution, we believe measures of cooperativity are likely to be among the dynamic features by which major evolutionary grades of brains differ.
In spite of a large literature on the electroencephalogram, we have an extremely limited picture of the structure of activity in the brain on the scales of millimeters and seconds. In spite of a prevailing view that the principal generators of the compound field potentials in the brain are microscopic, cellular or subcellular, and chiefly membrane potentials, our extant data base is mainly scalp recordings on humans, usually 40 mm apart, each conservatively estimated to take the vector sum of activity in ca. 15 million cells, assuming a volume 1.5 mm deep x 10 x 10 mm tangentially at 50,000 neurons per cubic millimeter and an equal number of glia. In spite of an extensive knowledge of cellular interaction by synaptic mechanisms, our quantitative understanding of the amount of interaction by this route versus electrotonic or chemical field effects is almost nil. Under such circumstances, we consider quite vulnerable such concepts as synchronization, resonance, rhythmicity and independence of frequency components of the EEG -- each an inference, but seldom measured. The present report is one of a series aiming at some insight into the fine structure in space and time of the dynamical signs provided in the compound field potentials, as recorded directly on or in the brain [1-9]. The main goal is to test the hypothesis that one measure of cooperativity at each frequency, namely coherence, varies in time on the scale of seconds or fractions of a second, with evidence of more than stochastic structure, and that the fluctuation is different for closely spaced loci. We will show in addition that a wide range of frequency components of the EEG tend to covary in coherence, contrary to the usual assumption of independent oscillators.
- Published
- 1995
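The basic measurement underlying this study can be reproduced on synthetic data in a few lines (illustrative parameters; the paper analyzed subdural recordings, not simulations): compute coherence in short sliding windows and track the band-averaged value as a time series.

```python
# Sketch: a time series of band-averaged coherence between two channels,
# computed in short sliding windows (synthetic data, invented parameters).
import numpy as np
from scipy.signal import coherence

fs = 200.0                                    # sampling rate (Hz)
rng = np.random.default_rng(3)
t = np.arange(0, 120, 1 / fs)                 # two-minute sample, as in the paper
shared = np.sin(2 * np.pi * 10 * t)           # common 10 Hz component
x = shared + rng.standard_normal(t.size)
y = shared * (1 + 0.5 * np.sin(2 * np.pi * 0.1 * t)) + rng.standard_normal(t.size)

win = int(2 * fs)                             # 2 s analysis windows
series = []
for start in range(0, t.size - win, win):
    f, Cxy = coherence(x[start:start + win], y[start:start + win],
                       fs=fs, nperseg=win // 4)
    band = (f >= 8) & (f <= 12)               # alpha-band average
    series.append(Cxy[band].mean())
print("coherence fluctuates between", round(min(series), 2),
      "and", round(max(series), 2))
```

Here the slow (0.1 Hz) modulation of the shared component produces coherence swings on the seconds-to-tens-of-seconds scale, the kind of fluctuation the paper quantifies.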
49. On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields
- Author
-
Berkes, Pietro and Wiskott, Laurenz
- Subjects
Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Computer Science: Neural Nets ,Neural Modelling ,Computational Neuroscience ,Neural Nets - Abstract
In this paper we introduce some mathematical and numerical tools to analyze and interpret inhomogeneous quadratic forms. The resulting characterization is in some aspects similar to that given by experimental studies of cortical cells, making it particularly suitable for application to second-order approximations and theoretical models of physiological receptive fields. We first discuss two ways of analyzing a quadratic form by visualizing the coefficients of its quadratic and linear term directly and by considering the eigenvectors of its quadratic term. We then present an algorithm to compute the optimal excitatory and inhibitory stimuli, i.e. the stimuli that maximize and minimize the considered quadratic form, respectively, given a fixed energy constraint. The analysis of the optimal stimuli is completed by considering their invariances, which are the transformations to which the quadratic form is most insensitive. We introduce a test to determine which of these are statistically significant. Next we propose a way to measure the relative contribution of the quadratic and linear term to the total output of the quadratic form. Furthermore, we derive simpler versions of the above techniques in the special case of a quadratic form without linear term and discuss the analysis of such functions in previous theoretical and experimental studies. In the final part of the paper we show that for each quadratic form it is possible to build an equivalent two-layer neural network, which is compatible with (but more general than) related networks used in some recent papers and with the energy model of complex cells. We show that the neural network is unique only up to an arbitrary orthogonal transformation of the excitatory and inhibitory subunits in the first layer.
- Published
- 2005
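One of the paper's tools reduces to a familiar fact in its simplest special case, a quadratic form without a linear term (the general case, with a linear term, is handled by the paper's algorithm and is not reproduced here): under a fixed-energy constraint, the optimal excitatory and inhibitory stimuli are the eigenvectors of the quadratic term with the largest and smallest eigenvalues. A sketch with an invented matrix:

```python
# Sketch: optimal stimuli of g(x) = x^T H x under ||x|| = c
# (special case without linear term; H is invented for illustration).
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((16, 16))
H = (A + A.T) / 2                 # symmetric quadratic term
c = 1.0                           # fixed energy: ||x|| = c

evals, evecs = np.linalg.eigh(H)  # eigenvalues in ascending order
x_inh = c * evecs[:, 0]           # optimal inhibitory stimulus (minimizes g)
x_exc = c * evecs[:, -1]          # optimal excitatory stimulus (maximizes g)

print("g(x_exc) =", x_exc @ H @ x_exc, "vs c^2 * largest eigenvalue =", c**2 * evals[-1])
print("g(x_inh) =", x_inh @ H @ x_inh, "vs c^2 * smallest eigenvalue =", c**2 * evals[0])
```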
50. Electric and magnetic fields inside neurons and their impact upon the cytoskeletal microtubules
- Author
-
Georgiev, Danko
- Subjects
Neuroscience: Biophysics ,Neuroscience: Neural Modelling ,Neuroscience: Computational Neuroscience ,Biophysics ,Neural Modelling ,Computational Neuroscience - Abstract
If we want to better understand how the microtubules can translate and input the information carried by the electrophysiological impulses that enter the brain cortex, a detailed investigation of the local electromagnetic field structure is needed. In this paper, the electric and magnetic field strengths in different neuronal compartments are assessed, and the calculated results are verified by comparison with experimental data. It is shown that the magnetic field is too weak to input information to microtubules, and that neither the Hall effect nor the quantum Hall effect (QHE) is realistic: the local magnetic flux density is less than 1/300 of the Earth’s magnetic field, so any magnetic signal would be drowned by the surrounding noise. In contrast, the electric field carries biologically important information and acts upon the voltage-gated transmembrane ion channels that control the neuronal action potential. If mind is linked to subneuronal processing of information in the brain microtubules, then the interaction of microtubules with the local electric field as an input source of information is crucial. The intensity of the electric field inside the neuronal cytoplasm is estimated to be 10 V/m; however, the details of the tubulin-electric field interaction are still unknown. A novel hypothesis stressing the intraneuronal function of the tubulin C-termini is presented, replacing the current models (Tuszynski 2003, Mershin 2003, Hameroff 2003, Porter 2003) presented at the Quantum Mind II Conference held at Tucson, Arizona, 15-19 March 2003, which are argued here to be biologically and physically inconsistent.
- Published
- 2003
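The orders of magnitude quoted in this abstract are easy to check with textbook formulas; the input values below are generic assumptions for illustration, not the paper's exact figures.

```python
# Back-of-envelope check of the abstract's orders of magnitude
# (all input values are illustrative assumptions).
from math import pi

mu0 = 4 * pi * 1e-7          # vacuum permeability (T*m/A)
I_axial = 100e-9             # A: assumed axial current in a neurite
r = 1e-6                     # m: assumed distance from the current filament

B = mu0 * I_axial / (2 * pi * r)   # field of a long straight current
print(f"B ~ {B:.1e} T vs Earth's ~5e-5 T -> ratio {B / 5e-5:.1e}")
# ~2e-8 T, i.e. well under 1/300 of the geomagnetic field, as claimed.

V_m = 70e-3                  # V: typical resting membrane potential
d = 5e-9                     # m: assumed membrane thickness
print(f"Transmembrane E ~ {V_m / d:.1e} V/m; the cytoplasmic field is far weaker (~10 V/m).")
```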