119 results for "Machens, Christian K."
Search Results
2. Disentangling the flow of signals between populations of neurons
- Author
-
Gokcen, Evren, Jasper, Anna I., Semedo, João D., Zandvakili, Amin, Kohn, Adam, Machens, Christian K., and Yu, Byron M.
- Published
- 2022
- Full Text
- View/download PDF
3. Action suppression reveals opponent parallel control via striatal circuits
- Author
-
Cruz, Bruno F., Guiomar, Gonçalo, Soares, Sofia, Motiwala, Asma, Machens, Christian K., and Paton, Joseph J.
- Published
- 2022
- Full Text
- View/download PDF
4. Efficient coding of cognitive variables underlies dopamine response and choice behavior
- Author
-
Motiwala, Asma, Soares, Sofia, Atallah, Bassam V., Paton, Joseph J., and Machens, Christian K.
- Published
- 2022
- Full Text
- View/download PDF
5. Feedforward and feedback interactions between visual cortical areas use different population activity patterns
- Author
-
Semedo, João D., Jasper, Anna I., Zandvakili, Amin, Krishna, Aravind, Aschner, Amir, Machens, Christian K., Kohn, Adam, and Yu, Byron M.
- Published
- 2022
- Full Text
- View/download PDF
6. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks.
- Author
-
Podlaski, William F. and Machens, Christian K.
- Subjects
- *
NONLINEAR functions , *FEEDFORWARD neural networks , *RECURRENT neural networks , *INTERNEURONS , *CONVEX functions , *NEURAL circuitry - Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
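Entry 6 notes that the resulting networks compute a difference of two convex functions, which is why they can approximate arbitrary nonlinear input-output mappings. Below is a minimal numerical sketch of that mathematical idea only (not the paper's spiking implementation): the decomposition f = (f + λx²) − λx² and the tangent-line construction are illustrative assumptions.

```python
import numpy as np

# Illustrative DC ("difference of convex") approximation: any smooth f can be
# written as f = g - h, with g(x) = f(x) + lam*x**2 and h(x) = lam*x**2 both
# convex (for lam at least half the maximum curvature of -f), and each convex
# part can be approximated from below by a max over its tangent lines.
def dc_approximate(f, xs, lam=1.0, n_pieces=40):
    anchors = np.linspace(xs.min(), xs.max(), n_pieces)
    eps = 1e-4

    def max_of_tangents(phi, x):
        # slope of phi at each anchor via central differences
        slopes = (phi(anchors + eps) - phi(anchors - eps)) / (2 * eps)
        lines = slopes * (x[:, None] - anchors[None, :]) + phi(anchors)[None, :]
        return lines.max(axis=1)

    g = lambda x: f(x) + lam * x ** 2   # convex part
    h = lambda x: lam * x ** 2          # convex part
    return max_of_tangents(g, xs) - max_of_tangents(h, xs)

xs = np.linspace(-3.0, 3.0, 200)
approx = dc_approximate(np.sin, xs)     # sin is neither convex nor concave
err = np.max(np.abs(approx - np.sin(xs)))
```

With 40 tangent pieces, the approximation error stays well below the curvature-limited bound, even though each convex part alone only underestimates its target.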
7. Flexible Control of Mutual Inhibition: A Neural Model of Two-Interval Discrimination
- Author
-
Machens, Christian K., Romo, Ranulfo, and Brody, Carlos D.
- Published
- 2005
8. Silencing direct cortico-cortical feedback differentially modulates visual stimulus representations based on their temporal and spatial characteristics
- Author
-
Fioreze, Gabriela, Horno, Oihane, Fridman, Marina, Machens, Christian K., and Petreanu, Leopoldo
- Subjects
Computational Neuroscience ,Sensory processing and perception - Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
9. A coordinated spiking network model of the hippocampus accounts for remapping and inhibitory perturbations
- Author
-
Martín-Sánchez, Guillermo, Podlaski, William F., and Machens, Christian K.
- Subjects
Computational Neuroscience ,Networks, dynamical systems - Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
10. Modeling the dynamics of inter-areal communication
- Author
-
Carmona, Joana, Yu, Byron M., Kohn, Adam, and Machens, Christian K.
- Subjects
Computational Neuroscience ,Networks, dynamical systems - Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
11. A living display system resolved pixel by pixel
- Author
-
Jouary, Adrien and Machens, Christian K.
- Published
- 2018
- Full Text
- View/download PDF
12. Building the Human Brain
- Author
-
Machens, Christian K.
- Published
- 2012
- Full Text
- View/download PDF
13. Coding for the immediate future with heterogeneous postsynaptic potentials
- Author
-
Couras, Juliana, Calaim, Nuno, and Machens, Christian K.
- Subjects
Computational Neuroscience, Networks, dynamical systems - Abstract
Bernstein Conference 2021 abstract. http://bernstein-conference.de
- Published
- 2021
- Full Text
- View/download PDF
14. How spiking neural networks can flexibly trade-off performance and energy use
- Author
-
Podlaski, William F., Keemink, Sander W., Calaim, Nuno, and Machens, Christian K.
- Subjects
Computational Neuroscience, Networks, dynamical systems - Abstract
Bernstein Conference 2021 abstract. http://bernstein-conference.de
- Published
- 2021
- Full Text
- View/download PDF
15. Modeling Single-Neuron Dynamics and Computations: A Balance of Detail and Abstraction
- Author
-
Herz, Andreas V. M., Gollisch, Tim, Machens, Christian K., and Jaeger, Dieter
- Published
- 2006
- Full Text
- View/download PDF
16. The geometry of robustness in spiking neural networks.
- Author
-
Calaim, Nuno, Dehmelt, Florian A., Gonçalves, Pedro J., and Machens, Christian K.
- Published
- 2022
- Full Text
- View/download PDF
17. Introducing conductance-based dynamics in spike-coding networks preserves efficient and accurate network representation
- Author
-
Brands, Amber M., Keemink, Sander W., and Machens, Christian K.
- Subjects
Computational Neuroscience ,Neurons, networks, dynamical systems - Published
- 2019
- Full Text
- View/download PDF
18. Persistent activity explained through resource constraints: A normative model of delay activity in prefrontal cortex
- Author
-
Berger, Severin and Machens, Christian K.
- Subjects
Computational Neuroscience ,Attention, reward, decision making - Published
- 2019
- Full Text
- View/download PDF
19. Searching for Optimal Sensory Signals: Iterative Stimulus Reconstruction in Closed-Loop Experiments
- Author
-
Edin, Fredrik, Machens, Christian K., Schütze, Hartmut, and Herz, Andreas V.M.
- Published
- 2004
- Full Text
- View/download PDF
20. Population-wide distributions of neural activity during perceptual decision-making
- Author
-
Wohrer, Adrien, Humphries, Mark D., and Machens, Christian K.
- Published
- 2013
- Full Text
- View/download PDF
21. NEUROSCIENCE; Building the Human Brain
- Author
-
Machens, Christian K.
- Published
- 2012
22. Learning to represent signals spike by spike.
- Author
-
Brendel, Wieland, Bourdoukan, Ralph, Vertechi, Pietro, Machens, Christian K., and Denève, Sophie
- Subjects
RECURRENT neural networks ,DISTRIBUTION costs - Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the network's convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations. Author summary: Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for a recurrent neural network from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the error between the desired output signal and the actual output signal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
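Entry 22 builds on spike-coding networks in which a neuron fires only when its spike reduces the population's coding error. A hedged toy sketch of that greedy spike rule follows; the decoder weights, leak, and firing cost below are arbitrary illustrative choices, not the paper's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy greedy spike-coding loop: a neuron fires only when its spike reduces the
# reconstruction error of a leaky linear readout (hypothetical parameter values).
N, T, dt = 20, 500, 1e-3
mags = rng.uniform(0.05, 0.15, N)
D = mags * np.where(np.arange(N) % 2 == 0, 1.0, -1.0)   # signed decoder weights
leak, cost = 10.0, 1e-4
thresh = (D ** 2 + cost) / 2     # a spike pays off only above this threshold

x = np.sin(2 * np.pi * 2 * np.arange(T) * dt)   # 2 Hz target signal
xhat = np.zeros(T)                              # decoded readout
for t in range(1, T):
    xhat[t] = xhat[t - 1] * (1 - leak * dt)     # leaky decay of the readout
    V = D * (x[t] - xhat[t])                    # "voltage" = projected coding error
    n = int(np.argmax(V - thresh))              # best candidate spiker
    if V[n] > thresh[n]:
        xhat[t] += D[n]                         # spike: readout jumps by D[n]

rmse = np.sqrt(np.mean((x - xhat) ** 2))
```

The readout tracks the signal to within roughly half the smallest decoder weight, which is the precision the threshold rule enforces.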
23. Decoding and encoding (de)mixed population responses.
- Author
-
Keemink, Sander W and Machens, Christian K
- Subjects
- *
PARAMETERS (Statistics) , *DATA reduction , *BIOPHYSICS - Abstract
• Linear dimensionality reduction methods provide neat summaries of population responses.
• Demixing methods additionally demix responses relative to experimental parameters.
• Smooth mappings of experimental parameters onto flat neural manifolds boost demixability.
• Neural networks with low-rank connectivities can produce demixed manifolds.
A central tenet of neuroscience is that the brain works through large populations of interacting neurons. With recent advances in recording techniques, the inner working of these populations has come into full view. Analyzing the resulting large-scale data sets is challenging because of the often complex and 'mixed' dependency of neural activities on experimental parameters, such as stimuli, decisions, or motor responses. Here we review recent insights gained from analyzing these data with dimensionality reduction methods that 'demix' these dependencies. We demonstrate that the mappings from (carefully chosen) experimental parameters to population activities appear to be typical and stable across tasks, brain areas, and animals, and are often identifiable by linear methods. By considering when and why dimensionality reduction and demixing work well, we argue for a view of population coding in which populations represent (demixed) latent signals, corresponding to stimuli, decisions, motor responses, and so on. These latent signals are encoded into neural population activity via non-linear mappings and decoded via linear readouts. We explain how such a scheme can facilitate the propagation of information across cortical areas, and we review neural network architectures that can reproduce the encoding and decoding of latent signals in population activities. These architectures promise a link from the biophysics of single neurons to the activities of neural populations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Demixed principal component analysis of population activity in higher cortical areas reveals independent representation of task parameters
- Author
-
Kobak, Dmitry, Brendel, Wieland, Constantinidis, Christos, Feierstein, Claudia E., Kepecs, Adam, Mainen, Zachary F., Romo, Ranulfo, Qi, Xue-Lian, Uchida, Naoshige, and Machens, Christian K.
- Subjects
FOS: Computer and information sciences ,Statistics - Machine Learning ,FOS: Biological sciences ,Quantitative Biology - Neurons and Cognition ,Neurons and Cognition (q-bio.NC) ,Machine Learning (stat.ML) - Abstract
Neurons in higher cortical areas, such as the prefrontal cortex, are known to be tuned to a variety of sensory and motor variables. The resulting diversity of neural tuning often obscures the represented information. Here we introduce a novel dimensionality reduction technique, demixed principal component analysis (dPCA), which automatically discovers and highlights the essential features in complex population activities. We reanalyze population data from the prefrontal areas of rats and monkeys performing a variety of working memory and decision-making tasks. In each case, dPCA summarizes the relevant features of the population response in a single figure. The population activity is decomposed into a few demixed components that capture most of the variance in the data and that highlight dynamic tuning of the population to various task parameters, such as stimuli, decisions, rewards, etc. Moreover, dPCA reveals strong, condition-independent components of the population activity that remain unnoticed with conventional approaches.
- Published
- 2014
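The dPCA method of entry 24 decomposes population activity into components tied to individual task parameters. A simplified sketch of the underlying marginalization idea follows; the toy data, and the per-marginalization SVD standing in for the full dPCA objective, are illustrative assumptions rather than the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: firing rates for S stimuli x D decisions x N neurons,
# generated from two ground-truth encoding directions plus noise.
S, D, N = 4, 2, 50
w_stim, w_dec = rng.normal(size=N), rng.normal(size=N)
stim_vals = np.linspace(-1, 1, S)
dec_vals = np.array([-1.0, 1.0])
X = (stim_vals[:, None, None] * w_stim[None, None, :]
     + dec_vals[None, :, None] * w_dec[None, None, :]
     + 0.1 * rng.normal(size=(S, D, N)))

# dPCA-style marginalization: split the data into a part that varies only
# with the stimulus and a part that varies only with the decision.
mean = X.mean(axis=(0, 1), keepdims=True)
X_stim = X.mean(axis=1, keepdims=True) - mean    # stimulus marginalization
X_dec = X.mean(axis=0, keepdims=True) - mean     # decision marginalization

def top_component(M):
    # leading principal axis of a (conditions x neurons) matrix
    flat = M.reshape(-1, N)
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    return Vt[0]

c_stim, c_dec = top_component(X_stim), top_component(X_dec)
# each demixed axis should align with the ground-truth encoding direction
align_stim = abs(c_stim @ w_stim) / np.linalg.norm(w_stim)
align_dec = abs(c_dec @ w_dec) / np.linalg.norm(w_dec)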
25. Optimal compensation for neuron loss.
- Author
-
Barrett, David G. T., Denève, Sophie, and Machens, Christian K.
- Published
- 2016
26. Demixed principal component analysis of neural population data.
- Author
-
Kobak, Dmitry, Brendel, Wieland, Constantinidis, Christos, Feierstein, Claudia E., Kepecs, Adam, Mainen, Zachary F., Xue-Lian Qi, Romo, Ranulfo, Uchida, Naoshige, and Machens, Christian K.
- Published
- 2016
- Full Text
- View/download PDF
27. On the Number of Neurons and Time Scale of Integration Underlying the Formation of Percepts in the Brain.
- Author
-
Wohrer, Adrien and Machens, Christian K.
- Subjects
- *
NEURAL transmission , *SENSORY neurons , *SENSORY perception , *DECISION making , *PSYCHOMETRICS , *SIGNAL-to-noise ratio , *NEURAL circuitry - Abstract
All of our perceptual experiences arise from the activity of neural populations. Here we study the formation of such percepts under the assumption that they emerge from a linear readout, i.e., a weighted sum of the neurons’ firing rates. We show that this assumption constrains the trial-to-trial covariance structure of neural activities and animal behavior. The predicted covariance structure depends on the readout parameters, and in particular on the temporal integration window w and typical number of neurons K used in the formation of the percept. Using these predictions, we show how to infer the readout parameters from joint measurements of a subject’s behavior and neural activities. We consider three such scenarios: (1) recordings from the complete neural population, (2) recordings of neuronal sub-ensembles whose size exceeds K, and (3) recordings of neuronal sub-ensembles that are smaller than K. Using theoretical arguments and artificially generated data, we show that the first two scenarios allow us to recover the typical spatial and temporal scales of the readout. In the third scenario, we show that the readout parameters can only be recovered by making additional assumptions about the structure of the full population activity. Our work provides the first thorough interpretation of (feed-forward) percept formation from a population of sensory neurons. We discuss applications to experimental recordings in classic sensory decision-making tasks, which will hopefully provide new insights into the nature of perceptual integration. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
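Entry 27 assumes percepts arise from a linear readout, a weighted sum of K neurons' firing rates integrated over a window w, which constrains the trial-to-trial covariance between neural activity and behavior. A toy simulation of that premise is sketched below; all population sizes, window lengths, and the uniform readout weights are hypothetical choices, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch: a percept formed as a weighted sum of K of N neurons,
# integrated over the first w_bins time bins of each trial.
N, K, trials, Tbins, w_bins = 60, 15, 1000, 20, 10
weights = np.zeros(N)
weights[:K] = 1.0 / K                         # readout uses only the first K neurons
rates = rng.normal(5.0, 1.0, size=(trials, Tbins, N))

# percept: integrate the weighted population rate over the readout window
percept = (rates[:, :w_bins, :] @ weights).sum(axis=1)

# trial-to-trial covariance between each neuron's activity and the percept:
# the readout predicts elevated covariance only for the K read-out neurons
activity = rates.sum(axis=1)                  # each neuron summed over the trial
cov = np.array([np.cov(activity[:, n], percept)[0, 1] for n in range(N)])
in_readout, outside = cov[:K].mean(), cov[K:].mean()
```

Inverting this logic, from measured neuron-behavior covariances one can in principle recover K and w, which is the paper's inference problem.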
28. Variability in neural activity and behavior.
- Author
-
Renart, Alfonso and Machens, Christian K
- Subjects
- *
NEURAL physiology , *PSYCHOLOGICAL research , *NEUROPLASTICITY , *BRAIN models , *NEUROSCIENCES , *CELL physiology - Abstract
Highlights:
[•] Insufficient knowledge by the experimenter results in neural variability.
[•] What counts as variability for the experimenter and for the organism may be different.
[•] The sources of variability can be targeted through modeling studies.
[•] Variability may have an adaptive functional role.
[ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
29. Optogenetic perturbations reveal the dynamics of an oculomotor integrator.
- Author
-
Gonçalves, Pedro J., Arrenberg, Aristides B., Hablitzel, Bastian, Baier, Herwig, and Machens, Christian K.
- Subjects
OPTOGENETICS ,EYE movements ,ZEBRA danio ,HALORHODOPSIN ,NEURAL circuitry - Abstract
Many neural systems can store short-term information in persistently firing neurons. Such persistent activity is believed to be maintained by recurrent feedback among neurons. This hypothesis has been fleshed out in detail for the oculomotor integrator (OI) for which the so-called "line attractor" network model can explain a large set of observations. Here we show that there is a plethora of such models, distinguished by the relative strength of recurrent excitation and inhibition. In each model, the firing rates of the neurons relax toward the persistent activity states. The dynamics of relaxation can be quite different, however, and depend on the levels of recurrent excitation and inhibition. To identify the correct model, we directly measure these relaxation dynamics by performing optogenetic perturbations in the OI of zebrafish expressing halorhodopsin or channelrhodopsin. We show that instantaneous, inhibitory stimulations of the OI lead to persistent, centripetal eye position changes ipsilateral to the stimulation. Excitatory stimulations similarly cause centripetal eye position changes, yet only contralateral to the stimulation. These results show that the dynamics of the OI are organized around a central attractor state, the null position of the eyes, which stabilizes the system against random perturbations. Our results pose new constraints on the circuit connectivity of the system and provide new insights into the mechanisms underlying persistent activity. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. Predictive Coding of Dynamical Variables in Balanced Spiking Networks.
- Author
-
Boerlin, Martin, Machens, Christian K., and Denève, Sophie
- Subjects
- *
CEREBRAL cortex , *NEURAL circuitry , *NEURAL physiology , *EXCITATION (Physiology) , *NEUROSCIENTISTS - Abstract
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
31. Disentangling the functional consequences of the connectivity between optic-flow processing neurons.
- Author
-
Weber, Franz, Machens, Christian K., and Borst, Alexander
- Subjects
- *
COUPLING reactions (Chemistry) , *SINGLE cell proteins , *BLOWFLIES , *STIMULUS & response (Biology) , *DENDRITES - Abstract
Typically, neurons in sensory areas are highly interconnected. Coupling two neurons can synchronize their activity and affect a variety of single-cell properties, such as their stimulus tuning, firing rate or gain. All of these factors must be considered to understand how two neurons should be coupled to optimally process stimuli. We quantified the functional effect of an interaction between two optic-flow processing neurons (Vi and H1) in the fly (Lucilia sericata). Using a generative model, we estimated a uni-directional coupling from H1 to Vi. Especially at a low signal-to-noise ratio (SNR), the coupling strongly improved the information about optic-flow in Vi. We identified two constraints confining the strength of the interaction. First, for weak couplings, Vi benefited from inputs by H1 without a concomitant shift of its stimulus tuning. Second, at both low and high SNR, the coupling strength lay in a range in which the information carried by single spikes is optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
32. Spatiotemporal Response Properties of Optic-Flow Processing Neurons
- Author
-
Weber, Franz, Machens, Christian K., and Borst, Alexander
- Subjects
- *
NEURONS , *CONDITIONED response , *VISUAL evoked response , *SPATIAL ability , *OPTIC nerve , *HUMAN information processing , *NEUROSCIENCES - Abstract
Summary: A central goal in sensory neuroscience is to fully characterize a neuron's input-output relation. However, strong nonlinearities in the responses of sensory neurons have made it difficult to develop models that generalize to arbitrary stimuli. Typically, the standard linear-nonlinear models break down when neurons exhibit stimulus-dependent modulations of their gain or selectivity. We studied these issues in optic-flow processing neurons in the fly. We found that the neurons' receptive fields are fully described by a time-varying vector field that is space-time separable. Increasing the stimulus strength, however, strongly reduces the neurons' gain and selectivity. To capture these changes in response behavior, we extended the linear-nonlinear model by a biophysically motivated gain and selectivity mechanism. We fit all model parameters directly to the data and show that the model now characterizes the neurons' input-output relation well over the full range of motion stimuli. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
33. Functional, But Not Anatomical, Separation of "What" and "When" in Prefrontal Cortex.
- Author
-
Machens, Christian K., Romo, Ranulfo, and Brody, Carlos D.
- Subjects
- *
PREFRONTAL cortex , *FUNCTIONAL analysis , *LABORATORY monkeys , *ANIMAL models in research , *ANIMAL memory , *SHORT-term memory , *REWARD (Psychology) , *BIOLOGICAL neural networks , *PHYSIOLOGY - Abstract
How does the brain store information over a short period of time? Typically, the short-term memory of items or values is thought to be stored in the persistent activity of neurons in higher cortical areas. However, the activity of these neurons often varies strongly in time, even if time is unimportant for whether or not rewards are received. To elucidate this interaction of time and memory, we reexamined the activity of neurons in the prefrontal cortex of monkeys performing a working memory task. As often observed in higher cortical areas, different neurons have highly heterogeneous patterns of activity, making interpretation of the data difficult. To overcome these problems, we developed a method that finds a new representation of the data in which heterogeneity is much reduced, and time- and memory-related activities became separate and easily interpretable. This new representation consists of a few fundamental activity components that capture 95% of the firing rate variance of >800 neurons. Surprisingly, the memory-related activity components account for <20% of this firing rate variance. The observed heterogeneity of neural responses results from random combinations of these fundamental components. Based on these components, we constructed a generative linear model of the network activity. The model suggests that the representations of time and memory are maintained by separate mechanisms, even while sharing a common anatomical substrate. Testable predictions of this hypothesis are proposed. We suggest that our method may be applied to data from other tasks in which neural responses are highly heterogeneous across neurons, and dependent on more than one variable. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
34. Design of Continuous Attractor Networks with Monotonic Tuning Using a Symmetry Principle.
- Author
-
Machens, Christian K. and Brody, Carlos D.
- Subjects
- *
NERVOUS system , *NEURONS , *REPRODUCTION , *CELLS , *PHYSIOLOGY , *EMBRYOLOGY - Abstract
Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. The reproduction of these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic, rather than a bell-shaped, function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks that have a connectivity matrix with Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
35. Testing the Efficiency of Sensory Coding with Optimal Stimulus Ensembles
- Author
-
Machens, Christian K., Gollisch, Tim, Kolesnikova, Olga, and Herz, Andreas V.M.
- Subjects
- *
NEURONS , *NERVOUS system , *SENSORY neurons , *NEUROSCIENCES - Abstract
Summary: According to Barlow’s seminal “efficient coding hypothesis,” the coding strategy of sensory neurons should be matched to the statistics of stimuli that occur in an animal’s natural habitat. Using an automatic search technique, we here test this hypothesis and identify stimulus ensembles that sensory neurons are optimized for. Focusing on grasshopper auditory receptor neurons, we find that their optimal stimulus ensembles differ from the natural environment, but largely overlap with a behaviorally important sub-ensemble of the natural sounds. This indicates that the receptors are optimized for peak rather than average performance. More generally, our results suggest that the coding strategies of sensory neurons are heavily influenced by differences in behavioral relevance among natural stimuli. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
36. Linearity of Cortical Receptive Fields Measured with Natural Sounds.
- Author
-
Machens, Christian K., Wehr, Michael S., and Zador, Anthony M.
- Subjects
- *
NEURONS , *AUDITORY cortex , *TEMPORAL lobe , *NEURAL transmission , *AUDITORY perception - Abstract
How do cortical neurons represent the acoustic environment? This question is often addressed by probing with simple stimuli such as clicks or tone pips. Such stimuli have the advantage of yielding easily interpreted answers, but have the disadvantage that they may fail to uncover complex or higher-order neuronal response properties. Here, we adopt an alternative approach, probing neuronal responses with complex acoustic stimuli, including animal vocalizations. We used in vivo whole-cell methods in the rat auditory cortex to record subthreshold membrane potential fluctuations elicited by these stimuli. Most neurons responded robustly and reliably to the complex stimuli in our ensemble. Using regularization techniques, we estimated the linear component, the spectrotemporal receptive field (STRF), of the transformation from the sound (as represented by its time-varying spectrogram) to the membrane potential of the neuron. We find that the STRF has a rich dynamical structure, including excitatory regions positioned in general accord with the prediction of the classical tuning curve. However, whereas the STRF successfully predicts the responses to some of the natural stimuli, it surprisingly fails completely to predict the responses to others; on average, only 11% of the response power could be predicted by the STRF. Therefore, most of the response of the neuron cannot be predicted by the linear component, although the response is deterministically related to the stimulus. Analysis of the systematic errors of the STRF model shows that this failure cannot be attributed to simple nonlinearities such as adaptation to mean intensity, rectification, or saturation. Rather, the highly nonlinear response properties of auditory cortical neurons must be attributable to nonlinear interactions between sound frequencies and time-varying properties of the neural encoder. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. Energy-Efficient Coding with Discrete Stochastic Events.
- Author
-
Schreiber, Susanne, Machens, Christian K., Herz, Andreas V. M., and Laughlin, Simon B.
- Subjects
- *
ION channels , *CELLULAR signal transduction , *STOCHASTIC processes - Abstract
We investigate the energy efficiency of signaling mechanisms that transfer information by means of discrete stochastic events, such as the opening or closing of an ion channel. Using a simple model for the generation of graded electrical signals by sodium and potassium channels, we find optimum numbers of channels that maximize energy efficiency. The optima depend on several factors: the relative magnitudes of the signaling cost (current flow through channels), the fixed cost of maintaining the system, the reliability of the input, additional sources of noise, and the relative costs of upstream and downstream mechanisms. We also analyze how the statistics of input signals influence energy efficiency. We find that energy-efficient signal ensembles favor a bimodal distribution of channel activations and contain only a very small fraction of large inputs when energy is scarce. We conclude that when energy use is a significant constraint, trade-offs between information transfer and energy can strongly influence the number of signaling molecules and synapses used by neurons and the manner in which these mechanisms represent information. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
38. Statistical methods for dissecting interactions between brain areas.
- Author
-
Semedo, João D, Gokcen, Evren, Machens, Christian K, Kohn, Adam, and Yu, Byron M
- Subjects
- *
NEUROSCIENCES , *NEURONS - Abstract
• A fundamental question in neuroscience is how neuronal populations in different brain areas communicate.
• Multi-area neuronal population recordings are increasingly common.
• Multivariate methods are needed to interrogate these recordings and characterize how neuronal population activity is related across areas.
• We discuss interpretational challenges and considerations for these methods, and suggest directions for future work.
The brain is composed of many functionally distinct areas. This organization supports distributed processing, and requires the coordination of signals across areas. Our understanding of how populations of neurons in different areas interact with each other is still in its infancy. As the availability of recordings from large populations of neurons across multiple brain areas increases, so does the need for statistical methods that are well suited for dissecting and interrogating these recordings. Here we review multivariate statistical methods that have been, or could be, applied to this class of recordings. By leveraging population responses, these methods can provide a rich description of inter-areal interactions. At the same time, these methods can introduce interpretational challenges. We thus conclude by discussing how to interpret the outputs of these methods to further our understanding of inter-areal interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Network oscillations in a neural mass model induced by metabolic modulation are consistent with EEG data of neocortical epileptic seizure onset.
- Author
-
Dehmelt, Florian A. and Machens, Christian K.
- Subjects
- *
NEURONS , *EPILEPSY - Abstract
An abstract of the article "Network oscillations in a neural mass model induced by metabolic modulation are consistent with EEG data of neocortical epileptic seizure onset" by Florian A. Dehmelt and Christian K. Machens is presented.
- Published
- 2013
- Full Text
- View/download PDF
40. Positive sparse coding of natural images: a theory for simple cell tuning.
- Author
-
Barrett, David G. T., Denève, Sophie, and Machens, Christian K.
- Subjects
CELLS , VISUAL cortex - Abstract
An abstract of the article "Positive sparse coding of natural images: a theory for simple cell tuning" by David G. T. Barrett, Sophie Denève and Christian K. Machens is presented.
- Published
- 2013
- Full Text
- View/download PDF
41. Percept and the single neuron.
- Author
-
Wohrer, Adrien and Machens, Christian K
- Subjects
- *
SENSORY perception , *NEURONS , *COMPUTER simulation , *PROBABILITY theory - Abstract
The article focuses on the connection between neural readout weights and perceptual decisions. It states that R. M. Haefner and colleagues have derived solutions connecting choice probability to readout weights through the use of computer simulations. It discusses the insights of Haefner and colleagues concerning the formula for choice probability, such as the concept of optimality and the reconstruction of readout weights.
- Published
- 2013
- Full Text
- View/download PDF
42. Auditory Modeling Gets an Edge.
- Author
-
Machens, Christian K. and Zador, Anthony
- Subjects
- *
AUDITORY cortex , *AUDITORY perception , *NEUROPHYSIOLOGY , *NEUROBIOLOGY , *PHYSIOLOGY , *MODELS & modelmaking - Abstract
Introduces the December 2003 issue of the periodical "Journal of Neurophysiology." Extended Fishbach, Nelken and Yeshurun (FNY) auditory cortex model; Simplicity of the extended FNY model; Topographical organization of the FNY model.
- Published
- 2003
- Full Text
- View/download PDF
43. Single auditory neurons rapidly discriminate conspecific communication signals.
- Author
-
Machens, Christian K., Schütze, Hartmut, Franz, Astrid, Kolesnikova, Olga, Stemmler, Martin B., Ronacher, Bernhard, and Herz, Andreas V. M.
- Subjects
- *
NEURONS , *ACOUSTIC reflex , *COMMUNICATION , *GRASSHOPPERS - Abstract
Animals that rely on acoustic communication to find mates, such as grasshoppers, are astonishingly accurate in recognizing song patterns that are specific to their own species. This raises the question of whether they can also solve a far more complicated task that might provide a basis for mate preference and sexual selection: to distinguish individual songs by detecting slight variations around the common species-specific theme. Using spike-train discriminability to quantify the precision of neural responses from the auditory periphery of a model grasshopper species, we show that information sufficient to distinguish songs is readily available at the single-cell level when the spike trains are analyzed on a millisecond time scale. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
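Spike-train discriminability of the kind quantified above can be sketched by smoothing spike trains with a millisecond-scale kernel and classifying trials by nearest-neighbor distance. The synthetic "songs", jitter level, and kernel width below are hypothetical stand-ins for the grasshopper data:

```python
import numpy as np

rng = np.random.default_rng(1)

dt, T = 0.001, 1.0                 # 1 ms resolution, 1 s trials
n_bins = int(T / dt)
tau = 0.005                        # 5 ms smoothing time constant (assumed)
kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)

def smooth(train):
    # Causal exponential filtering of a binary spike train.
    return np.convolve(train, kernel)[:n_bins]

# Two template "songs", each evoking a fixed set of preferred spike times.
templates = [np.sort(rng.uniform(0, T, 30)) for _ in range(2)]

def one_trial(template, jitter=0.002):
    # One noisy response: template spike times plus ~2 ms jitter.
    times = np.clip(template + rng.normal(0, jitter, template.size), 0, T - dt)
    train = np.zeros(n_bins)
    train[(times / dt).astype(int)] = 1.0
    return smooth(train)

X = np.array([one_trial(templates[label]) for label in (0, 1) for _ in range(10)])
y = np.array([label for label in (0, 1) for _ in range(10)])

# Leave-one-out nearest-neighbor classification on smoothed-train distance.
correct = 0
for i in range(len(X)):
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf
    correct += int(y[np.argmin(d)] == y[i])
accuracy = correct / len(X)
```

With millisecond smoothing, trials of the two songs are essentially perfectly separable even from a single synthetic "neuron", illustrating how such distance-based analyses expose information at fine time scales.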
44. From response to stimulus: adaptive sampling in sensory physiology
- Author
-
Benda, Jan, Gollisch, Tim, Machens, Christian K, and Herz, Andreas VM
- Subjects
- *
SENSE organs , *NEURONS , *ADAPTIVE sampling (Statistics) , *DATA analysis , *MATHEMATICAL models - Abstract
Sensory systems extract behaviorally relevant information from a continuous stream of complex high-dimensional input signals. Understanding the detailed dynamics and precise neural code, even of a single neuron, is therefore a non-trivial task. Automated closed-loop approaches that integrate data analysis in the experimental design ease the investigation of sensory systems in three directions: First, adaptive sampling speeds up the data acquisition and thus increases the yield of an experiment. Second, model-driven stimulus exploration improves the quality of experimental data needed to discriminate between alternative hypotheses. Third, information-theoretic data analyses open up novel ways to search for those stimuli that are most efficient in driving a given neuron in terms of its firing rate or coding quality. Examples from different sensory systems show that, in all three directions, substantial progress can be achieved once rapid online data analysis, adaptive sampling, and computational modeling are tightly integrated into experiments. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
45. Principles of Corticocortical Communication: Proposed Schemes and Design Considerations.
- Author
-
Kohn, Adam, Jasper, Anna I., Semedo, João D., Gokcen, Evren, Machens, Christian K., and Yu, Byron M.
- Subjects
- *
VISUAL cortex - Abstract
Nearly all brain functions involve routing neural activity among a distributed network of areas. Understanding this routing requires more than a description of interareal anatomical connectivity: it requires understanding what controls the flow of signals through interareal circuitry and how this communication might be modulated to allow flexible behavior. Here we review proposals of how communication, particularly between visual cortical areas, is instantiated and modulated, highlighting recent work that offers new perspectives. We suggest transitioning from a focus on assessing changes in the strength of interareal interactions, as often seen in studies of interareal communication, to a broader consideration of how different signaling schemes might contribute to computation. To this end, we discuss a set of features that might be desirable for a communication scheme. Corticocortical communication is a fundamental aspect of brain function. Flexible behavior suggests a need for modulating interareal signaling from moment to moment. Several schemes for modulating corticocortical communication have been proposed. These include altering the structure of activity within a source network, the sensitivity of a target network to the input it receives, or gating signals during the relay between areas. We review these schemes and highlight new proposals that suggest communication may be determined by how source population signals align with interareal communication subspaces. We propose a set of design considerations for evaluating the relative merits of different communication schemes. When examining interareal communication, we suggest moving beyond merely characterizing changes in the strength of interareal interactions, to a wider consideration of the computational benefits and limitations of different communication schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
46. Cortical Areas Interact through a Communication Subspace.
- Author
-
Semedo, João D., Zandvakili, Amin, Machens, Christian K., Yu, Byron M., and Kohn, Adam
- Subjects
- *
VISUAL cortex , *NEURONS - Abstract
Most brain functions involve interactions among multiple, distinct areas or nuclei. For instance, visual processing in primates requires the appropriate relaying of signals across many distinct cortical areas. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Here we investigate how trial-to-trial fluctuations of population responses in primary visual cortex (V1) are related to simultaneously recorded population responses in area V2. Using dimensionality reduction methods, we find that V1-V2 interactions occur through a communication subspace: V2 fluctuations are related to a small subset of V1 population activity patterns, distinct from the largest fluctuations shared among neurons within V1. In contrast, interactions between subpopulations within V1 are less selective. We propose that the communication subspace may be a general, population-level mechanism by which activity can be selectively routed across brain areas.
• Visual cortical areas interact through a communication subspace (CS)
• The CS defines which activity patterns in a source area relate to downstream activity
• The largest activity patterns in a source area are not matched to the CS
• The CS allows for selective and flexible routing of population signals between areas
Most brain functions require the selective and flexible routing of neuronal activity between cortical areas. Using paired population recordings from multiple visual cortical areas, Semedo et al. find a population-level mechanism that can achieve this routing, termed a communication subspace. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
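The dimensionality-reduction analysis behind a communication subspace can be sketched with reduced-rank regression: fit a linear map from source (V1-like) to target (V2-like) population activity, then truncate the SVD of the fitted values. The synthetic populations and the rank-2 ground truth below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_v1, n_v2, true_rank = 500, 20, 15, 2

# Source (V1-like) activity, and target (V2-like) activity driven by only a
# rank-2 projection of the source: the "communication subspace".
V1 = rng.standard_normal((n_trials, n_v1))
W = rng.standard_normal((n_v1, true_rank)) @ rng.standard_normal((true_rank, n_v2))
V2 = V1 @ W + 0.5 * rng.standard_normal((n_trials, n_v2))

# Reduced-rank regression: ordinary least squares, then truncate the SVD of
# the fitted values to the desired rank.
B = np.linalg.lstsq(V1, V2, rcond=None)[0]
U, s, Vt = np.linalg.svd(V1 @ B, full_matrices=False)

def r2(rank):
    # Variance of the target population explained by a rank-limited predictor.
    pred = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return 1 - np.sum((V2 - pred) ** 2) / np.sum((V2 - V2.mean(0)) ** 2)
```

Here a rank-2 map predicts the target population essentially as well as the full-rank map, which is the signature of a low-dimensional communication subspace.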
47. Distributed and Mixed Information in Monosynaptic Inputs to Dopamine Neurons.
- Author
-
Tian, Ju, Huang, Ryan, Cohen, Jeremiah Y., Osakada, Fumitaka, Kobak, Dmitry, Machens, Christian K., Callaway, Edward M., Uchida, Naoshige, and Watabe-Uchida, Mitsuko
- Subjects
- *
DOPAMINERGIC neurons , *SYNAPSES , *ELECTROPHYSIOLOGY , *OPTOGENETICS , *NEURAL circuitry - Abstract
Dopamine neurons encode the difference between actual and predicted reward, or reward prediction error (RPE). Although many models have been proposed to account for this computation, it has been difficult to test these models experimentally. Here we established an awake electrophysiological recording system, combined with rabies virus and optogenetic cell-type identification, to characterize the firing patterns of monosynaptic inputs to dopamine neurons while mice performed classical conditioning tasks. We found that each variable required to compute RPE, including actual and predicted reward, was distributed in input neurons in multiple brain areas. Further, many input neurons across brain areas signaled combinations of these variables. These results demonstrate that even simple arithmetic computations such as RPE are not localized in specific brain areas but, rather, distributed across multiple nodes in a brain-wide network. Our systematic method to examine both activity and connectivity revealed unexpected redundancy for a simple computation in the brain. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
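Reward prediction error, the quantity whose components the recorded inputs are shown to carry, is commonly formalized as the temporal-difference error δ = r + γV(s') − V(s). A minimal tabular sketch of a conditioning trial (state count, learning rate, and trial number are arbitrary illustrative choices):

```python
n_states, n_trials, alpha, gamma = 10, 500, 0.1, 1.0
V = [0.0] * (n_states + 1)   # value per time step in a trial; V[-1] is terminal

rpe_first, rpe_last = None, None
for trial in range(n_trials):
    deltas = []
    for t in range(n_states):
        r = 1.0 if t == n_states - 1 else 0.0   # reward arrives at trial end
        delta = r + gamma * V[t + 1] - V[t]     # TD reward prediction error
        V[t] += alpha * delta                   # delta-rule value update
        deltas.append(delta)
    if trial == 0:
        rpe_first = list(deltas)
    rpe_last = deltas
```

Early in training the error appears at reward delivery; after learning, value has propagated back toward the start of the trial and the reward-time error vanishes, the classic dopamine-like pattern.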
48. Representational geometry of perceptual decisions in the monkey parietal cortex.
- Author
-
Okazawa, Gouki, Hatch, Christina E., Mancoo, Allan, Machens, Christian K., and Kiani, Roozbeh
- Subjects
- *
MONKEYS , *EYE movements , *PREFRONTAL cortex , *GEOMETRY - Abstract
Lateral intraparietal (LIP) neurons represent formation of perceptual decisions involving eye movements. In circuit models for these decisions, neural ensembles that encode actions compete to form decisions. Consequently, representation and readout of the decision variables (DVs) are implemented similarly for decisions with identical competing actions, irrespective of input and task context differences. Further, DVs are encoded as partially potentiated action plans through balance of activity of action-selective ensembles. Here, we test those core principles. We show that in a novel face-discrimination task, LIP firing rates decrease with supporting evidence, contrary to conventional motion-discrimination tasks. These opposite response patterns arise from similar mechanisms in which decisions form along curved population-response manifolds misaligned with action representations. These manifolds rotate in state space based on context, indicating distinct optimal readouts for different tasks. We show similar manifolds in lateral and medial prefrontal cortices, suggesting similar representational geometry across decision-making circuits.
• We examine parietal neural activity in motion and face discrimination tasks
• Population neural response dynamics form a curved manifold during decision-making
• Manifolds rotate across tasks, changing the optimal readout of the decision variable
• Circuit models of perceptual decisions need revision to include task dependency
Comparison of neuronal activity during decision-making in face and motion discrimination tasks reveals task-dependent response geometry that shapes optimal readout of the decision variable. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. State-dependent geometry of population activity in rat auditory cortex.
- Author
-
Kobak D, Pardo-Vazquez JL, Valente M, Machens CK, and Renart A
- Subjects
- Acoustic Stimulation, Animals, Rats, Auditory Cortex physiology, Auditory Perception
- Abstract
The accuracy of the neural code depends on the relative embedding of signal and noise in the activity of neural populations. Despite a wealth of theoretical work on population codes, there are few empirical characterizations of the high-dimensional signal and noise subspaces. We studied the geometry of population codes in the rat auditory cortex across brain states along the activation-inactivation continuum, using sounds varying in difference and mean level across the ears. As the cortex becomes more activated, single-hemisphere populations go from preferring contralateral loud sounds to a symmetric preference across lateralizations and intensities, gain modulation effectively disappears, and the signal and noise subspaces become approximately orthogonal to each other and to the direction corresponding to global activity modulations. Level-invariant decoding of sound lateralization also becomes possible in the active state. Our results provide an empirical foundation for the geometry and state-dependence of cortical population codes. Competing Interests: DK, JP, MV, CM, AR: No competing interests declared. (© 2019, Kobak et al.)
- Published
- 2019
- Full Text
- View/download PDF
50. Efficient codes and balanced networks.
- Author
-
Denève S and Machens CK
- Subjects
- Animals, Poisson Distribution, Cerebral Cortex physiology, Models, Neurological, Neural Inhibition physiology, Neurons physiology
- Abstract
Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.
- Published
- 2016
- Full Text
- View/download PDF
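The tightly balanced, cooperative codes discussed in the abstract above are often modeled with spike-coding networks in which a neuron fires only when its spike reduces the population's coding error. A minimal one-dimensional sketch (the weights, time constant, and greedy one-spike-per-bin rule are simplifying assumptions for illustration):

```python
import numpy as np

n_neurons, dt, T, tau = 20, 0.001, 1.0, 0.05
steps = int(T / dt)

# Decoding weights: half the population "positive", half "negative".
gamma = np.where(np.arange(n_neurons) % 2 == 0, 0.1, -0.1)
thresh = gamma ** 2 / 2          # fire only if a spike reduces the coding error

x = np.sin(2 * np.pi * np.linspace(0, T, steps))   # signal to be encoded
xhat = np.zeros(steps)                             # readout decoded from spikes
spikes = np.zeros((steps, n_neurons))

for t in range(1, steps):
    xhat[t] = xhat[t - 1] * (1 - dt / tau)   # leaky readout decay
    err = x[t] - xhat[t]
    V = gamma * err                          # "membrane potentials" = projected error
    i = np.argmax(V - thresh)
    if V[i] > thresh[i]:                     # greedy rule: at most one spike per bin
        spikes[t, i] = 1.0
        xhat[t] += gamma[i]

rmse = np.sqrt(np.mean((x - xhat) ** 2))
```

The readout tracks the signal with error on the order of a single decoding weight, illustrating the precision such cooperative spike codes can reach compared with a Poisson rate code of the same spike count.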