36 results for "Helias, Moritz"
Search Results
2. Cumulant selective perceptron: Propagation of statistical information through a trainable non-linearity
- Author
- Nestler, Sandra, Helias, Moritz, and Gilson, Matthieu
- Subjects
- Computational Neuroscience, Data analysis, machine learning, neuroinformatics
- Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
3. Strong recurrency of cortical networks constrains activity in low-dimensional subspaces
- Author
- Dahmen, David, Recanatesi, Stefano, Jia, Xiaoxuan, Ocker, Gabriel K., Campagnola, Luke, Jarsky, Tim, Seeman, Stephanie, Helias, Moritz, and Shea-Brown, Eric
- Subjects
- Computational Neuroscience, Networks, dynamical systems
- Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
4. Dimensionality reduction with normalizing flows
- Author
- Bouss, Peter, Nestler, Sandra, René, Alexandre, and Helias, Moritz
- Subjects
- Computational Neuroscience, Data analysis, machine learning, neuroinformatics
- Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
5. Statistical decomposition of neural networks: Information transfer between correlation functions
- Author
- Fischer, Kirsten, René, Alexandre, Keup, Christian, Layer, Moritz, Dahmen, David, and Helias, Moritz
- Subjects
- Computational Neuroscience, Data analysis, machine learning, neuroinformatics
- Abstract
Bernstein Conference 2022 abstract. http://bernstein-conference.de
- Published
- 2022
- Full Text
- View/download PDF
6. Run-Time Interoperability Between Neuronal Network Simulators Based on the MUSIC Framework
- Author
- Djurfeldt, Mikael, Hjorth, Johannes, Eppler, Jochen M., Dudani, Niraj, Helias, Moritz, Potjans, Tobias C., Bhalla, Upinder S., Diesmann, Markus, Hellgren Kotaleski, Jeanette, and Ekeberg, Örjan
- Published
- 2010
- Full Text
- View/download PDF
7. Ultra-high frequency spectrum of neuronal activity
- Author
- Essink, Simon, Helin, Runar, Shimoura, Renan, Senk, Johanna, Tetzlaff, Tom, van Albada, Sacha, Helias, Moritz, Grün, Sonja, Plesser, Hans Ekkehard, and Diesmann, Markus
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Abstract
The activity of spiking network models exhibits fast oscillations (>200 Hz), caused by inhibition-dominated excitatory-inhibitory loops [1, 2]. As correlations between pairs of neurons are weak both in nature and in models, fast oscillations have so far received little attention. Today's models of cortical networks with natural numbers of neurons and synapses [3] remove any uncertainty about down-scaling artifacts [4]. Fast oscillations here arise as vertical stripes in raster diagrams. We discuss the experimental detectability of the oscillations, ask whether they are an artifact of simplified models, and identify adaptations to control them. The population rate spectrum decomposes into single-neuron power spectra (∼N) and cross-spectra of pairs of neurons (∼N²) [5, 6]. For low numbers of neurons (100) and weak correlations, the single-neuron spectra dominate the compound spectrum. Coherent oscillations in the population activity may thus go unnoticed in experimental spike recordings. Population measures obtained from large neuron ensembles (e.g., the LFP), however, should show a pronounced peak. Cortical network models allow an investigation from different angles. We rule out artifacts of time-discrete simulation and investigate the effect of distributed synaptic delays: exponential distributions decrease the oscillation amplitude, as expected from their equivalence to low-pass filtering [7], whereas truncated Gaussian distributions are ineffective. Surprisingly, a model of V1 [8] with the same architecture, but fewer synapses per neuron, does not exhibit fast oscillations. Mean-field theory shows that loops within each inhibitory population cause the fast oscillations. Peak frequency and amplitude are determined by eigenvalues of the effective connectivity matrix approaching instability [9]. Reducing the connection density decreases the eigenvalues, increasing their distance to instability; we thus expect weaker oscillations. Counter to expectation and simulation, mean-field theory predicts an increase, explained by an overestimation of the transfer function at high frequencies [10]: the initial network appears to be linearly unstable, with |λ|>1; reduced connectivity seemingly destabilizes the system. A semi-analytical correction restores qualitative agreement with simulation. The work points at the importance of models with realistic cell densities and connectivity, and illustrates the productive interplay of simulation-driven and analytical approaches.
References
1. Brunel, N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8, 183–208 (2000). 10.1023/a:1008925309027
2. Brunel, N. & Wang, X.-J. What determines the frequency of fast network oscillations with irregular neural discharges? I. Synaptic dynamics and excitation-inhibition balance. J Neurophysiol 90, 415–430 (2003). 10.1152/jn.01095.2002
3. Potjans, T. C. & Diesmann, M. The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb Cortex 24, 785–806 (2014). 10.1093/cercor/bhs358
4. van Albada, S. J., Helias, M. & Diesmann, M. Scalability of asynchronous networks is limited by one-to-one mapping between effective connectivity and correlations. PLoS Comput Biol 11, e1004490 (2015). 10.1371/journal.pcbi.1004490
5. Harris, K. D. & Thiele, A. Cortical state and attention. Nat Rev Neurosci 12(9), 509–523 (2011). 10.1038/nrn3084
6. Tetzlaff, T., Helias, M., Einevoll, G. T. & Diesmann, M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 8(8), e1002596 (2012). 10.1371/journal.pcbi.1002596
7. Mattia, M., Biggio, M., Galluzzi, A. & Storace, M. Dimensional reduction in networks of non-Markovian spiking neurons: Equivalence of synaptic filtering and heterogeneous propagation delays. PLoS Comput Biol 15, e1007404 (2019). 10.1371/journal.pcbi.1007404
8. Schmidt, M. et al. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 14, e1006359 (2018). 10.1371/journal.pcbi.1006359
9. Bos, H., Diesmann, M. & Helias, M. Identifying anatomical origins of coexisting oscillations in the cortical microcircuit. PLoS Comput Biol 12, e1005132 (2016). 10.1371/journal.pcbi.1005132
10. Schuecker, J., Diesmann, M. & Helias, M. Modulated escape from a metastable state driven by colored noise. Phys Rev E 92, 052119 (2015). 10.1103/PhysRevE.92.052119
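The decomposition of the population rate spectrum into ∼N auto-spectrum terms and ∼N² cross-spectrum terms can be checked directly on surrogate data. The sketch below uses illustrative signals (a weak shared sinusoid standing in for the fast oscillation, plus strong independent noise), not the model's actual spike trains:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 4096

# Weakly correlated surrogate traces: a small shared oscillatory component
# plus strong independent noise per "neuron".
t = np.arange(T)
shared = 0.05 * np.sin(2 * np.pi * 0.25 * t)   # common oscillation, bin 1024
x = shared + rng.standard_normal((N, T))

# Population activity and its power spectrum
pop = x.sum(axis=0)
S_pop = np.abs(np.fft.rfft(pop)) ** 2 / T

# Decomposition: population spectrum = sum of N auto-spectra
#                                    + sum of N(N-1) cross-spectra
X = np.fft.rfft(x, axis=1)
S_auto = (np.abs(X) ** 2).sum(axis=0) / T        # ~N terms
S_total = np.abs(X.sum(axis=0)) ** 2 / T         # equals S_pop
S_cross = S_total - S_auto                       # ~N^2 terms

assert np.allclose(S_total, S_pop)
```

Although each single-neuron spectrum is dominated by its flat noise floor, the cross-spectrum term concentrates the N²-scaled coherent power at the oscillation frequency, which is why a population measure shows a pronounced peak that single-neuron recordings miss.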
- Published
- 2020
- Full Text
- View/download PDF
8. Efficient Communication in Distributed Simulations of Spiking Neuronal Networks With Gap Junctions
- Author
- Jordan, Jakob, Helias, Moritz, Diesmann, Markus, and Kunkel, Susanne
- Subjects
- spiking neuronal network, Quantitative Biology::Neurons and Cognition, parallel computing, electrical synapses, large-scale simulation, Biomedical Engineering, Neuroscience (miscellaneous), 610 Medicine & health, Computer Science Applications, ddc:610, gap junctions, Neuroscience, Original Research, computational neuroscience
- Abstract
Frontiers in neuroinformatics 14, 12 (2020). doi:10.3389/fninf.2020.00012, Published by Frontiers Research Foundation, Lausanne
- Published
- 2020
9. NNMT: Mean-Field Based Analysis Tools for Neuronal Network Models.
- Author
- Layer, Moritz, Senk, Johanna, Essink, Simon, van Meegen, Alexander, Bos, Hannah, and Helias, Moritz
- Subjects
- NEURAL circuitry, PYTHON programming language, SPACE exploration, POWER spectra, NEURONS, COMPUTATIONAL neuroscience, KALMAN filtering
- Abstract
Mean-field theory of neuronal networks has led to numerous advances in our analytical and intuitive understanding of their dynamics during the past decades. In order to make mean-field based analysis tools more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability, in mean-field and linear response approximation, without running simulations. In this article, we describe how the toolbox is implemented, show how it is used to reproduce results of previous studies, and discuss different use cases, such as parameter space explorations or mapping between different network models. Although the initial version of the toolbox focuses on methods for leaky integrate-and-fire neurons, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, such that the neuroscientific community can take maximal advantage of them.
- Published
- 2022
- Full Text
- View/download PDF
10. Long-Range Neuronal Coordination Near the Breakdown of Linear Stability
- Author
- Layer, Moritz, Dahmen, David, Helias, Moritz, Deutz, Lukas, Voges, Nicole, Grün, Sonja, Diesmann, Markus, Dabrowska, Paulina, and Papen, Michael von
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems, Quantitative Biology::Neurons and Cognition
- Abstract
Experimental findings suggest that cortical networks operate in a balanced state [1] in which strong recurrent inhibition suppresses single-cell input correlations [2,3]. The balanced state, however, only restricts the average correlation in the network; the distribution of correlations between individual neurons is not constrained. We here investigate this distribution and establish a functional relation between the dynamical state of the system and the variance of correlations as a function of cortical distance. The former is characterized by the spectral radius, a measure of how strongly a signal is damped while traversing the network. To this end, we develop a theory that captures the heterogeneity of correlations across neurons. Technically, we derive a mean-field theory that assumes the distribution of correlations to be self-averaging, i.e., the same in any realization of the random network. This is possible by taking advantage of the symmetry of the disorder-averaged [4] effective connectivity matrix. We demonstrate that spatially organized, balanced network models predict rich pairwise correlation structures with a spatial extent far beyond the range of direct connections [5]. Massively parallel spike recordings of macaque motor cortex quantitatively confirm this prediction. We show that the range of these correlations depends on the spectral radius, which offers a potential dynamical mechanism to control the spatial range over which neurons cooperatively perform computations.
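The qualitative link between the spectral radius and the width of the correlation distribution can be illustrated in a linear-response toy model: with random Gaussian connectivity W and unit white-noise input, time-integrated covariances follow c = (1 - W)^{-1} D (1 - W)^{-T}, and their dispersion across neuron pairs grows as the spectral radius approaches one. All parameters below are illustrative, not those of the paper's network:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

def offdiag_cov_stats(R):
    # i.i.d. Gaussian connectivity with entry variance R^2/N has
    # spectral radius ~ R (circular law)
    W = rng.standard_normal((N, N)) * (R / np.sqrt(N))
    np.fill_diagonal(W, 0.0)
    # Time-integrated covariances of the linear-response dynamics
    # with unit noise D = I:  c = (1 - W)^{-1} (1 - W)^{-T}
    B = np.linalg.inv(np.eye(N) - W)
    c = B @ B.T
    off = c[~np.eye(N, dtype=bool)]        # cross-covariances only
    return off.mean(), off.std()

mean_far, std_far = offdiag_cov_stats(0.5)     # far from instability
mean_near, std_near = offdiag_cov_stats(0.95)  # near the critical point
```

Both realizations have mean cross-covariances near zero (the balanced-state constraint), but the dispersion is far larger close to instability, mirroring the paper's relation between dynamical state and correlation variance.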
- Published
- 2019
- Full Text
- View/download PDF
11. Renormalization Group for Spatially Extended Neuronal Networks
- Author
- Stapmanns, Jonas, Kühn, Tobias, Dahmen, David, Luu, Thomas, Honerkamp, Carsten, and Helias, Moritz
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Abstract
Many phenomena observed in biological neural networks can only be explained by assuming nonlinear interactions. Due to effects like synaptic failure and channel noise, neuronal dynamics is also inherently stochastic. The investigation of the interplay of both of these properties is challenging because, due to the nonlinearity, correlations of higher order influence those of lower order. Nonlinear, stochastic systems exhibit a plethora of dynamical states. The cortex, especially, is often suggested to operate close to a critical point, at which linear response theory fails since the neural dynamics is dominated by large fluctuations on all length scales [1]. This is the realm of the Renormalization Group (RG), stemming from statistical and particle physics. We use this technique in the form introduced by Kenneth G. Wilson [2] to study a two-dimensional stochastic neural field model. Its connectivity is composed of two Gaussian kernels, one mimicking the excitatory and the other the inhibitory input. Its dynamics is given by a nonlinear gain and intrinsic nonlinear neuron dynamics. Gaussian white noise, accounting for unspecified external input and intrinsic stochasticity, drives our system. In its long-distance approximation, this model is similar to that proposed by Kardar, Parisi, and Zhang (KPZ) [3]. Along the lines taken in their approach, we derive RG-flow equations describing the couplings in our neural field model on different length scales. From this, one finds the upper critical dimension dc = 2, which corresponds to the dimension of networks in cortex. Above dc, mean-field theory is exact, as the Gaussian fixed point is attractive for small interactions, whereas below dc the interaction dominates the behavior. For d = dc, however, we find that the Gaussian fixed point becomes unstable and the interaction parameter flows into a strong-coupling regime, similar to the KPZ model. A strong-coupling fixed point may be present, which would indicate self-similarity, the signature of a critical state. Our analysis therefore implies certain constraints on the architecture of the neural network (within our model) if it is supposed to work at a critical point. For example, we conclude that we get stable dynamics only if the excitatory inputs extend wider than the inhibitory ones.
Acknowledgements
Partly supported by seed funds MSCALE and CLS002 of the RWTH University; the JARA Center for Doctoral studies within the graduate School for Simulation and Data Science (SSD); the Helmholtz association: Young investigator's grant VH-NG-1028; EU-Grant 785907 (HBP).
References
[1] Beggs, J. M., Plenz, D. (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167–11177. 10.1523/JNEUROSCI.23-35-11167.2003
[2] Wilson, K. G., Kogut, J. (1974). The renormalization group and the ε expansion. Physics Reports, 12(2), 75–199.
[3] Kardar, M., Parisi, G., Zhang, Y.-C. (1986). Dynamic scaling of growing interfaces. Phys. Rev. Lett. 56, 889–892. 10.1103/PhysRevLett.56.889
- Published
- 2019
12. A bridge from large deviation theory to statistical field theory
- Author
- Van Meegen, Alexander, Kühn, Tobias, and Helias, Moritz
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Published
- 2019
13. Distributed representations and learning in neuronal networks
- Author
- Gilson, Matthieu, Dahmen, David, Moreno-Bote, Ruben, Insabato, Andrea, and Helias, Moritz
- Subjects
- Computational Neuroscience, Learning, plasticity and memory
- Published
- 2019
14. A second type of criticality in the brain uncovers rich multiple-neuron dynamics
- Author
- Dahmen, David, Deutz, Lukas, Grün, Sonja, Diesmann, Markus, and Helias, Moritz
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Published
- 2018
- Full Text
- View/download PDF
15. Quantitative comparison of a mesocircuit model with resting state activity in the macaque motor cortex
- Author
- Von Papen, Michael, Voges, Nicole, Dabrowska, Paulina, Senk, Johanna, Hagen, Espen, Diesmann, Markus, Dahmen, David, Deutz, Lukas, Helias, Moritz, Brochier, Thomas, Riehle, Alexa, and Grün, Sonja
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Published
- 2018
- Full Text
- View/download PDF
16. Field Theory for Nonlinear Stochastic Rate Neurons
- Author
- Stapmanns, Jonas, Kühn, Tobias, Dahmen, David, Honerkamp, Carsten, and Helias, Moritz
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Published
- 2018
- Full Text
- View/download PDF
17. Extremely scalable spiking neuronal network simulation code: From laptops to exascale computers
- Author
- Jordan, Jakob, Ippen, Tammo, Helias, Moritz, Kitayama, Itaru, Sato, Mitsuhisa, Igarashi, Jun, Diesmann, Markus, and Kunkel, Susanne
- Subjects
- spiking neuronal network, Quantitative Biology::Neurons and Cognition, parallel computing, large-scale simulation, Biomedical Engineering, Neuroscience (miscellaneous), Computer Science Applications, exascale computing, supercomputer, ddc:610, Neuroscience, Original Research, computational neuroscience
- Abstract
Frontiers in neuroinformatics 12, 2 (2018). doi:10.3389/fninf.2018.00002, Published by Frontiers Research Foundation, Lausanne
- Published
- 2018
18. A diagrammatic derivation of the TAP-approximation
- Author
- Kühn, Tobias and Helias, Moritz
- Subjects
- Computational Neuroscience, Data analysis, machine learning, neuroinformatics
- Abstract
Originally invented to describe magnetism, the Ising model has proven to be useful in many other applications, for example inference problems in computer science, socioeconomic physics, the analysis of neural data [1,2,3], and the modeling of neural networks (binary neurons). Despite its simplicity, there exists no general solution to the Ising model, i.e., the partition function is unknown in the case of an interacting system. Mean-field theory is often used as an approximation that is exact in the noninteracting case and for infinite dimensions. A correction term to the mean-field approximation of the Gibbs free energy (the effective action) of the Ising model was given by Thouless, Anderson and Palmer (TAP) [4] as a "fait accompli" and was later derived by different methods in [5,6,7], where also higher-order terms were computed. We present a diagrammatic derivation (Feynman diagrams) of these correction terms and embed the problem in the language of field theory. Furthermore, we show how the iterative construction of the effective action used in the Ising case generalizes to arbitrary non-Gaussian theories.
References
[1] Tkacik, G., Schneidman, E., Berry II, M. J., Bialek, W. (2008): Ising models for networks of real neurons. arXiv:q-bio/0611072
[2] Roudi, Y., Tyrcha, J., Hertz, J. A. (2009): Ising model for neural data: Model quality and approximate methods for extracting functional connectivity. Phys. Rev. E 79, 051915
[3] Hertz, J. A., Roudi, Y., Tyrcha, J. (2011): Ising models for inferring network structure from spike data. arXiv:1106.1752
[4] Thouless, D. J., Anderson, P. W., Palmer, R. G. (1977): Solution of 'Solvable model of a spin glass'. Phil. Mag. 35(3), 593–601
[5] Georges, A., Yedidia, J. S. (1991): How to expand around mean-field theory using high-temperature expansions. J. Phys. A 24, 2173–2192
[6] Parisi, G., Potters, M. (1995): Mean-field equations for spin models with orthogonal interaction matrices. J. Phys. A 28, 5267–5285
[7] Tanaka, T. (2000): Information geometry of mean-field approximation. Neur. Comp. 12, 1951–1968
Acknowledgements. This work was partially supported by HGF young investigator's group VH-NG-1028, Helmholtz portfolio theme SMHB, Juelich Aachen Research Alliance (JARA), and EU Grant 604102 (Human Brain Project, HBP).
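For reference, the correction derived by Thouless, Anderson and Palmer turns the naive mean-field self-consistency into the well-known TAP equations for the magnetizations m_i (couplings J_ij, external fields h_i, inverse temperature β), with the last term being the Onsager reaction term:

```latex
m_i = \tanh\!\Big(\beta h_i + \beta \sum_j J_{ij} m_j
      \;-\; \beta^2 m_i \sum_j J_{ij}^2 \big(1 - m_j^2\big)\Big)
```

In the diagrammatic language of the abstract, the naive mean-field term corresponds to the first-order diagram and the Onsager term to the next order in the expansion of the effective action.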
- Published
- 2017
- Full Text
- View/download PDF
19. Distributed correlations in motor cortex suggest virtually unstable linearized dynamics
- Author
- Dahmen, David, Diesmann, Markus, and Helias, Moritz
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems, Quantitative Biology::Neurons and Cognition
- Abstract
Despite the large amount of shared input between nearby neurons in cortical circuits, massively parallel spiking recordings of various in vivo networks exhibit pairwise covariances in ensembles of neuronal spike trains that are on average close to zero [1]. The low average has been well understood in terms of active decorrelation by inhibitory feedback [2,3] in networks that operate far away from the critical point which marks the onset of avalanche-like activity [4]. Experiments, however, also show a large variability of covariances across pairs of neurons. An explanation of their wide distribution in relation to the static (quenched) disorder of the connectivity in recurrent networks has so far been elusive. Here we combine ideas from spin-glass theory [5] with a generating-function representation of the joint probability distribution of the network activity [6] to derive a finite-size mean-field theory that reduces a disordered network to a highly symmetric network with fluctuating auxiliary fields (Fig. 1). The theory relates the statistics of covariances to the statistics of connections, in particular the largest eigenvalue of the connectivity matrix, and explains the experimentally observed covariance distributions [7]. The analytical expressions expose that both the average and the dispersion of the latter diverge at a critical point, which has been studied in terms of a transition from regular to chaotic dynamics [8,9,10]. This critical point does not arise from net excitation, but rather from disorder in networks with balanced excitation and inhibition. Applying these results to recordings from motor cortex suggests its operation close to this breakdown of linear stability.
References:
1. Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK: Decorrelated neuronal firing in cortical microcircuits. Science 2010, 327:584–587.
2. Renart A, De La Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD: The asynchronous state in cortical circuits. Science 2010, 327:587–590.
3. Tetzlaff T, Helias M, Einevoll G, Diesmann M: Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 2012, 8(8):e1002596.
4. Beggs JM, Plenz D: Neuronal avalanches in neocortical circuits. J Neurosci 2003, 23:11167–11177.
5. Sompolinsky H, Zippelius A: Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin-glasses. Phys Rev B 1982, 25:6860–6875.
6. Chow C, Buice M: Path integral methods for stochastic differential equations. J Math Neurosci 2015, 5:8.
7. Dahmen D, Diesmann M, Helias M: Distributions of covariances as a window into the operational regime of neuronal networks. arXiv 2016, 1605.04153 [cond-mat.dis-nn].
8. Sompolinsky H, Crisanti A, Sommers HJ: Chaos in random neural networks. Phys Rev Lett 1988, 61:259–262.
- Published
- 2017
20. Perfect detection of spikes via time-reversal
- Author
-
Krishnan, Jeyashree, Porta Mana, P.G.L., Helias, Moritz, Diesmann, Markus, and Di Napoli, Edoardo
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2016
- Full Text
- View/download PDF
21. Transition to chaos and short-term memory in driven random neural networks
- Author
- Schuecker, Jannis, Goedeke, Sven, and Helias, Moritz
- Subjects
- Computational Neuroscience, Neurons, networks, dynamical systems
- Published
- 2016
22. How does an oscillatory drive shape the correlations in binary networks?
- Author
- Kühn, Tobias, Denker, Michael, Porta Mana, PierGianLuca, Grün, Sonja, and Helias, Moritz
- Subjects
- Computational Neuroscience, Quantitative Biology::Neurons and Cognition, Bernstein Conference
- Abstract
Two important components of electrophysiological recordings are the spike times and the local field potential (LFP), which is considered to primarily reflect input activity. In [1], it was shown by unitary event analysis [2,3] that excess synchronous spike events are locked to the phase of LFP beta-oscillations more strongly than spikes that are not part of such events. Denker et al. showed by means of a statistical model that this finding could be explained by the existence of cell assemblies, i.e., groups of (excitatory) neurons that are more strongly connected amongst each other than to the rest of the network. To study the influence of the LFP on correlated single-neuron activities, first for a simple model capturing the main properties of cortical neural networks, we examine a balanced network of homogeneously connected binary model neurons [4] receiving input from a sinusoidal perturbation [5]. The Glauber dynamics of the network is simulated and approximated by mean-field theory. Treating the periodic input in linear response theory, the cyclostationary first two moments are computed analytically and agree with their simulated counterparts over a wide parameter range. The deviations of the zero-time-lag correlations from their stationary values consist of two summands owing to the modulated susceptibility (one via direct modulation, one via the modulated mean activity) and one owing to the driving of the autocorrelations. For some parameters, this leads to resonant correlations and non-resonant mean activities. Our results can help to answer the question of how oscillations in mesoscopic signals and spike correlations interact. As a next step, our model could be extended to include cell assemblies [6], which will allow us to compare our results with the experimental findings more closely.
Figure caption: A: Contributions to the time-dependent variation of the correlations in linear perturbation theory. B: The deviation of the correlations from their stationary value is maximal at a certain frequency, even for this setting with a connectivity matrix having solely purely real eigenvalues.
References:
[1] Denker M., Cerebral Cortex, 21:2681–2695, 2011. The local field potential reflects surplus spike synchrony.
[2] Grün S, Diesmann M, Aertsen A. Neural Comput., 14:43–80, 2002a. Unitary events in multiple single-neuron spiking activity: I. Detection and significance.
[3] Grün S, Diesmann M, Aertsen A. Neural Comput., 14:81–119, 2002b. Unitary events in multiple single-neuron spiking activity: II. Nonstationary data.
[4] Ginzburg I., Sompolinsky H., Phys. Rev. E 50(4):3171–3191, 1994. Theory of correlations in stochastic neural networks.
[5] Kühn T., Helias M., arXiv:1607.08552, 2016. Correlated activity of periodically driven binary networks.
[6] Litwin-Kumar A., Doiron B., Nature Neurosci., 15(11):1498–1505, 2012. Slow dynamics and high variability in balanced cortical networks with clustered connections.
- Published
- 2016
23. Identifying critical components in the structure of multi-scale neural networks
- Author
- Schuecker*, Jannis, Schmidt*, Maximilian, Van Albada, Sacha, Diesmann, Markus, and Helias, Moritz
- Subjects
- Computational Neuroscience, Bernstein Conference
- Abstract
One of the major challenges of computational neuroscience is the integration of available multi-scale experimental data into coherent models of the brain. Building a link between structure and activity, such a model would serve as a tool to identify the mechanisms giving rise to experimental observations. However, even if a model integrates experimental data on brain anatomy to the best of our knowledge, it will be underconstrained. Therefore, physiological observations should guide the exploration of the model's parameter ranges within the uncertainty of the anatomical data. To this end, we here use a mean-field reduction of spiking network dynamics to investigate the mechanisms that give rise to stable and physiologically realistic activity, which often remain unclear from simulation results alone. This reduction allows us to include activity constraints to shape the phase space of large-scale network models. In particular, we apply the theory to a spiking multi-area model of macaque visual cortex (Schmidt et al., 2013) and increase its fixed-point firing rates to a realistic level while preserving the basin of attraction of this fixed point. To achieve this, we control the location of the separatrix dividing the phase space into realistic low-activity behavior and unrealistic high-activity states (Fig. 1). We identify components on the macroscopic level of cortical areas and on the layer-specific population level that are critical for the stability properties of the network. In particular, we find a subcircuit of two frontal areas, 46 and FEF, which crucially influence the network stability. Moreover, we identify connections to the excitatory population of layer 5 to be critical for the network dynamics, in line with experimental findings (Sanchez-Vives et al., 2000; Beltramo et al., 2013). By systematically refining these connections to a small degree, we obtain realistic layer- and area-specific firing rates in the visual cortex model.
- Published
- 2015
24. Spiking network simulation code for petascale computers
- Author
- Kunkel, Susanne, Schmidt, Maximilian, Eppler, Jochen M, Plesser, Hans Ekkehard, Masumoto, Gen, Igarashi, Jun, Ishii, Shin, Fukai, Tomoki, Morrison, Abigail, Diesmann, Markus, and Helias, Moritz
- Subjects
- Quantitative Biology::Neurons and Cognition, parallel computing, large-scale simulation, memory footprint, supercomputer, metaprogramming, ddc:610, Original Research Article, memory management, Neuroscience, computational neuroscience
- Abstract
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses, and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
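The distributed-storage principle described above (a synapse consumes memory only on the node of its target neuron, indexed by source) can be sketched in a few lines. This is an illustrative toy, not the actual metaprogramming-based data structure of the simulation code; the round-robin neuron distribution and the class names are assumptions:

```python
class Rank:
    """Toy compute node: stores only synapses whose target neuron is local."""

    def __init__(self, rank, n_ranks):
        self.rank, self.n_ranks = rank, n_ranks
        self.targets = {}  # source gid -> list of (local target gid, weight)

    def owns(self, gid):
        # Assumed round-robin distribution of neurons over ranks
        return gid % self.n_ranks == self.rank

    def connect(self, source, target, weight):
        # Memory is consumed only on the rank harboring the target neuron
        if self.owns(target):
            self.targets.setdefault(source, []).append((target, weight))

# Every rank sees every connection request, but only the owner stores it
ranks = [Rank(r, 4) for r in range(4)]
for r in ranks:
    r.connect(0, 5, 0.1)   # stored on rank 1 only (5 % 4 == 1)
    r.connect(0, 6, 0.1)   # stored on rank 2 only (6 % 4 == 2)
```

At petascale, a rank typically holds at most one synapse per source neuron and type, which is the "double collapse" that the paper's metaprogramming-based structure exploits to shrink per-synapse overhead.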
- Published
- 2014
- Full Text
- View/download PDF
25. Activity propagation in plastic feed-forward networks of nonlinear neurons near to criticality
- Author
-
Grytskyy, Dmytro, Diesmann, Markus, and Helias, Moritz
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2014
- Full Text
- View/download PDF
26. One-to-one relationship between effective connectivity and correlations in asynchronous networks
- Author
-
van Albada, Sacha, Helias, Moritz, and Diesmann, Markus
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2014
27. Identifying anatomical circuits causing population rate oscillations in structured integrate-and-fire networks
- Author
- Bos, Hannah and Helias, Moritz
- Subjects
- Computational Neuroscience, Quantitative Biology::Neurons and Cognition, Bernstein Conference
- Abstract
Fast oscillations of the population firing rate in the gamma range (50–200 Hz), where each individual neuron fires irregularly with a low rate, have been observed in networks of simulated leaky integrate-and-fire (LIF) neurons as well as in population signals in the living brain. An analytical explanation of oscillatory population rates has been given in [1]. However, a systematic approach identifying the circuits responsible for specific oscillations in a potentially complicated structured network of populations of neurons is currently not available. Such a method would provide a tool for the identification of connections responsible for the emergence and propagation of oscillations in anatomically constrained networks and shed light on local dynamical mechanisms amplifying or suppressing certain frequencies. It could also be used to design a LIF network with desired spectral features. In this study we consider the spectra of population firing rates produced by networks with population-specific connectivity. The populations are composed of randomly connected LIF neurons. In our analysis we make use of the formalism developed in [2], which recently closed the gap between the descriptions of linear rate models and LIF neurons. We derive an expression for the effective connectivity matrix which incorporates the anatomical (synaptic weights, in-degrees) as well as the dynamical properties [3] of the circuit. We are able to predict the spectra of the population firing rates for any connectivity pattern that allows for asynchronous activity. The analytically obtained predictions are validated by comparison with simulation results of a multi-layered cortical network model [4]. Decomposing the effective connectivity matrix at a frequency where the spectrum shows a peak reveals the dominating eigenmode. By reconstructing the anatomical and dynamical contributions to this eigenmode we can narrow down the physical connections associated with the peak in the power spectrum.
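The connection between a spectral peak and an eigenvalue of the frequency-resolved effective connectivity approaching one can be sketched for a linear two-population rate model. All parameters below are illustrative, not those of the referenced cortical network model:

```python
import numpy as np

tau, d = 0.010, 0.003                        # time constant and delay (s)
W = np.array([[1.0, -2.5],
              [1.8, -2.0]])                  # illustrative E-I effective weights

freqs = np.linspace(1.0, 400.0, 2000)        # Hz
dist_to_one, power = [], []
for f in freqs:
    om = 2 * np.pi * f
    h = np.exp(-1j * om * d) / (1.0 + 1j * om * tau)  # linear transfer w/ delay
    M = W * h                                         # effective connectivity M(omega)
    dist_to_one.append(np.min(np.abs(np.linalg.eigvals(M) - 1.0)))
    P = np.linalg.inv(np.eye(2) - M)                  # propagator (1 - M)^{-1}
    power.append(np.sum(np.abs(P) ** 2))              # spectrum under flat noise

f_eig = freqs[np.argmin(dist_to_one)]  # eigenvalue of M(omega) closest to unity
f_peak = freqs[np.argmax(power)]       # peak of the population spectrum
```

The two frequencies coincide: the spectrum peaks where an eigenvalue of M(ω) comes closest to one, and the corresponding eigenmode identifies which connections drive the oscillation.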
- Published
- 2014
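The eigenmode analysis described in this abstract can be illustrated with a minimal sketch: the population-rate spectrum is shaped by the propagator (1 - M(f))^-1, and the eigenvalue of M(f) closest to 1 at the spectral peak marks the dominating mode. The two-population weight matrix, time constant, and delay below are illustrative assumptions, not parameters of the cited multi-layered model.

```python
import numpy as np

def effective_connectivity(freq, tau=0.001, delay=0.003):
    """Toy effective connectivity M(f) for two populations.

    W combines anatomical weights with gains (illustrative values);
    h is an assumed dynamical transfer: synaptic low-pass plus a
    conduction delay.
    """
    W = np.array([[0.8, -2.0],
                  [1.0, -2.2]])
    h = np.exp(-2j * np.pi * freq * delay) / (1.0 + 2j * np.pi * freq * tau)
    return h * W

freqs = np.linspace(1.0, 300.0, 600)
# Spectrum shaped by the propagator (1 - M(f))^{-1}; its norm peaks
# where an eigenvalue of M(f) approaches 1.
gain = np.array([
    np.linalg.norm(np.linalg.inv(np.eye(2) - effective_connectivity(f)))
    for f in freqs
])
f_peak = freqs[np.argmax(gain)]

# Decompose M at the peak frequency: the eigenvalue closest to 1 marks
# the dominating eigenmode; its eigenvector weights the populations.
eigvals, eigvecs = np.linalg.eig(effective_connectivity(f_peak))
dominant = np.argmin(np.abs(eigvals - 1.0))
print(f_peak, eigvals[dominant], eigvecs[:, dominant])
```

With these toy parameters the resonance falls in the gamma range and remains damped (the dominant eigenvalue stays inside the unit circle), mirroring the asynchronous regime assumed in the abstract.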
28. NEST: Highly scalable simulation technology from laptops to supercomputers
- Author
-
Kunkel, Susanne, Schmidt, Maximilian, Eppler, Jochen Martin, Plesser, Hans Ekkehard, Igarashi, Jun, Masumoto, Gen, Fukai, Tomoki, Ishii, Shin, Morrison, Abigail, Diesmann, Markus, and Helias, Moritz
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2013
- Full Text
- View/download PDF
29. Static and dynamic mean-field theory of a layered macroscopic network model
- Author
-
Schuecker, Jannis, Helias, Moritz, and Diesmann, Markus
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2013
- Full Text
- View/download PDF
30. Calcium current makes LIF neuron with conductance synapses a better coincidence detector
- Author
-
Chua, Yansong, Helias, Moritz, and Morrison, Abigail
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2013
- Full Text
- View/download PDF
31. Connectivity reconstruction from complete or partially known covariances in the asynchronous irregular regime
- Author
-
Grytskyy, Dmytro, Diesmann, Markus, and Helias, Moritz
- Subjects
Computational Neuroscience ,Bernstein Conference - Published
- 2013
32. Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome.
- Author
-
Schuecker, Jannis, Schmidt, Maximilian, van Albada, Sacha J., Diesmann, Markus, and Helias, Moritz
- Subjects
NEUROSCIENCES ,NEURONS ,MULTISCALE modeling ,MEDICAL sciences ,NERVOUS system - Abstract
The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
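The recipe of entry 32, applicable to "any neuron model with a known gain function", can be sketched generically: find the self-consistent fixed point of the rate dynamics and check its linear stability through the Jacobian spectrum. The sigmoidal gain, two-population weights, and external drive below are stand-in assumptions, not the macaque connectivity of the paper.

```python
import numpy as np

def phi(x):
    # Stand-in gain function; the method only requires that some
    # gain function and its slope are known.
    return 1.0 / (1.0 + np.exp(-x))

W = np.array([[1.0, -1.5],
              [1.2, -1.0]])     # illustrative 2-population weights
ext = np.array([0.5, 0.3])      # illustrative external drive

# Self-consistent fixed point of r = phi(W r + ext) by iteration.
r = np.full(2, 0.5)
for _ in range(500):
    r = phi(W @ r + ext)

# Global-stability check at the fixed point: eigenvalues of the
# Jacobian -I + diag(phi'(u)) W must have negative real parts.
u = W @ r + ext
slope = phi(u) * (1.0 - phi(u))
J = -np.eye(2) + np.diag(slope) @ W
stable = np.all(np.linalg.eigvals(J).real < 0)
print(r, stable)
```

A non-quiescent (r > 0) and stable fixed point is exactly the pair of "fundamental activity constraints" that the abstract uses to refine the connectivity.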
33. Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
- Author
-
Jordan, Jakob, Ippen, Tammo, Helias, Moritz, Kitayama, Itaru, Sato, Mitsuhisa, Igarashi, Jun, Diesmann, Markus, and Kunkel, Susanne
- Subjects
SUPERCOMPUTERS ,COMPUTATIONAL neuroscience ,PARALLEL computers - Published
- 2018
- Full Text
- View/download PDF
34. The Correlation Structure of Local Neuronal Networks Intrinsically Results from Recurrent Dynamics.
- Author
-
Helias, Moritz, Tetzlaff, Tom, and Diesmann, Markus
- Subjects
BIOLOGICAL neural networks ,NEURONS ,NEUROPLASTICITY ,PHYSIOLOGICAL control systems ,BIOLOGICAL networks ,COMPUTER simulation of biological systems ,BRAIN models ,COMPUTATIONAL neuroscience ,NEUROSCIENCES - Abstract
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local-field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increases intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
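Point (3) of entry 34, decorrelation of identically driven neurons by inhibitory population feedback, can be illustrated with a deliberately simplified linear toy model rather than the binary-network theory itself. The population size, sample length, and feedback gain below are illustrative assumptions; each unit receives the same shared input plus private noise, and the population mean is fed back with negative gain, solved self-consistently.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 100, 20_000
shared = rng.normal(size=T)          # identical external input to all units
private = rng.normal(size=(N, T))    # independent noise per unit

def mean_pairwise_corr(g):
    # Linear units x_i = private_i + shared - g * xbar, with the
    # population mean xbar solved self-consistently:
    # xbar = (private_mean + shared) / (1 + g).
    xi_bar = private.mean(axis=0)
    x = private + shared - g * (xi_bar + shared) / (1.0 + g)
    C = np.corrcoef(x)
    return C[~np.eye(N, dtype=bool)].mean()

c_open = mean_pairwise_corr(0.0)   # no feedback: shared input correlates units
c_fb = mean_pairwise_corr(5.0)     # negative feedback attenuates the shared part
print(c_open, c_fb)
```

The feedback scales the shared component by 1/(1+g), so pairwise correlations collapse even though the external input to all units is identical, qualitatively matching the suppression of population-level fluctuations described in the abstract.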
35. Supercomputers ready for use as discovery machines for neuroscience.
- Author
-
Helias, Moritz, Kunkel, Susanne, Masumoto, Gen, Igarashi, Jun, Eppler, Jochen Martin, Ishii, Shin, Fukai, Tomoki, Morrison, Abigail, and Diesmann, Markus
- Subjects
SUPERCOMPUTERS ,ARTIFICIAL neural networks ,MEMORY ,MATHEMATICAL models ,SIMULATION methods & models ,COMPUTATIONAL neuroscience - Abstract
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
36. The Poisson process with dead time captures important statistical features of neural activity.
- Author
-
Deger, Moritz, Cardanobile, Stefano, Helias, Moritz, and Rotter, Stefan
- Subjects
POSTER presentations ,POISSON processes ,STOCHASTIC processes ,COMPUTATIONAL neuroscience ,NEURONS ,COMPUTER simulation ,APPROXIMATION theory - Abstract
Poster presentation. Stochastic point processes are widely used in computational neuroscience to model the spiking of single neurons and neuronal populations. The choice of a particular point process is critical for statistical measures of neural activity and has an impact on the subthreshold dynamics of neuron models. Here we show that the Poisson process with dead time, a particularly simple point process, captures important features of the spiking statistics of neurons (Fig. 1). On the level of single neurons, we apply a step change to the rate of a Poisson process with dead time, keeping the dead time constant. The expected PSTH is computed by numerically solving the partial differential equation of the corresponding non-homogeneous renewal process, and we also give an analytical approximation. We observe a very sharp transient in the firing rate (Fig. 2) that resembles experimental results. On the level of neuronal populations, we employ the superposition of many Poisson processes with dead time as a model of the population activity in a network. We compute the explicit form of the inter-spike-interval (ISI) distribution and the coefficient of variation for superimposed processes and compare them to direct simulations. The ISIs of the superimposed spike trains show negative serial correlations that correspond to those we observe in population recordings of simulated integrate-and-fire neurons (Fig. 3). For the single Poisson process with dead time and superpositions alike, we can determine the variance of shot noise driven by them, such as the associated spike count in a certain time window or the free membrane potential of an IF neuron. This enables us to show how empirical approximations of the Fano factor depend on the width of the counting window, and how the statistical properties of the driving point process influence the variance of the subthreshold dynamics of neurons. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
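The single-neuron statistics of entry 36 follow from the fact that each inter-spike interval of a Poisson process with dead time is the fixed dead time plus an exponential waiting time, giving mean ISI d + 1/lam and CV = 1/(1 + lam*d) < 1. A minimal simulation sketch (rate and dead time are illustrative choices, not values from the paper) checks this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson process with dead time (PPD): after each spike the neuron is
# silent for a fixed dead time d, then fires with constant hazard lam.
lam, d = 100.0, 0.002          # hazard 100 /s, dead time 2 ms (assumed)
n = 200_000
isi = d + rng.exponential(1.0 / lam, size=n)   # ISI = dead time + Exp(lam)

# Empirical vs analytical statistics:
#   mean ISI = d + 1/lam
#   CV = (1/lam) / (d + 1/lam) = 1 / (1 + lam*d), i.e. more regular
#   than a plain Poisson process (CV = 1).
cv_emp = isi.std() / isi.mean()
cv_theory = 1.0 / (1.0 + lam * d)
print(isi.mean(), cv_emp, cv_theory)
```

The superposition and serial-correlation results of the abstract go beyond this single-train sketch, but the sub-Poisson CV already shows how the dead time regularizes the spiking statistics.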