1. Interrogating theoretical models of neural computation with emergent property inference
- Authors
- John P. Cunningham, Carlos D. Brody, Chunyu A. Duan, Sean R. Bittner, Alex T. Piet, Agostina Palmigiano, and Kenneth D. Miller
- Subjects
- Theoretical neuroscience, computational neuroscience, circuit models, models of neural computation, deep learning, probabilistic modeling, inference, inverse problems, recurrent neural networks, visual cortex, superior colliculus, computational and systems biology
- Abstract
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon – whether behavioral or a pattern of neural activity – and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivating example: inferring conductance parameters in a circuit model of the stomatogastric ganglion. Then, with recurrent neural networks of increasing size, we show that EPI allows precise control over the behavior of inferred parameters, and that EPI scales better in parameter dimension than alternative techniques. In the remainder of this work, we present novel theoretical findings gained through the examination of complex parametric structure captured by EPI. In a model of primary visual cortex, we discovered how connectivity with multiple inhibitory subtypes shapes variability in the excitatory population. Finally, in a model of superior colliculus, we identified two distinct regimes of connectivity that facilitate switching between opposite tasks amidst interleaved trials, characterized each regime via insights afforded by EPI, and found conditions under which these circuit models reproduce results from optogenetic silencing experiments.
Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
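To make the inverse problem concrete: EPI itself trains a deep probability distribution over parameters whose samples exhibit a target emergent property, but the underlying problem can be illustrated without any of that machinery. The sketch below is a toy rejection-sampling illustration, not the paper's method: it uses a hypothetical two-parameter linear circuit (chosen here for simplicity) whose "emergent properties" – stability and oscillation frequency – follow directly from its eigenvalues, and keeps only the prior draws consistent with a target property.

```python
import random

random.seed(0)

def emergent_property(a, b):
    # Hypothetical linear 2-neuron circuit dx/dt = [[a, -b], [b, a]] x.
    # Its eigenvalues are a ± ib, so 'a' sets stability and |b| sets the
    # oscillation frequency -- two "emergent properties" of this toy model.
    return a, abs(b)

def satisfies_target(a, b, target_freq=1.0, tol=0.1):
    # Target emergent property: stable dynamics oscillating near target_freq.
    stability, freq = emergent_property(a, b)
    return stability < 0 and abs(freq - target_freq) < tol

# Broad "prior" over the two circuit parameters; solve the inverse problem
# by keeping only the settings that produce the target emergent property.
prior_draws = [(random.uniform(-2, 2), random.uniform(-2, 2))
               for _ in range(20000)]
consistent = [p for p in prior_draws if satisfies_target(*p)]
print(f"accepted {len(consistent)} / {len(prior_draws)} parameter settings")
```

Rejection sampling like this collapses in high-dimensional parameter spaces (acceptance rates shrink toward zero), which is why EPI instead learns the consistent parameter distribution directly with a deep network.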
- Published
- 2021