Multitask learning of a biophysically-detailed neuron model.
- Author
- Verhellen, Jonas, Beshkov, Kosio, Amundsen, Sebastian, Ness, Torbjørn V., and Einevoll, Gaute T.
- Subjects
- Membrane potential, Neural circuitry, Artificial neural networks, Differential equations, Magnetoencephalography, Computational neuroscience, Electroencephalography
- Abstract
The human brain operates at multiple levels, from molecules to circuits, and understanding these complex processes requires integrated research efforts. Simulating biophysically-detailed neuron models is a computationally expensive but effective method for studying local neural circuits. Recent innovations have shown that artificial neural networks (ANNs) can accurately predict the behavior of these detailed models in terms of spikes, electrical potentials, and optical readouts. While these methods have the potential to accelerate large network simulations by several orders of magnitude compared to conventional differential-equation-based modelling, they currently predict voltage outputs only for the soma or a select few neuron compartments. Our novel approach, based on enhanced state-of-the-art architectures for multitask learning (MTL), allows for the simultaneous prediction of membrane potentials in each compartment of a neuron model, up to two orders of magnitude faster than classical simulation methods. By predicting all membrane potentials together, our approach not only allows comparison of model output with a wider range of experimental recordings (patch-electrode recordings, voltage-sensitive dye imaging), but also provides the first stepping stone towards predicting local field potentials (LFPs), electroencephalogram (EEG) signals, and magnetoencephalography (MEG) signals from ANN-based simulations. While LFP and EEG are important downstream applications, the main focus of this paper lies in predicting the dendritic voltage within each compartment, capturing the entire electrophysiology of a biophysically-detailed neuron model. The problem further presents a challenging benchmark for MTL architectures due to the large amount of data involved, the correlations between neighbouring compartments, and the non-Gaussian distribution of membrane potentials.

Author summary: Our research focuses on cutting-edge techniques in computational neuroscience.
We specifically make use of simulations of biophysically detailed neuron models. Traditionally, these simulations are computationally intensive, but recent advancements using artificial neural networks (ANNs) have shown promise in predicting neural behavior with remarkable accuracy. However, existing ANNs fall short of comprehensive prediction across a neuron model: they provide information on the activity of only a limited number of locations along the extent of a neuron. In our study, we introduce a novel approach leveraging state-of-the-art multitask learning architectures, which allows us to simultaneously predict membrane potentials in every compartment of a neuron model. By distilling the underlying electrophysiology into an ANN, we significantly outpace conventional simulation methods. By accurately capturing voltage outputs across the neuron's structure, our method invites comparison with experimental data and paves the way for predicting complex aggregate signals such as local field potentials and EEG signals. Our findings not only advance our understanding of neural dynamics but also present a significant benchmark for future research in computational neuroscience.
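To illustrate the multitask setup the abstract describes, the following is a minimal sketch of hard parameter sharing: one trunk network is shared across tasks, and each compartment gets its own lightweight output head, so all membrane potentials are predicted in a single forward pass. This is an illustrative toy, not the authors' architecture; the compartment count, input size, and hidden width are placeholder assumptions, and the untrained weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

N_COMPARTMENTS = 8   # hypothetical compartment count; real morphologies have hundreds
N_INPUTS = 16        # assumed input features, e.g. recent synaptic-input history
HIDDEN = 32          # assumed shared-trunk width

# Hard parameter sharing: one trunk whose weights serve every task.
W_shared = rng.normal(scale=0.1, size=(N_INPUTS, HIDDEN))
b_shared = np.zeros(HIDDEN)

# One linear head per compartment; each head is one "task" in the MTL sense.
W_heads = rng.normal(scale=0.1, size=(N_COMPARTMENTS, HIDDEN))
b_heads = np.zeros(N_COMPARTMENTS)

def predict_voltages(x):
    """Map one input vector to a membrane-potential estimate per compartment."""
    h = np.tanh(x @ W_shared + b_shared)   # shared representation
    return W_heads @ h + b_heads           # shape: (N_COMPARTMENTS,)

v = predict_voltages(rng.normal(size=N_INPUTS))
print(v.shape)  # (8,)
```

Because neighbouring compartments are strongly correlated, a shared trunk lets the heads reuse one common representation of the neuron's input; the paper's benchmark difficulty comes precisely from scaling this idea to every compartment at once.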
- Published
- 2024