Learning to represent signals spike by spike.
- Source :
- PLoS Computational Biology, 3/16/2020, Vol. 16 Issue 3, p1-23. 23p. 1 Diagram, 7 Graphs.
- Publication Year :
- 2020
Abstract
- Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations. Author summary: Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles.
We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. After learning, the voltage of each individual neuron can be interpreted as measuring the instantaneous error of the code, given by the difference between the desired output signal and the actual output signal. [ABSTRACT FROM AUTHOR]
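The abstract's central idea, that each neuron's voltage tracks the instantaneous coding error and a spike fires whenever that error crosses the neuron's threshold, can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy in the spirit of spike-coding networks, not the authors' exact formulation: the signal, decoder weights `D`, thresholds, and decay rate are all illustrative choices.

```python
import numpy as np

# Toy spike-coding network: N neurons jointly encode a 1-D signal x(t).
# Each neuron's "voltage" is its decoder weight times the current coding
# error; a spike fires when that voltage exceeds the threshold |D_i|^2 / 2
# and instantly updates the readout. (Illustrative parameters throughout.)
rng = np.random.default_rng(0)

N, T, dt = 20, 1000, 1e-3            # neurons, time steps, step size (s)
lam = 10.0                           # leak rate of the readout
D = rng.standard_normal(N) * 0.1     # decoding weights (assumed, not learnt here)
thresh = D**2 / 2                    # spike thresholds

x = np.sin(2 * np.pi * np.arange(T) * dt)  # input signal to encode
x_hat = np.zeros(T)                  # network estimate (readout)
spikes = np.zeros((T, N), dtype=bool)

est = 0.0
for t in range(T):
    V = D * (x[t] - est)             # voltage = projection of coding error
    i = int(np.argmax(V - thresh))   # most suprathreshold neuron, if any
    if V[i] > thresh[i]:
        est += D[i]                  # each spike nudges the readout
        spikes[t, i] = True
    est -= dt * lam * est            # leaky decay of the readout
    x_hat[t] = est

err = np.mean((x - x_hat) ** 2)
print(f"mean squared coding error: {err:.4f}")
```

Because the greedy spike rule always reduces the squared error when any neuron is above threshold, the readout tracks the signal with an error bounded by the spike sizes, which is the sense in which the voltages "measure the instantaneous error of the code."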
- Subjects :
- *RECURRENT neural networks
*DISTRIBUTION costs
Details
- Language :
- English
- ISSN :
- 1553-734X
- Volume :
- 16
- Issue :
- 3
- Database :
- Academic Search Index
- Journal :
- PLoS Computational Biology
- Publication Type :
- Academic Journal
- Accession number :
- 142276070
- Full Text :
- https://doi.org/10.1371/journal.pcbi.1007692