Predictive coding of natural images by V1 activity revealed by self-supervised deep neural networks
- Authors
Alina Peter, Martin Vinck, Pascal Fries, William Barnes, Cem Uran, Wolf Singer, Andreea Lazar, Rasmus Roese, Katharine A Shapcott, and Johanna Klon-Lipok
- Subjects
Artificial neural network, Computer science, Surround suppression, Pattern recognition, Stimulus (physiology), Receptive field, Synchronization, Contrast (vision), Artificial intelligence
- Abstract
Predictive coding is an important candidate theory of self-supervised learning in the brain. Its central idea is that neural activity results from an integration and comparison of bottom-up inputs with contextual predictions, a process in which firing rates and synchronization may play distinct roles. Here, we quantified stimulus predictability for natural images based on self-supervised, generative neural networks. When the precise pixel structure of a stimulus falling into the V1 receptive field (RF) was predicted by the spatial context, V1 exhibited characteristic γ-synchronization (30-80 Hz), despite no detectable modulation of firing rates. In contrast to γ, β-synchronization emerged exclusively for unpredictable stimuli. Natural images with high structural predictability were characterized by high compressibility and low dimensionality. Yet, perceptual similarity was mainly determined by higher-level features of natural stimuli, not by the precise pixel structure. When higher-level features of the stimulus in the receptive field were predicted by the context, neurons showed a strong reduction in firing rates and an increase in surround suppression that was dissociated from synchronization patterns. These findings reveal distinct roles of synchronization and firing rates in the predictive coding of natural images.
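The abstract links structural predictability of an image patch to its compressibility. As a minimal illustration of that relationship (not the authors' method, which used self-supervised generative networks), one can proxy compressibility by the ratio of losslessly compressed to raw byte length of a patch; the function name and patch sizes below are illustrative assumptions:

```python
import zlib
import numpy as np

def compressibility(patch: np.ndarray) -> float:
    """Ratio of zlib-compressed to raw byte length; lower means more compressible."""
    raw = np.ascontiguousarray(patch, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

# A structurally predictable (uniform) patch compresses far better than white noise.
flat = np.full((32, 32), 128, dtype=np.uint8)
noise = np.random.default_rng(0).integers(0, 256, size=(32, 32), dtype=np.uint8)
print(compressibility(flat) < compressibility(noise))  # True
```

This captures only pixel-level redundancy; as the abstract notes, perceptual similarity depends on higher-level features that a byte-level compressor does not model.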
- Published
- 2020