Transformers and cortical waves: encoders for pulling in context across time.
- Source :
- Trends in Neurosciences. Oct 2024, Vol. 47 Issue 10, p788-802. 15p.
- Publication Year :
- 2024
Abstract
- Transformer networks learn to predict long-range dependencies by concatenating input sequences into a long 'encoding vector'. Sensory inputs, however, arrive at the periphery one word and one fixation at a time, raising the question of how the sensory cortex could implement a similar computational principle while processing incoming inputs in real time. We suggest that a computational role we have previously identified for waves traveling over single regions of sensory cortex may subserve the same underlying computational principle as the transformers' 'encoding vector' to provide temporal context. Self-attention in transformers assigns association strengths between pairs of words that can be far apart in a sequence. Self-attention could be implemented on the whole-brain scale by interacting waves in the cortex and basal ganglia over a wide range of time scales.
- The capabilities of transformer networks such as ChatGPT and other large language models (LLMs) have captured the world's attention. The crucial computational mechanism underlying their performance relies on transforming a complete input sequence – for example, all the words in a sentence – into a long 'encoding vector' that allows transformers to learn long-range temporal dependencies in naturalistic sequences. Specifically, 'self-attention' applied to this encoding vector enhances temporal context in transformers by computing associations between pairs of words in the input sequence. We suggest that waves of neural activity traveling across single cortical areas, or multiple regions on the whole-brain scale, could implement a similar encoding principle. By encapsulating recent input history into a single spatial pattern at each moment in time, cortical waves may enable a temporal context to be extracted from sequences of sensory inputs, the same computational principle as that used in transformers. [ABSTRACT FROM AUTHOR]
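The two computations the abstract names lend themselves to toy sketches. First, a minimal sketch of scaled dot-product self-attention, the standard transformer operation described here as assigning association strengths between pairs of words in a sequence; all function names, shapes, and parameters below are illustrative assumptions, not taken from the article:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Minimal scaled dot-product self-attention (illustrative sketch).

    X  : (seq_len, d_model) token embeddings for the whole sequence,
         analogous to the long 'encoding vector' holding the input.
    Wq, Wk, Wv : (d_model, d_k) projection matrices (assumed shapes).
    """
    Q = X @ Wq                                   # queries
    K = X @ Wk                                   # keys
    V = X @ Wv                                   # values
    d_k = K.shape[-1]
    # Association strengths between every pair of positions,
    # however far apart they sit in the sequence.
    scores = Q @ K.T / np.sqrt(d_k)              # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per row
    # Each position's output mixes in context from all other positions.
    return weights @ V

# Toy usage: 5 tokens, 8-dim embeddings, 4-dim projections (arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)  # (5, 4): context-enriched tokens
```

Second, the proposed wave encoding can be caricatured as a delay line: if each new input enters at one edge of a lattice and the existing pattern propagates across it, a single spatial snapshot at any moment holds the recent input history. This toy is one reading of 'encapsulating recent input history into a single spatial pattern', not the authors' model:

```python
import numpy as np

def wave_snapshot(inputs, decay=0.9):
    """Toy delay-line caricature of a traveling wave (an assumption,
    not the article's model): each input enters at site 0 and the
    pattern shifts one site per step, decaying as it travels, so the
    snapshot at time t lays out recent history along the lattice."""
    pattern = np.zeros(len(inputs))
    for x in inputs:
        pattern = np.roll(pattern, 1) * decay  # wave moves one site on
        pattern[0] = x                         # newest input at the edge
    return pattern

print(wave_snapshot([1.0, 2.0, 3.0]))  # [3.0, 1.8, 0.81]: history in space
```

In both sketches, a downstream reader of the state (the attention weights, or the spatial snapshot) gets temporal context without re-reading the raw sequence, which is the shared computational principle the abstract argues for.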
Details
- Language :
- English
- ISSN :
- 0166-2236
- Volume :
- 47
- Issue :
- 10
- Database :
- Academic Search Index
- Journal :
- Trends in Neurosciences
- Publication Type :
- Academic Journal
- Accession number :
- 180133981
- Full Text :
- https://doi.org/10.1016/j.tins.2024.08.006