
Past Word Context Enables Better MEG Encoding Predictions than Current Word in Listening Stories


Authors :
Oota, Subba Reddy
Trouvain, Nathan
Alexandre, Frédéric
Hinaut, Xavier
Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB), École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB), Centre National de la Recherche Scientifique (CNRS)
Mnemonic Synergy (Mnemosyne), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)
Institut des Maladies Neurodégénératives [Bordeaux] (IMN), Université de Bordeaux (UB), Centre National de la Recherche Scientifique (CNRS)
PhD Inria CORDI-S NewSpeak
Source :
NeuroFrance 2023, May 2023, Lyon, France.
Publication Year :
2023
Publisher :
HAL CCSD, 2023.

Abstract

Brain encoding is the process of mapping stimuli to brain activity. There is a vast literature on linguistic brain encoding for functional MRI (fMRI) related to syntactic and semantic representations. Magnetoencephalography (MEG), with higher temporal resolution than fMRI, enables us to look more precisely at the timing of linguistic feature processing. Unlike MEG decoding, few studies exist on MEG encoding with natural stimuli. Existing story-listening studies focus on phoneme and simple word-based features, ignoring more abstract features such as context and syntactic and semantic aspects. Inspired by previous fMRI studies, we study MEG brain encoding using basic syntactic and semantic features, with various context lengths and directions (past vs. future), for a dataset of 8 subjects listening to stories (Gwilliams et al., arXiv 2022). We find that BERT representations are significant, but not other syntactic features or word embeddings (e.g., GloVe), allowing us to encode MEG in a distributed way across auditory and language regions over time. In particular, past context is crucial for obtaining significant results. This suggests that the "word encoding center of mass" lies a few words behind the current word, as if the brain waits for more future context before "fully" encoding the word; equivalently, the representation of the incoming word is encoded in a transient form that keeps changing until the next words come in. This is coherent with previous studies showing that information about several past phonemes (a position-invariant code for content and order) is kept in memory (Gwilliams et al. 2022), and that the lexical information of the current incoming word is retrieved in a context-sensitive manner (rather than using the most probable lexical category of the word) (Gwilliams et al. 2023).
The HVC of canaries, the brain area managing long-time dependencies in song production, preferentially encodes past actions rather than future actions: specific neuron populations preferentially encoding past actions were actually more active during the rare phrases that involve history-dependent transitions in song (Cohen et al. 2020). This is also coherent with the results of Gwilliams et al. (2022), where phoneme representations are sustained longer when lexical identity is uncertain. Overall, it seems that representations of past events or actions are kept in memory until they have been used to disambiguate future events/actions.
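The encoding approach the abstract describes — regressing brain responses on stimulus features built from the current word plus a window of past-context words — can be illustrated with a minimal sketch. This is not the authors' pipeline: the data are synthetic stand-ins, the feature dimensions, the lag count `k`, and the ridge penalty are arbitrary choices, and a per-sensor Pearson correlation on held-out data stands in for the paper's significance testing.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 "words", 16-dim word features, 10 MEG sensors.
n_words, n_feat, n_sensors = 500, 16, 10
word_feats = rng.standard_normal((n_words, n_feat))

def add_past_context(X, k):
    """Concatenate each word's features with those of the k previous
    words (zero-padded at the start of the story)."""
    parts = [X]
    for lag in range(1, k + 1):
        shifted = np.zeros_like(X)
        shifted[lag:] = X[:-lag]
        parts.append(shifted)
    return np.concatenate(parts, axis=1)

X = add_past_context(word_feats, k=3)  # shape: (500, 16 * 4)

# Simulated MEG responses driven by past-context features plus noise.
W = rng.standard_normal((X.shape[1], n_sensors))
Y = X @ W + 0.1 * rng.standard_normal((n_words, n_sensors))

# Fit a ridge encoding model and score it on held-out words.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
pred = model.predict(X_te)

# Encoding score: per-sensor Pearson correlation between predicted
# and observed held-out responses.
scores = [np.corrcoef(pred[:, s], Y_te[:, s])[0, 1] for s in range(n_sensors)]
print(round(float(np.mean(scores)), 3))
```

Comparing such scores between feature sets built from past context, future context, or the current word alone is the kind of contrast the abstract reports, with BERT-style contextual embeddings taking the place of the random features used here.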

Details

Language :
English
Database :
OpenAIRE
Journal :
NeuroFrance 2023, May 2023, Lyon, France.
Accession number :
edsair.od.......165..39bebf11700a0dad38d1dd2e228582f6