
Masked self‐supervised pre‐training model for EEG‐based emotion recognition.

Authors :
Hu, Xinrong
Chen, Yu
Yan, Jinlin
Wu, Yuan
Ding, Lei
Xu, Jin
Cheng, Jun
Source :
Computational Intelligence; Jun2024, Vol. 40 Issue 3, p1-26, 26p
Publication Year :
2024

Abstract

Electroencephalography (EEG), a tool capable of objectively recording the brain's electrical signals during emotional expression, has been widely used in emotion recognition. Current approaches rely heavily on labeled datasets, so their performance is limited by dataset size and annotation accuracy. Likewise, unsupervised and contrastive learning methods depend strongly on the feature distribution within a dataset and must therefore be trained on each specific dataset to achieve optimal results. However, EEG acquisition is influenced by factors such as equipment, settings, individuals, and experimental procedures, which introduces significant variability; as a result, model effectiveness depends heavily on data collected under stringent, controlled conditions. To address these challenges, we introduce a novel approach: a self‐supervised pre‐training model that processes data across different datasets and operates effectively on multiple datasets. The model pre‐trains without access to emotion category labels, enabling it to extract universally useful features without a predefined downstream task. To tackle semantic confusion, we employ a masked prediction model that guides the network to produce richer semantic information by learning bidirectional feature combinations within a sequence. To handle large differences in data distribution, we introduce an adaptive clustering technique that generates pseudo‐labels across multiple categories. During self‐supervised training, the model strengthens the expression of hidden features in its intermediate layers, enabling it to learn hidden features common to different datasets.
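The masked prediction objective described above can be illustrated with a minimal, hypothetical sketch: random time steps of an EEG feature sequence are zeroed out, and a model would be trained to reconstruct them from the surrounding bidirectional context. The mask ratio, array shapes, and function name here are illustrative assumptions, not the authors' actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_eeg_sequence(x, mask_ratio=0.15, mask_value=0.0):
    """Randomly mask time steps of an EEG feature sequence.

    x: array of shape (timesteps, channels).
    Returns the masked copy and the boolean mask; a pre-training
    model would be asked to reconstruct x at the masked positions
    from the surrounding (bidirectional) context.
    """
    t = x.shape[0]
    n_mask = max(1, int(round(t * mask_ratio)))
    idx = rng.choice(t, size=n_mask, replace=False)
    mask = np.zeros(t, dtype=bool)
    mask[idx] = True
    x_masked = x.copy()
    x_masked[mask] = mask_value  # masked steps carry no signal
    return x_masked, mask

# toy sequence: 200 time steps x 32 EEG channels
x = rng.standard_normal((200, 32))
x_masked, mask = mask_eeg_sequence(x)
```

The pre-training loss would then compare the model's predictions with `x` only at positions where `mask` is true, so the unmasked context is never trivially copied into the target.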
By constructing a hybrid dataset and conducting extensive experiments, this study demonstrates two key findings: (1) our model performs best on multiple evaluation metrics; (2) the model can effectively integrate critical features from different datasets, significantly enhancing the accuracy of emotion recognition. [ABSTRACT FROM AUTHOR]
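The pseudo-labeling step described in the abstract can be sketched as clustering hidden features pooled from differently distributed datasets. This minimal NumPy k-means is a stand-in for the paper's adaptive clustering; the feature dimensions, cluster count, and farthest-point initialization are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_labels(features, k=2, n_iter=20):
    """Cluster pooled hidden features and return integer pseudo-labels.

    A plain k-means stand-in for the adaptive clustering step: the
    first centre is chosen at random, and each later centre is the
    point farthest from the existing centres, so samples drawn from
    differently distributed datasets tend to seed separate clusters.
    """
    centers = [features[rng.integers(len(features))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign each sample to its nearest centre, then update centres
        dist = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# toy hidden features from two "datasets" with shifted distributions
a = rng.standard_normal((100, 16))          # dataset A
b = rng.standard_normal((100, 16)) + 3.0    # dataset B, shifted mean
labels = pseudo_labels(np.vstack([a, b]), k=2)
```

The resulting pseudo-labels could then supervise the intermediate layers during self-supervised training, without requiring the true emotion annotations.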

Details

Language :
English
ISSN :
0824-7935
Volume :
40
Issue :
3
Database :
Complementary Index
Journal :
Computational Intelligence
Publication Type :
Academic Journal
Accession number :
178049030
Full Text :
https://doi.org/10.1111/coin.12659