Self-Supervised Learning with Cross-Modal Transformers for Emotion Recognition
- Source : SLT
- Publication Year : 2020
- Publisher : arXiv, 2020.
Abstract
- Emotion recognition is a challenging task due to the limited availability of in-the-wild labeled datasets. Self-supervised learning has shown improvements on tasks with limited labeled data in domains like speech and natural language. Models such as BERT learn to incorporate context into word embeddings, which translates to improved performance in downstream tasks like question answering. In this work, we extend self-supervised training to multi-modal applications. We learn multi-modal representations using a transformer trained on the masked language modeling task with audio, visual, and text features. This model is fine-tuned on the downstream task of emotion recognition. Our results on the CMU-MOSEI dataset show that this pre-training technique can improve emotion recognition performance by up to 3% compared to the baseline.
- Comment : To appear in SLT 2020
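The pre-training recipe described in the abstract — fuse per-timestep audio, visual, and text features, mask a random subset of positions, and train a transformer to reconstruct what was hidden — can be sketched in outline. The snippet below is a minimal NumPy illustration of the fusion-and-masking step only, under assumed feature dimensions (the acoustic, visual, and embedding sizes are hypothetical, and the transformer encoder itself is elided); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-timestep features for one utterance of T timesteps.
T = 20
audio = rng.normal(size=(T, 74))   # acoustic features (assumed dimensionality)
visual = rng.normal(size=(T, 35))  # facial features (assumed dimensionality)
text = rng.normal(size=(T, 300))   # word embeddings (assumed dimensionality)

def fuse_and_mask(audio, visual, text, mask_prob=0.15, rng=rng):
    """Concatenate the three modalities per timestep, then hide a random
    subset of timesteps (BERT-style masking adapted to continuous features)."""
    fused = np.concatenate([audio, visual, text], axis=-1)  # (T, 74+35+300)
    mask = rng.random(len(fused)) < mask_prob               # True = masked
    inputs = fused.copy()
    inputs[mask] = 0.0        # masked positions are hidden from the encoder
    targets = fused[mask]     # the model is trained to reconstruct these
    return inputs, targets, mask

inputs, targets, mask = fuse_and_mask(audio, visual, text)
# A transformer encoder would consume `inputs`, be pre-trained with a
# reconstruction loss on the masked positions, and then be fine-tuned on
# labeled emotion-recognition data.
```

The key design point this sketch reflects is that masking operates on fused multi-modal timesteps rather than on word tokens alone, so the pre-training signal forces the encoder to use cross-modal context.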
- Subjects :
- FOS: Computer and information sciences
- Computer Science - Machine Learning
- Machine Learning (cs.LG)
- Computer Science - Computation and Language
- Computation and Language (cs.CL)
- Computer science
- Speech recognition
- Natural language
- Language model
- Question answering
- Task analysis
- Visualization
- Transformer (machine learning model)
- ComputingMethodologies_PATTERNRECOGNITION
Details
- Database : OpenAIRE
- Journal : SLT
- Accession number : edsair.doi.dedup.....4dfae151938648aa227234de1d16ebf0
- Full Text : https://doi.org/10.48550/arxiv.2011.10652