
Does Visual Self-Supervision Improve Learning of Speech Representations for Emotion Recognition?

Authors :
Abhinav Shukla
Stavros Petridis
Maja Pantic
Source :
IEEE Transactions on Affective Computing. 14:406-420
Publication Year :
2023
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2023.

Abstract

Self-supervised learning has attracted considerable recent research interest. However, most self-supervised approaches to speech are unimodal, and there has been limited work studying the interaction between audio and visual modalities for cross-modal self-supervision. This work (1) investigates visual self-supervision via face reconstruction to guide the learning of audio representations; (2) proposes an audio-only self-supervision approach for speech representation learning; (3) shows that a multi-task combination of the proposed visual and audio self-supervision is beneficial for learning richer features that are more robust in noisy conditions; (4) shows that self-supervised pretraining can outperform fully supervised training and is especially useful for preventing overfitting on smaller datasets. We evaluate our learned audio representations on discrete emotion recognition, continuous affect recognition, and automatic speech recognition. We outperform existing self-supervised methods on all tested downstream tasks. Our results demonstrate the potential of visual self-supervision for audio feature learning and suggest that joint visual and audio self-supervision leads to more informative audio representations for speech and emotion recognition.

Comment: Accepted for publication in IEEE Transactions on Affective Computing; v3 is the publication-ready version including additional experiments and discussion.
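To make the core idea of point (1) concrete, the sketch below illustrates cross-modal self-supervision in the spirit the abstract describes: an audio encoder is pretrained by reconstructing the face frame that co-occurs with the speech, so the supervisory signal comes from the visual modality, and the encoder is then reused for downstream emotion or speech recognition. This is a minimal illustrative assumption, not the authors' published architecture; all module names, layer sizes, and the L1 reconstruction loss are placeholders.

```python
# Minimal sketch of visual self-supervision for audio representation
# learning: train an audio encoder by reconstructing co-occurring face
# frames. Hypothetical architecture; not the paper's actual model.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Encodes a log-mel spectrogram into a fixed-size audio embedding."""
    def __init__(self, n_mels=80, embed_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time axis
        )
        self.fc = nn.Linear(256, embed_dim)

    def forward(self, spec):                    # spec: (B, n_mels, T)
        h = self.conv(spec).squeeze(-1)         # (B, 256)
        return self.fc(h)                       # (B, embed_dim)

class FaceDecoder(nn.Module):
    """Decodes the audio embedding into a grayscale face frame."""
    def __init__(self, embed_dim=256, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(embed_dim, img_size * img_size)

    def forward(self, z):                       # z: (B, embed_dim)
        img = torch.sigmoid(self.fc(z))
        return img.view(-1, 1, self.img_size, self.img_size)

encoder, decoder = AudioEncoder(), FaceDecoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

# One pretraining step on a toy batch: reconstruct the face frame that
# accompanies the audio. After pretraining, only `encoder` is kept and
# fine-tuned (or probed) on the downstream emotion / ASR task.
spec = torch.randn(8, 80, 100)                  # batch of spectrograms
face = torch.rand(8, 1, 64, 64)                 # co-occurring face frames
loss = nn.functional.l1_loss(decoder(encoder(spec)), face)
opt.zero_grad(); loss.backward(); opt.step()
```

The multi-task variant in point (3) would simply add an audio-only self-supervised loss (e.g. reconstructing masked spectrogram frames) to the face-reconstruction loss and optimize their sum through the shared encoder.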

Details

ISSN :
2371-9850
Volume :
14
Database :
OpenAIRE
Journal :
IEEE Transactions on Affective Computing
Accession number :
edsair.doi.dedup.....5b0016496a9cfdaeec6dacc3ef092352
Full Text :
https://doi.org/10.1109/taffc.2021.3062406