
SVFAP: Self-supervised Video Facial Affect Perceiver

Authors :
Sun, Licai
Lian, Zheng
Wang, Kexin
He, Yu
Xu, Mingyu
Sun, Haiyang
Liu, Bin
Tao, Jianhua
Source :
IEEE Transactions on Affective Computing, 2024
Publication Year :
2023

Abstract

Video-based facial affect analysis has recently attracted increasing attention owing to its critical role in human-computer interaction. Previous studies mainly focus on developing various deep learning architectures and training them in a fully supervised manner. Although significant progress has been achieved by these supervised methods, the longstanding lack of large-scale, high-quality labeled data severely hinders their further improvement. Motivated by the recent success of self-supervised learning in computer vision, this paper introduces a self-supervised approach, termed Self-supervised Video Facial Affect Perceiver (SVFAP), to address the dilemma faced by supervised methods. Specifically, SVFAP leverages masked facial video autoencoding to perform self-supervised pre-training on massive unlabeled facial videos. Considering the large spatiotemporal redundancy in facial videos, we propose a novel temporal pyramid and spatial bottleneck Transformer as the encoder of SVFAP, which not only largely reduces computational costs but also achieves excellent performance. To verify the effectiveness of our method, we conduct experiments on nine datasets spanning three downstream tasks: dynamic facial expression recognition, dimensional emotion recognition, and personality recognition. Comprehensive results demonstrate that SVFAP can learn powerful affect-related representations via large-scale self-supervised pre-training and that it significantly outperforms previous state-of-the-art methods on all datasets. Code is available at https://github.com/sunlicai/SVFAP.

Comment: Published in: IEEE Transactions on Affective Computing (Early Access). The code and models are available at https://github.com/sunlicai/SVFAP
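
The pre-training objective described in the abstract is masked autoencoding on facial video clips. The sketch below illustrates that general recipe in PyTorch under stated assumptions: it uses a plain Transformer encoder and decoder rather than the paper's temporal pyramid and spatial bottleneck architecture, and the module names, token dimensions, and 90% mask ratio are illustrative choices, not the authors' settings; the official implementation is in the linked repository.

# Minimal sketch of masked facial video autoencoding pre-training (the general
# recipe SVFAP builds on), NOT the authors' exact architecture. Sizes, mask
# ratio, and the plain Transformer blocks are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedVideoAutoencoder(nn.Module):
    def __init__(self, num_patches=1568, patch_dim=1536, dim=384, mask_ratio=0.9):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_embed = nn.Linear(patch_dim, dim)            # tokenize video cubes
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True), num_layers=12)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True), num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, patch_dim)                   # reconstruct raw pixels

    def forward(self, patches):
        # patches: (B, N, patch_dim) flattened spatiotemporal cubes of a face clip,
        # e.g. a 16-frame 224x224 clip cut into 2x16x16x3 cubes -> N=1568, patch_dim=1536
        B, N, _ = patches.shape
        x = self.patch_embed(patches) + self.pos_embed

        # randomly keep a small visible subset; the high mask ratio exploits the
        # large spatiotemporal redundancy of facial videos
        n_keep = int(N * (1 - self.mask_ratio))
        ids = torch.argsort(torch.rand(B, N, device=x.device), dim=1)
        keep, masked = ids[:, :n_keep], ids[:, n_keep:]
        x_vis = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))

        # encode visible tokens only, then decode with mask tokens appended
        # (positional cues for mask tokens are omitted here for brevity)
        z = self.encoder(x_vis)
        mask_tok = self.mask_token.expand(B, N - n_keep, -1)
        dec = self.decoder(torch.cat([z, mask_tok], dim=1))

        # reconstruction loss computed only on the masked patches
        pred = self.head(dec[:, n_keep:])
        target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
        return nn.functional.mse_loss(pred, target)

In such a scheme, pre-training simply tokenizes each sampled unlabeled face clip into patches, computes this reconstruction loss, and back-propagates; for downstream affect tasks the decoder is discarded and only the encoder is fine-tuned.
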

Details

Database :
arXiv
Journal :
IEEE Transactions on Affective Computing, 2024
Publication Type :
Report
Accession Number :
edsarx.2401.00416
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TAFFC.2024.3436913