1. Facial Action Unit Detection and Intensity Estimation From Self-Supervised Representation.
- Author
- Ma, Bowen, An, Rudong, Zhang, Wei, Ding, Yu, Zhao, Zeng, Zhang, Rongsheng, Lv, Tangjie, Fan, Changjie, and Hu, Zhipeng
- Abstract
As a fine-grained, localized measure of expressive behavior, facial action unit (FAU) analysis (e.g., detection and intensity estimation) is well documented as time-consuming, labor-intensive, and error-prone to annotate. A long-standing challenge of FAU analysis therefore arises from the scarcity of manual annotations, which strongly limits the generalization ability of trained models. Many previous works have tried to alleviate this issue via semi- or weakly supervised methods and extra auxiliary information. However, these methods still require domain knowledge and have not yet removed the heavy dependency on data annotation. This article introduces MAE-Face, a robust facial representation model for AU analysis. Using masked autoencoding as its self-supervised pre-training approach, MAE-Face first learns a high-capacity model from a feasible collection of face images without additional data annotations. After fine-tuning on AU datasets, MAE-Face exhibits convincing performance for both AU detection and AU intensity estimation, achieving a new state of the art on nearly all evaluation results. Further investigation shows that MAE-Face achieves decent performance even when fine-tuned on only 1% of the AU training set, demonstrating its robustness and generalization. The pre-trained model is available at our GitHub repository. [ABSTRACT FROM AUTHOR]
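The masked-autoencoding pre-training mentioned in the abstract works by hiding most image patches and training the model to reconstruct them from the visible remainder. A minimal sketch of the random patch-masking step is shown below; the function name, mask ratio, and patch shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def random_mask_patches(patches, mask_ratio=0.75, rng=None):
    """Randomly hide a fraction of patches, as in masked autoencoding.

    patches: array of shape (num_patches, patch_dim).
    Returns the visible patches plus the kept and masked indices.
    """
    rng = np.random.default_rng(rng)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))       # patches the encoder will see
    perm = rng.permutation(n)                # random shuffle of patch indices
    keep_idx = np.sort(perm[:n_keep])        # indices of visible patches
    mask_idx = np.sort(perm[n_keep:])        # indices the decoder must reconstruct
    return patches[keep_idx], keep_idx, mask_idx

# Example: a 224x224 face image split into a 14x14 grid of 16x16x3 patches,
# each flattened to a 768-dimensional vector (hypothetical sizes).
patches = np.random.rand(196, 16 * 16 * 3)
visible, keep_idx, mask_idx = random_mask_patches(patches, mask_ratio=0.75, rng=0)
print(visible.shape)  # only 25% of patches remain visible to the encoder
```

With a 75% mask ratio, only 49 of the 196 patches reach the encoder, which is what makes this style of pre-training cheap relative to processing full images; the heavy lifting of reconstructing the 147 hidden patches falls to a lightweight decoder that is discarded before fine-tuning on AU datasets.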
- Published
- 2024