
Learning two groups of discriminative features for micro-expression recognition.

Authors :
Wei, Jinsheng
Lu, Guanming
Yan, Jingjie
Zong, Yuan
Source :
Neurocomputing. Mar 2022, Vol. 479, p22-36. 15p.
Publication Year :
2022

Abstract

• This work is the first to learn discriminative features from two groups of features, where the two groups differ in properties such as distribution and dimension.
• Based on the group sparse learning model, we propose a kernelized two-groups sparse learning (KTGSL) model that learns two groups of weights for the two feature groups and selects the more discriminative features according to the learned weights.
• We propose two learning strategies to learn the weights for KTGSL; by adjusting two penalty coefficients, KTGSL can process the two feature groups more flexibly to balance their differences.
• Our work automatically determines the effective HSDGs and refines LBP-TOP, replacing previous work that manually determined a single optimal HSDG by testing every HSDG in turn.
• We eliminate the mechanized concatenation of LBP-TOP and the single HSDG and consider the interaction between the two feature groups.

As a branch of affective computing and machine learning, recognizing micro-expressions is more difficult than recognizing macro-expressions because micro-expressions involve small motion and short duration. A large number of features and methods have been proposed, and feature extraction is a critical focus of research. Feature fusion is an effective strategy for improving performance; it involves two groups of features, which usually differ in properties such as discriminability, distribution and dimension. In addition, the extracted features usually contain redundant or misleading information. Thus, before feature fusion, an algorithm is needed that can automatically learn and select discriminative features from two groups of different features. In this paper, we propose a kernelized two-groups sparse learning (KTGSL) model to automatically learn more discriminative features from two groups of features.
We propose two learning strategies to learn the weights: in the first, the weights of one feature group are fixed and not learned; in the second, both groups of weights are learned and the two groups are assigned different penalty coefficients, which allows the interrelation between the two feature groups to be adjusted flexibly via the two penalty coefficients. This work is the first to select discriminative features from two groups of features in micro-expression recognition. The experiments are conducted on three datasets (CASME II, SMIC and SAMM). The experimental results show that our method can automatically select discriminative features from two groups of features and achieves state-of-the-art performance. [ABSTRACT FROM AUTHOR]
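The two-penalty idea behind the second strategy can be illustrated with a simplified, non-kernelized sketch (this is not the paper's actual KTGSL formulation): sparse least squares over two feature groups, solved by proximal gradient descent (ISTA), with a separate L1 penalty coefficient per group so that sparsity can be enforced more strongly on one group than the other. The function names and toy data below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def two_group_sparse_weights(X1, X2, y, lam1, lam2, n_iter=500):
    """Minimize 0.5*||y - X1 w1 - X2 w2||^2 + lam1*||w1||_1 + lam2*||w2||_1
    by ISTA, applying a different penalty coefficient to each feature group."""
    X = np.hstack([X1, X2])
    d1 = X1.shape[1]
    w = np.zeros(X.shape[1])
    # Step size from the Lipschitz constant of the quadratic term's gradient.
    L = np.linalg.norm(X, 2) ** 2
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = w - step * grad
        # Each group is thresholded with its own penalty coefficient.
        w[:d1] = soft_threshold(w[:d1], step * lam1)
        w[d1:] = soft_threshold(w[d1:], step * lam2)
    return w[:d1], w[d1:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X1 = rng.standard_normal((50, 10))  # hypothetical feature group 1
    X2 = rng.standard_normal((50, 8))   # hypothetical feature group 2
    # Toy target that depends only on two features of group 1.
    y = X1[:, 0] - 2.0 * X1[:, 3] + 0.1 * rng.standard_normal(50)
    # A much larger penalty on group 2 suppresses that group's weights.
    w1, w2 = two_group_sparse_weights(X1, X2, y, lam1=0.1, lam2=1e6)
    print("nonzero in group 1:", np.count_nonzero(w1))
    print("nonzero in group 2:", np.count_nonzero(w2))
```

Raising `lam2` relative to `lam1` drives more of group 2's weights to exactly zero, which mimics how imbalanced penalty coefficients let one feature group dominate the selection; the paper's kernelized, group-structured objective is more elaborate than this per-element L1 sketch.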

Details

Language :
English
ISSN :
0925-2312
Volume :
479
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
155102722
Full Text :
https://doi.org/10.1016/j.neucom.2021.12.088