1. Stacked ensemble learning for facial gender classification using deep learning based features extraction.
- Author
- Waris, Fazal; Da, Feipeng; and Liu, Shanghuan
- Subjects
- Fisher discriminant analysis; convolutional neural networks; computer vision; machine learning; K-nearest neighbor classification; deep learning; retinal blood vessels
- Abstract
- Automatic gender classification is an important task in facial analysis. Recently, deep learning has achieved remarkable progress in extracting deep features for face image analysis, particularly for gender classification. Gender classification from facial images poses a difficult challenge: extracting reliable and relevant features and classifying them accurately, especially in uncontrolled environments. In this study, a novel deep convolutional neural network (DCNN) aided by a convolutional long short-term memory (ConvLSTM) layer and a squeeze-and-excitation (SE) block is designed to extract the most distinguishable and relevant features from facial images. The ConvLSTM layer establishes spatial relationships among the extracted feature maps, while the SE block directs more attention to critical features. The extracted features are pre-processed and passed to kernel principal component analysis (KPCA) for dimensionality reduction. Finally, the reduced features are fed into a stacking classifier that uses several machine learning classifiers, K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Random Forest (RF), AdaBoost, Extra Tree Classifier (ETC), Gaussian Naive Bayes (GNB), XGBoost classifier (XBC), Support Vector Classifier (SVC), Bagging Classifier, NuSVC, and Gradient Boosting Classifier (GBC), in the first stage, and an SVC as the meta-classifier for the final gender prediction, trained with stratified K-fold cross-validation to mitigate overfitting. The proposed method was evaluated on two publicly available datasets, Labeled Faces in the Wild (LFW) and Adience, and achieved accuracies of 99.20% and 98.01%, respectively. These results indicate that the proposed approach outperforms existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
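
The second-stage pipeline described in the abstract (KPCA dimensionality reduction followed by a stacking ensemble whose base learners feed an SVC meta-classifier trained with stratified K-fold cross-validation) can be approximated with off-the-shelf tools. Below is a minimal sketch assuming scikit-learn and xgboost; the feature matrix, number of components, and all hyperparameters are illustrative placeholders rather than the authors' settings, and the DCNN/ConvLSTM/SE feature extractor is assumed to have already produced the feature vectors.

```python
# Hypothetical sketch of the KPCA + stacking stage outlined in the abstract.
# Deep feature extraction (DCNN + ConvLSTM + SE block) is assumed to have
# already produced a feature matrix X (n_samples x n_features) and labels y.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              ExtraTreesClassifier, BaggingClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, NuSVC
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

# Placeholder deep features; in the paper these come from the proposed network.
X = np.random.rand(500, 512)
y = np.random.randint(0, 2, 500)   # 0 = female, 1 = male (illustrative labels)

# First-stage (base) learners named in the abstract; default hyperparameters.
base_learners = [
    ("knn", KNeighborsClassifier()),
    ("lda", LinearDiscriminantAnalysis()),
    ("rf", RandomForestClassifier()),
    ("ada", AdaBoostClassifier()),
    ("etc", ExtraTreesClassifier()),
    ("gnb", GaussianNB()),
    ("xgb", XGBClassifier(eval_metric="logloss")),
    ("svc", SVC(probability=True)),
    ("bag", BaggingClassifier()),
    ("nusvc", NuSVC(probability=True)),
    ("gbc", GradientBoostingClassifier()),
]

# SVC meta-classifier trained on base-learner outputs via stratified K-fold CV.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=SVC(),
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)

# Standardize, reduce with kernel PCA, then classify with the stacked ensemble.
model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=128, kernel="rbf"),
    stack,
)
model.fit(X, y)
print("Training accuracy:", model.score(X, y))
```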