Attention-Based Multi-Modal Multi-View Fusion Approach for Driver Facial Expression Recognition
- Source :
- IEEE Access, Vol 12, Pp 137203-137221 (2024)
- Publication Year :
- 2024
- Publisher :
- IEEE, 2024.
-
Abstract
- As Advanced Driver Assistance Systems (ADAS) become increasingly intelligent, facial expression recognition (FER) has become a significant requirement for monitoring a driver’s emotional state and fatigue level. An automobile system with FER can improve transportation safety by recognizing the driver’s state, providing timely alerts, and potentially reducing the likelihood of accidents. While deep neural network (DNN) based systems have achieved high FER accuracy in recent years on data collected in controlled laboratory environments, recognizing real-world facial expressions remains challenging due to variations in lighting and head pose, which are especially prevalent in driving scenarios. In this paper, we propose an attention-based multi-modal and multi-view fusion FER model that can accurately recognize facial expressions regardless of lighting conditions or head poses, using image data of multiple modalities, including RGB, near-infrared (NIR), and depth maps, captured from different viewpoints. The model is developed on a novel facial expression dataset we collected that includes multiple modalities captured from multiple viewpoints, with varying lighting conditions and head poses. Our multi-modal and multi-view fusion approach shows superior performance compared with models that use data from a single modality or view. The model achieves an accuracy of over 95% when recognizing drivers’ facial expressions in real-world scenarios, even in poor lighting conditions and with varied head poses.
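The core idea the abstract describes, weighting features from each modality (RGB, NIR, depth) by learned attention before combining them, can be sketched as a softmax-weighted feature fusion. The function and the relevance scores below are illustrative assumptions for exposition; the paper's actual attention module, architecture, and feature dimensions are not specified in this record.

```python
import numpy as np

def attention_fuse(features, scores):
    """Fuse per-modality feature vectors with softmax attention weights.

    features: dict mapping modality name -> 1-D feature vector (np.ndarray)
    scores:   dict mapping modality name -> scalar relevance score.
              In the paper these would come from a learned attention
              module; here they are fixed toy values (an assumption).
    Returns the fused feature vector and the per-modality weights.
    """
    mods = list(features)
    s = np.array([scores[m] for m in mods], dtype=float)
    # numerically stable softmax over the modality scores
    w = np.exp(s - s.max())
    w /= w.sum()
    fused = sum(wi * features[m] for wi, m in zip(w, mods))
    return fused, dict(zip(mods, w))

# Toy 4-dimensional features for the three modalities named in the abstract.
feats = {"rgb": np.ones(4), "nir": np.zeros(4), "depth": np.full(4, 2.0)}
# Hypothetical scores: e.g. an attention module might favor NIR in darkness;
# here RGB is scored highest purely for illustration.
scores = {"rgb": 2.0, "nir": 0.5, "depth": 1.0}
fused, weights = attention_fuse(feats, scores)
```

The softmax normalization guarantees the modality weights are positive and sum to one, so the fused vector stays in the convex hull of the per-modality features, which is one common way such attention-based fusion is formulated.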
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 12
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.1d3c34b54154987a4d3a29b616e2646
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2024.3462352