Saha, Purnata, Ansaruddin Kunju, Ali K., Majid, Molla E., Bin Abul Kashem, Saad, Nashbat, Mohammad, Ashraf, Azad, Hasan, Mazhar, Khandakar, Amith, Shafayet Hossain, Md, Alqahtani, Abdulrahman, and Chowdhury, Muhammad E.H.
• A novel deep learning-based approach that reconstructs the EEG signal, removing EOG artifacts while preserving its morphology.
• A comprehensive multimodal performance analysis of EEG, ECG, and PPG in detecting emotions.
• An investigation of a large number of time, frequency, and time–frequency features for emotion recognition.
• A novel multimodal network for emotion recognition that outperforms the state-of-the-art reported performance.

Emotion Recognition Systems (ERS) play a pivotal role in facilitating naturalistic Human-Machine Interactions (HMI). This research utilized a dataset of diverse physiological signals, including Electroencephalogram (EEG), Photoplethysmography (PPG), and Electrocardiogram (ECG), to detect emotions evoked by video stimuli. A key challenge in the EEG data was that the prefrontal channels were contaminated by eye-blink (EOG) artifacts; to address this, a novel 1D deep learning model, MultiResUNet3p, was used to generate clean EEG signals. Extensive time-domain (TD), frequency-domain (FD), and time–frequency-domain (TFD) features were extracted from each modality, and the study found that combining 112 features from EEG and ECG achieved the highest accuracy. The model demonstrated strong performance on the emotion classification task, reaching 96.12% accuracy in binary classification (Positive vs. Negative) and 94.25% accuracy in multiclass classification of six emotions (Happy, Anger, Disgust, Fear, Neutral, and Sad). These results underscore the potential of integrating multiple physiological signals and advanced techniques to significantly improve emotion recognition accuracy, particularly in real-world scenarios involving naturalistic HMI.
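The abstract does not detail the MultiResUNet3p architecture itself. As a rough illustration of the signal-to-signal denoising idea it describes, the sketch below implements a much simpler 1D encoder-decoder with skip connections in PyTorch, mapping a contaminated EEG segment to a cleaned one. The layer sizes, segment length, and training setup here are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 1D convolutions with batch norm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class UNet1D(nn.Module):
    """Simplified 1D encoder-decoder with skip connections.
    A stand-in for the paper's MultiResUNet3p; architecture details assumed."""
    def __init__(self, channels=1, base=16):
        super().__init__()
        self.enc1 = ConvBlock(channels, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.bottleneck = ConvBlock(base * 2, base * 4)
        self.pool = nn.MaxPool1d(2)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = ConvBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        self.out = nn.Conv1d(base, channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # (B, base, L)
        e2 = self.enc2(self.pool(e1))       # (B, 2*base, L/2)
        b = self.bottleneck(self.pool(e2))  # (B, 4*base, L/4)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                 # cleaned signal, same shape as input

# Such a model would be trained against EOG-free reference segments,
# e.g. with an MSE reconstruction loss.
model = UNet1D()
noisy = torch.randn(8, 1, 512)  # batch of contaminated EEG segments (synthetic)
clean = model(noisy)            # reconstructed (denoised) EEG, shape (8, 1, 512)
```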
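Likewise, the abstract names the feature domains (TD, FD, TFD) but not the specific features. Below is a minimal sketch of the kinds of features commonly computed from physiological signal segments; the chosen statistics, the EEG band edges, and the 256 Hz sampling rate are illustrative assumptions rather than the paper's feature set.

```python
import numpy as np
from scipy.signal import welch, stft

FS = 256  # assumed sampling rate in Hz

def td_features(x):
    """Time-domain (TD) statistics of one signal segment."""
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": np.sqrt(np.mean(x ** 2)),
        "skew": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "kurtosis": ((x - x.mean()) ** 4).mean() / (x.std() ** 4 + 1e-12),
    }

def fd_features(x, fs=FS):
    """Frequency-domain (FD) band powers from Welch's PSD (EEG-style bands)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in bands.items()}

def tfd_features(x, fs=FS):
    """Time-frequency-domain (TFD) statistics of the STFT magnitude."""
    _, _, Z = stft(x, fs=fs, nperseg=128)
    mag = np.abs(Z)
    return {"stft_mean": mag.mean(), "stft_std": mag.std(), "stft_max": mag.max()}

segment = np.random.randn(FS * 5)  # 5 s of synthetic signal as a placeholder
features = {**td_features(segment), **fd_features(segment), **tfd_features(segment)}
```

In a multimodal pipeline like the one described, feature dictionaries of this form would be computed per channel and per modality (EEG, ECG, PPG), concatenated, and then reduced by feature selection to a subset such as the 112 EEG+ECG features the study reports as best-performing.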