AttDiCNN: Attentive Dilated Convolutional Neural Network for Automatic Sleep Staging using Visibility Graph and Force-directed Layout
- Authors
Md Jobayer, Md. Mehedi Hasan Shawon, Tasfin Mahmud, Md. Borhan Uddin Antor, and Arshad M. Chowdhury
- Subjects
Electrical Engineering and Systems Science - Signal Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Human-Computer Interaction, Computer Science - Machine Learning
- Abstract
Sleep stages play an essential role in identifying sleep patterns and diagnosing sleep disorders. In this study, we present an automated sleep stage classifier termed the Attentive Dilated Convolutional Neural Network (AttDiCNN), which uses deep learning methodologies to address data heterogeneity and computational complexity while providing reliable automatic sleep staging. We employed a force-directed layout based on the visibility graph to capture the most significant information from the EEG signals, representing their spatio-temporal features. The proposed network consists of three components: the Localized Spatial Feature Extraction Network (LSFE), the Spatio-Temporal-Temporal Long Retention Network (S2TLR), and the Global Averaging Attention Network (G2A). The LSFE captures spatial information from the sleep data, the S2TLR extracts the most pertinent information over long-term contexts, and the G2A reduces computational overhead by aggregating the information from the LSFE and the S2TLR. We evaluated our model on three comprehensive, publicly accessible datasets, achieving state-of-the-art accuracies of 98.56%, 99.66%, and 99.08% on the EDFX, HMC, and NCH datasets, respectively, while maintaining low computational complexity with 1.4 M parameters. The results substantiate that the proposed architecture surpasses existing methodologies on several performance metrics, demonstrating its potential as an automated tool in clinical settings.
- Comment
Under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS); 15-page main paper and 3-page supplementary material
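To make the input representation concrete, the sketch below turns a single EEG epoch into a natural visibility graph and computes a force-directed layout for it. This is a minimal, hypothetical example built on NetworkX's `visibility_graph` and `spring_layout` (a Fruchterman-Reingold force-directed algorithm), not the authors' pipeline; the epoch data, its length, and the final coordinate encoding are stand-ins.

```python
# Minimal sketch (not the authors' code): one EEG epoch -> natural visibility
# graph -> force-directed layout, mirroring the representation the abstract
# describes. Requires numpy and networkx >= 3.0.
import numpy as np
import networkx as nx

rng = np.random.default_rng(seed=42)
# Short synthetic stand-in; a real 30 s epoch at 100 Hz would have 3000 samples.
epoch = rng.standard_normal(256)

# Natural visibility graph: samples i and j are connected iff the straight line
# between (i, epoch[i]) and (j, epoch[j]) passes above every intermediate sample.
G = nx.visibility_graph(epoch)

# Force-directed (Fruchterman-Reingold) layout; each node gets 2-D coordinates.
pos = nx.spring_layout(G, seed=42)

# Stack the node coordinates into an array a CNN front end could consume
# (how the paper actually encodes the layout for AttDiCNN is an assumption here).
features = np.array([pos[i] for i in sorted(G.nodes)])
print(features.shape)  # (256, 2)
```

In the paper, such layouts presumably serve as image-like inputs to the dilated convolutional front end; the final encoding step above is only a placeholder for that stage.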
- Published
2024