Uncertainty-Aware Adaptive Multiscale U-Net for Low-Contrast Cardiac Image Segmentation
- Authors
A. S. M. Sharifuzzaman Sagar, Muhammad Zubair Islam, Jawad Tanveer, and Hyung Seok Kim
- Subjects
deep learning, medical image analysis, uncertainty estimation, segmentation, radiomics
- Abstract
Medical image analysis is critical for diagnosis and treatment planning, particularly for heart disease, a leading cause of mortality worldwide. Precise segmentation of the left atrium, a key structure in cardiac imaging, is essential for detecting conditions such as atrial fibrillation, heart failure, and stroke. However, its complex anatomy, subtle boundaries, and inter-patient variation make accurate segmentation challenging for traditional methods. Recent advances in deep learning, especially semantic segmentation, have shown promise in addressing these limitations by enabling detailed, pixel-wise classification. This study proposes a novel segmentation framework, the Adaptive Multiscale U-Net (AMU-Net), which combines Convolutional Neural Networks (CNNs) with a transformer-based encoder–decoder architecture. The framework introduces a Contextual Dynamic Encoder (CDE) for extracting multi-scale features and capturing long-range dependencies. An Adaptive Feature Decoder Block (AFDB), leveraging an Adaptive Feature Attention Block (AFAB), improves boundary delineation. Additionally, a Spectral Synthesis Fusion Head (SFFH) synthesizes spectral and spatial features, enhancing segmentation performance in low-contrast regions. To ensure robustness, data augmentation techniques such as rotation, scaling, and flipping are applied. A Laplace approximation is employed for uncertainty estimation, enabling interpretability and the identification of low-confidence regions. The proposed model achieves a Dice score of 93.35%, a precision of 94.12%, and a recall of 92.78%, outperforming existing methods.
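As a concrete illustration of the uncertainty-estimation step mentioned in the abstract, the sketch below shows one common way to realize a Laplace approximation over a network's final layer and to derive per-pixel confidence maps from it, along with the Dice metric the paper reports. This is a minimal, hypothetical example, not the authors' implementation: the segmentation head `head` (assumed to be a 1x1 convolution producing binary-mask logits), the `feats_loader` yielding (features, masks) pairs, and the prior precision are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def dice_score(pred, target, eps=1e-6):
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary masks (the reported metric)."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


def fit_diag_laplace(head, feats_loader, prior_prec=1.0):
    """Fit a diagonal Laplace approximation over the final layer's weights.

    The Hessian of the negative log-likelihood is approximated by a
    diagonal empirical Fisher, accumulated here at the batch level as a
    deliberate simplification of the usual per-sample sum.
    """
    w = head.weight
    fisher = torch.zeros_like(w)
    for feats, masks in feats_loader:
        head.zero_grad()
        logits = head(feats)                # (N, 1, H, W) binary-mask logits
        loss = F.binary_cross_entropy_with_logits(logits, masks)
        loss.backward()
        fisher += w.grad.detach() ** 2      # squared-gradient curvature proxy
    # Posterior precision = curvature + Gaussian prior precision;
    # the diagonal posterior variance is its elementwise inverse.
    return 1.0 / (fisher + prior_prec)


@torch.no_grad()
def predict_with_uncertainty(head, feats, var, n_samples=20):
    """Monte Carlo prediction: sample last-layer weights from N(w_MAP, var),
    average the sigmoid outputs, and use the per-pixel std as an uncertainty map."""
    w_map = head.weight.detach()
    probs = []
    for _ in range(n_samples):
        w = w_map + var.sqrt() * torch.randn_like(w_map)
        probs.append(torch.sigmoid(F.conv2d(feats, w, head.bias)))
    probs = torch.stack(probs)
    return probs.mean(0), probs.std(0)
```

A full treatment would accumulate the Fisher term per sample and tune the prior precision on held-out data; both are simplified here for brevity. The per-pixel standard deviation returned above is what flags the low-confidence regions the abstract refers to.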
- Published
2025