1. Aided Diagnosis Model Based on Deep Learning for Glioblastoma, Solitary Brain Metastases, and Primary Central Nervous System Lymphoma with Multi-Modal MRI.
- Author
- Liu, Xiao and Liu, Jie
- Subjects
- *DEEP learning, *CENTRAL nervous system, *CINGULATE cortex, *CANCER diagnosis, *DIAGNOSIS, *MAGNETIC resonance imaging, CENTRAL nervous system tumors
- Abstract
Simple Summary: Diagnosing glioblastoma multiforme (GBM), solitary brain metastases (SBM), and primary central nervous system lymphoma (PCNSL) among malignant tumors of the central nervous system using multi-modal magnetic resonance imaging (MRI) is important for helping physicians develop treatment plans and improve patient prognosis. In this paper, MFFC-Net is developed and validated using deep learning methods to predict these three tumor categories from multi-modal MRI without manual region-of-interest (ROI) delineation. MFFC-Net first uses a multi-encoder with DenseBlocks to extract deep features from multi-modal MRI. Then, the feature fusion layer fuses the deep information between different modalities and tissues. Finally, the spatial-channel attention module suppresses redundant information and activates features related to tumor classification. Compared with radiomics models, MFFC-Net demonstrated higher accuracy. In addition, the results on the different sequences provide important references for future clinical work on MRI image acquisition. We believe that MFFC-Net has the potential to assist in the diagnosis and treatment of brain tumors in the future.

(1) Background: Diagnosis of glioblastoma (GBM), solitary brain metastases (SBM), and primary central nervous system lymphoma (PCNSL) plays a decisive role in the development of personalized treatment plans. Constructing a deep learning classification network to diagnose GBM, SBM, and PCNSL with multi-modal MRI is important and necessary. (2) Subjects: GBM, SBM, and PCNSL were confirmed by histopathology in 1225 subjects (average age 53 years, 671 males) who underwent multi-modal MRI examination, comprising 3.0 T T2 fluid-attenuated inversion recovery (T2-Flair) and contrast-enhanced T1-weighted imaging (CE-T1WI). (3) Methods: This paper introduces MFFC-Net, a classification model based on the fusion of multi-modal MRIs, for the classification of GBM, SBM, and PCNSL.
The network architecture consists of parallel encoders using DenseBlocks to extract features from the different MRI modalities. Subsequently, an L1-norm feature fusion module is applied to enhance the interrelationships among tumor tissues. Then, a spatial-channel self-attention weighting operation is performed after the feature fusion. Finally, the classification results are obtained using the fully connected layer (FC) and Softmax. (4) Results: The ACC of MFFC-Net based on feature fusion was 0.920, better than that of the radiomics model (ACC of 0.829). There was no significant difference in ACC compared to the expert radiologist (0.920 vs. 0.924, p = 0.774). (5) Conclusions: Our MFFC-Net model could distinguish GBM, SBM, and PCNSL preoperatively based on multi-modal MRI, with higher performance than the radiomics model and performance comparable to that of radiologists. [ABSTRACT FROM AUTHOR]
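The abstract does not give the exact form of the L1-norm fusion module, but a common L1-norm fusion strategy weights each modality's encoder feature map by its channel-wise L1 activity before summing. The sketch below illustrates that idea only; the function name, tensor shapes, and normalization scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l1_norm_fuse(feats):
    """Fuse per-modality feature maps (each shaped [C, H, W]) by
    weighting every modality with its channel-wise L1 activity map.
    NOTE: illustrative sketch, not the MFFC-Net implementation."""
    # Per-modality activity map: L1 norm across channels -> [H, W]
    acts = [np.abs(f).sum(axis=0) for f in feats]
    total = np.sum(acts, axis=0) + 1e-8  # avoid division by zero
    # Normalized spatial weights, one map per modality (sums to ~1)
    weights = [a / total for a in acts]
    # Weighted sum of the modality feature maps -> fused [C, H, W]
    return sum(w[None, :, :] * f for w, f in zip(weights, feats))

rng = np.random.default_rng(0)
t2 = rng.standard_normal((64, 8, 8))   # e.g. T2-Flair encoder features
t1c = rng.standard_normal((64, 8, 8))  # e.g. CE-T1WI encoder features
fused = l1_norm_fuse([t2, t1c])
print(fused.shape)
```

Because the weights are normalized per spatial location, a modality with stronger local activation contributes more to the fused map at that location, which matches the stated goal of emphasizing informative tumor-tissue responses across modalities.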
- Published
- 2024
- Full Text
- View/download PDF