1. Compare and contrast: Detecting mammographic soft-tissue lesions with C²-Net.
- Author
- Liu, Yuhang; Zhou, Changsheng; Zhang, Fandong; Zhang, Qianyi; Wang, Siwen; Zhou, Juan; Sheng, Fugeng; Wang, Xiaoqi; Liu, Wanhua; Wang, Yizhou; Yu, Yizhou; Lu, Guangming
- Subjects
- DEEP learning; BREAST cancer; MAMMOGRAMS; BREAST; LEARNING modules; FUSED deposition modeling
- Abstract
• Propose a new end-to-end deep model, C²-Net, that effectively exploits multi-view information for mammographic soft-tissue lesion detection. • Propose three specific modules to compare ipsilateral features, contrast bilateral features, and fuse multi-view features, respectively. • Experimental results on both the DDSM dataset and a large multi-center in-house dataset show that the model achieves state-of-the-art performance.

Graphical abstract: The architecture of the proposed C²-Net. The model takes multi-view mammogram images as input and produces an enhanced representation for further detection. The major components are the Spatial Context Enhancing (SCE) module, the Multi-scale Kernel Pooling (MKP) module, and the Logic Guided Fusion (LGF) module. The SCE module learns to compare corresponding regions of cross-view mammogram images, while the MKP module tolerates geometric deformation for a robust contrast of asymmetry. The LGF module then distills the enhanced representation by fusing multi-view features. [Display omitted]

Detecting breast soft-tissue lesions, including masses, structural distortions, and asymmetries, is of great importance because of their high risk of leading to breast cancer. Most existing deep-learning-based approaches detect lesions using only unilateral images. However, multi-view mammogram images provide highly related and complementary information that makes clinical analysis more comprehensive and reliable. In this paper, we propose a multi-view network for breast soft-tissue lesion detection called C²-Net (Compare and Contrast) that fuses information across different views. The proposed model contains three modules. The spatial context enhancing (SCE) module compares ipsilateral views and extracts complementary features to model a lesion's inherent 3D structure. The multi-scale kernel pooling (MKP) module contrasts contralateral views with added tolerance to misalignment. Finally, the logic guided fusion (LGF) module fuses multi-view features by enhancing logic modeling capacity. Experimental results on both the public DDSM dataset and the in-house multi-center dataset demonstrate that the proposed method achieves state-of-the-art performance. [ABSTRACT FROM AUTHOR]
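The abstract's MKP idea, contrasting bilateral (left/right) views while tolerating small misalignments, can be illustrated with a minimal sketch in plain Python. The function names and the simple stride-1 max-pooling formulation here are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (not the paper's code): contrast two breast "views"
# with multi-scale max pooling so that small spatial misalignments do not
# dominate the asymmetry signal. Feature maps are 2D lists of numbers.

def max_pool_2d(feat, k):
    """Max-pool a 2D feature map with a k x k window (stride 1, no padding)."""
    h, w = len(feat), len(feat[0])
    return [
        [max(feat[i + di][j + dj] for di in range(k) for dj in range(k))
         for j in range(w - k + 1)]
        for i in range(h - k + 1)
    ]

def multiscale_contrast(left, right, kernel_sizes=(1, 2, 3)):
    """For each kernel size, pool both views and take the absolute
    element-wise difference; larger kernels absorb larger shifts."""
    contrasts = []
    for k in kernel_sizes:
        pl, pr = max_pool_2d(left, k), max_pool_2d(right, k)
        diff = [[abs(a - b) for a, b in zip(row_l, row_r)]
                for row_l, row_r in zip(pl, pr)]
        contrasts.append(diff)
    return contrasts
```

For example, if the right view is the left view shifted by one pixel, the kernel-size-1 contrast map is nonzero, while a 3×3 pooling window absorbs the shift and its contrast map is all zeros, which is the misalignment tolerance the MKP module is described as providing.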
- Published
- 2021