Attention feature fusion methodology with additional constraint for ovarian lesion diagnosis on magnetic resonance images.
- Author
- Wang, Shuai; Xu, Xiaojuan; Du, Huiqian; Chen, Yan; Mei, Wenbo
- Subjects
- MAGNETIC resonance imaging, CONTRAST-enhanced magnetic resonance imaging, INDUCED ovulation, CONVOLUTIONAL neural networks, COMPUTER vision, FEATURE extraction, TUMOR diagnosis
- Abstract
- Purpose: It is challenging for radiologists and gynecologists to identify the type of an ovarian lesion from magnetic resonance (MR) images. Recently developed convolutional neural networks (CNNs) have made great progress in computer vision, but their architectures still need modification when applied to medical images. This study aims to improve the feature extraction capability of CNNs and thereby promote diagnostic performance in discriminating between benign and malignant ovarian lesions. Methods: We introduce a feature fusion architecture and insert attention models into the neural network. The features extracted from different middle layers are integrated with reoptimized spatial and channel weights. We add a loss function to constrain the additional probability vector generated from the integrated features, thus guiding the middle layers to emphasize useful information. We analyzed 159 lesions imaged by dynamic contrast-enhanced MR imaging (DCE-MRI), including 73 benign lesions and 86 malignant lesions. Senior radiologists selected and labeled the tumor regions based on the pathology reports. The tumor regions were then cropped into 7494 nonoverlapping image patches for training and testing, and the type of a single tumor was determined by averaging the probability scores of the image patches belonging to it. Results: We implemented fivefold cross-validation to characterize the proposed method and report the distribution of performance metrics. For all test image patches, the average accuracy of our method is 70.5% with an average area under the curve (AUC) of 0.785, versus 69.4% and 0.773 for the baseline; for the diagnosis of single tumors, our model achieved an average accuracy of 82.4% and an average AUC of 0.916, which were better than the baseline (81.8% and 0.899). Moreover, we evaluated the performance of the proposed method with different CNN backbones and different attention mechanisms. Conclusions: The texture features extracted from different middle layers are crucial for ovarian lesion diagnosis. The proposed method can enhance the feature extraction capabilities of different layers of the network, thereby improving diagnostic performance.
- Published
- 2023