101. MLAF-CapsNet: Multi-lane atrous feature fusion capsule network with contrast limited adaptive histogram equalization for brain tumor classification from MRI images
- Author
- Kwabena Adu, Jingye Cai, Yongbin Yu, Kwabena Owusu-Agyemang, and Patrick Kwabena Mensah
- Subjects
- Statistics and probability, Feature fusion, Computer science, General engineering, Brain tumor, Capsule network, Pattern recognition, MRI images, Artificial intelligence, Medicine, Contrast (vision), Adaptive histogram equalization
- Abstract
Convolutional neural networks (CNNs) have recently displayed remarkable performance in automatic classification and medical image diagnosis. However, CNNs fail to recognize images that have been rotated or differently oriented, which limits their performance. This paper presents a new capsule network (CapsNet) based framework, the multi-lane atrous feature fusion capsule network (MLAF-CapsNet), for brain tumor type classification. The MLAF-CapsNet combines atrous convolutions, which enlarge the receptive field while maintaining spatial representation, with contrast limited adaptive histogram equalization (CLAHE), an improvement of adaptive histogram equalization (AHE) that is applied as a base layer to enhance the input images. The proposed method is evaluated on whole-brain tumor and segmented tumor datasets, and its performance on the two datasets is explored and compared. On the original images of the two datasets, the MLAF-CapsNet achieves better accuracies (93.40% and 96.60%) and precisions (94.21% and 96.55%) in feature extraction than the traditional CapsNet (78.93% and 97.30%). With augmentation of the two datasets, the proposed method achieves the best accuracies (98.48% and 98.82%) and precisions (98.88% and 98.58%) compared to the traditional CapsNet. Our results indicate that the proposed method can improve brain tumor classification and support radiologists in medical diagnostics.
- Published
- 2021
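
The abstract pairs two building blocks: CLAHE enhancement of the input MRI slices and multi-lane atrous (dilated) convolutions whose outputs are fused before the capsule layers. The sketch below illustrates both under stated assumptions, using OpenCV's CLAHE and PyTorch dilated convolutions; the three-lane layout, dilation rates (1, 2, 4), and channel counts are illustrative placeholders, not the paper's reported configuration.

```python
# Minimal sketch of CLAHE preprocessing plus multi-lane atrous feature fusion.
# Assumptions: OpenCV for CLAHE, PyTorch for the lanes; lane count, dilation
# rates, and channel widths are hypothetical, not the authors' exact design.
import cv2
import numpy as np
import torch
import torch.nn as nn


def clahe_preprocess(gray: np.ndarray, clip_limit: float = 2.0,
                     tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Enhance a uint8 grayscale MRI slice with CLAHE before it enters the network."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)


class MultiLaneAtrousFusion(nn.Module):
    """Parallel atrous convolution lanes with different dilation rates.

    Padding equals the dilation rate for 3x3 kernels, so every lane enlarges
    the receptive field while preserving the spatial size; the lane outputs
    are concatenated along the channel axis.
    """

    def __init__(self, in_ch: int = 1, ch_per_lane: int = 32,
                 dilations: tuple = (1, 2, 4)):
        super().__init__()
        self.lanes = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, ch_per_lane, kernel_size=3,
                          padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the lanes; in a CapsNet this fused tensor would feed the
        # primary-capsule layer.
        return torch.cat([lane(x) for lane in self.lanes], dim=1)


if __name__ == "__main__":
    slice_u8 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in MRI slice
    enhanced = clahe_preprocess(slice_u8)
    x = torch.from_numpy(enhanced).float().div(255.0)[None, None]  # (1, 1, H, W)
    fused = MultiLaneAtrousFusion()(x)
    print(fused.shape)  # torch.Size([1, 96, 128, 128]) -- spatial size preserved
```

Concatenating lanes with distinct dilation rates is a standard way to fuse multi-scale context without pooling, which matches the abstract's claim that the atrous lanes enlarge the receptive field while maintaining spatial representation.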