Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation
- Author
- Chung-Yang Huang, Winston H. Hsu, Yen-Liang Lin, and Kuan-Lun Tseng
- Subjects
- FOS: Computer and information sciences, Computer Science - Computer Vision and Pattern Recognition (cs.CV), Deep learning, Artificial neural network, Convolutional neural network, Image segmentation, Sequence learning, Pattern recognition, Machine learning, Artificial intelligence
- Abstract
- Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them either use a single modality or stack multiple modalities as different input channels. To better leverage the multi-modalities, we propose a deep encoder-decoder structure with cross-modality convolution layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM to model a sequence of 2D slices, and jointly learn the multi-modalities and the convolutional LSTM in an end-to-end manner. To avoid the network converging to only certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 show that our method outperforms state-of-the-art biomedical segmentation approaches. (CVPR 2017)
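The pieces named in the abstract (per-modality encoding, cross-modality fusion, a convolutional LSTM over the slice sequence, and loss re-weighting for label imbalance) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: it assumes four MRI modalities and five output classes (as in BRATS), uses a 1x1 convolution as a stand-in for the paper's cross-modality convolution layer, a hand-rolled ConvLSTM cell, and a simple class-weighted cross-entropy in place of the paper's re-weighting scheme and two-phase training; the real model is a deeper encoder-decoder.

```python
import torch
import torch.nn as nn


class CrossModalityConv(nn.Module):
    """Per-modality 2D convolutions followed by a 1x1 fusion across modalities."""
    def __init__(self, n_modalities=4, feat=16):
        super().__init__()
        # one small encoder per modality (weights are not shared across modalities)
        self.encoders = nn.ModuleList(
            [nn.Conv2d(1, feat, kernel_size=3, padding=1) for _ in range(n_modalities)]
        )
        # fuse the stacked per-modality feature maps with a 1x1 convolution
        self.fuse = nn.Conv2d(n_modalities * feat, feat, kernel_size=1)

    def forward(self, x):  # x: (B, M, H, W) -- one 2D slice per modality
        feats = [enc(x[:, m:m + 1]) for m, enc in enumerate(self.encoders)]
        return self.fuse(torch.cat(feats, dim=1))  # (B, feat, H, W)


class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the gates are computed with convolutions."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class SliceSequenceSegmenter(nn.Module):
    """Cross-modality features of consecutive slices are passed through a
    ConvLSTM; a 1x1 head predicts per-pixel class scores for every slice."""
    def __init__(self, n_modalities=4, feat=16, n_classes=5):
        super().__init__()
        self.feat = feat
        self.cmc = CrossModalityConv(n_modalities, feat)
        self.cell = ConvLSTMCell(feat, feat)
        self.head = nn.Conv2d(feat, n_classes, kernel_size=1)

    def forward(self, volume):  # volume: (B, D, M, H, W) -- D slices, M modalities
        B, D, M, H, W = volume.shape
        h = volume.new_zeros(B, self.feat, H, W)
        c = volume.new_zeros(B, self.feat, H, W)
        logits = []
        for d in range(D):  # walk through the slice sequence
            h, c = self.cell(self.cmc(volume[:, d]), (h, c))
            logits.append(self.head(h))
        return torch.stack(logits, dim=1)  # (B, D, n_classes, H, W)


# Label imbalance: a simple inverse-frequency class weighting of the loss,
# standing in for the paper's re-weighting scheme and two-phase training.
class_weights = torch.tensor([0.1, 1.0, 1.0, 1.0, 1.0])  # background down-weighted
criterion = nn.CrossEntropyLoss(weight=class_weights)

# toy forward pass: 2 volumes, 8 slices, 4 modalities, 64x64 pixels
model = SliceSequenceSegmenter()
out = model(torch.randn(2, 8, 4, 64, 64))  # -> (2, 8, 5, 64, 64)
loss = criterion(out.flatten(0, 1), torch.randint(0, 5, (2 * 8, 64, 64)))
```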
- Published
- 2017