
Towards cross-modal organ translation and segmentation: A cycle- and shape-consistent generative adversarial network.

Authors :
Cai, Jinzheng
Zhang, Zizhao
Cui, Lei
Zheng, Yefeng
Yang, Lin
Source :
Medical Image Analysis. Feb 2019, Vol. 52, p174-184. 11p.
Publication Year :
2019

Abstract

Highlights
• This paper presents extensive experiments to validate our method, including CT and MRI translation for pancreas segmentation and domain adaptation of mammography X-rays for breast lesion segmentation.
• This paper validates our method with a variety of advanced segmentation networks and shows that it performs well in general and consistently boosts medical 2D/3D image segmentation performance.
• This paper systematically analyzes the effect of synthetic data on segmentation, with the goal of investigating the limitations of synthetic data and inspiring new research directions.

Synthesized medical images have several important applications. For instance, they can serve as an intermedium in cross-modality image registration or as augmented training samples to boost the generalization capability of a classifier. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic-looking 2D/3D images without needing paired training data, 2) ensuring consistent anatomical structures, which could be altered by geometric distortion in cross-modality synthesis, and 3) more importantly, improving volume segmentation by using synthetic data for modalities with limited training samples. We show that these goals can be achieved with an end-to-end 2D/3D convolutional neural network (CNN) composed of mutually beneficial generators and segmentors for the image synthesis and segmentation tasks. The generators are trained with an adversarial loss, a cycle-consistency loss, and a shape-consistency loss (supervised by the segmentors) to reduce geometric distortion. From the segmentation view, the segmentors are boosted by synthetic data from the generators in an online manner. Generators and segmentors prompt each other alternately in an end-to-end training fashion. We validate the proposed method on three datasets, including cardiovascular CT and magnetic resonance imaging (MRI), abdominal CT and MRI, and mammography X-rays from different data domains, showing that the two tasks benefit each other and that coupling them yields better performance than solving them separately. [ABSTRACT FROM AUTHOR]
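
The abstract describes generators trained with a combination of adversarial, cycle-consistency, and shape-consistency losses, the last supervised by segmentors. The sketch below is a minimal, illustrative rendering of that combined objective for one translation direction; it is not the authors' implementation, and the network handles, loss choices, and weighting factors are assumptions for illustration only.

```python
# Hypothetical sketch of the combined generator objective described in the abstract:
# adversarial + cycle-consistency + shape-consistency (segmentor-supervised) losses.
# G_ab / G_ba, D_b, and S_b are assumed generator, discriminator, and segmentor modules.
import torch
import torch.nn as nn

adv_loss = nn.MSELoss()             # least-squares GAN loss (assumed choice)
cycle_loss = nn.L1Loss()            # cycle-consistency term
shape_loss = nn.CrossEntropyLoss()  # shape-consistency via segmentation labels

def generator_step(G_ab, G_ba, D_b, S_b, x_a, seg_a,
                   lambda_cyc=10.0, lambda_shape=1.0):
    """One generator update for the A -> B direction (B -> A is symmetric)."""
    fake_b = G_ab(x_a)          # translate modality A to modality B
    rec_a = G_ba(fake_b)        # map back to A for the cycle term

    # Adversarial: the discriminator D_b should score the synthetic B image as real.
    pred = D_b(fake_b)
    loss_adv = adv_loss(pred, torch.ones_like(pred))

    # Cycle-consistency: A -> B -> A should reconstruct the input image.
    loss_cyc = cycle_loss(rec_a, x_a)

    # Shape-consistency: the segmentor on the synthetic B image should still recover
    # the source anatomy labels, discouraging geometric distortion.
    loss_shape = shape_loss(S_b(fake_b), seg_a)

    return loss_adv + lambda_cyc * loss_cyc + lambda_shape * loss_shape
```

In the paper's alternating scheme, the segmentors are in turn updated on both real images and the generators' synthetic outputs, so each side improves the other during training.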

Details

Language :
English
ISSN :
1361-8415
Volume :
52
Database :
Academic Search Index
Journal :
Medical Image Analysis
Publication Type :
Academic Journal
Accession number :
134151872
Full Text :
https://doi.org/10.1016/j.media.2018.12.002