Synthetic CT Generation of the Pelvis in Patients With Cervical Cancer: A Single Input Approach Using Generative Adversarial Network
- Authors
Atallah Baydoun, Elisha T. Fredman, Huan Yang, Raj Mohan Paspulati, Pengjiang Qian, Jin Uk Heo, Rodney J. Ellis, Melanie Traughber, Ke Xu, Tarun Podder, Feifei Zhou, Raymond F. Muzic, Bryan Traughber, and Latoya A. Bethell
- Subjects
Computed tomography, Magnetic resonance imaging, Positron emission tomography, Cervical cancer, Pelvis, Radiation treatment planning, Radiation exposure, Precision medicine, Deep learning, Generative adversarial network, U-Net, Feature extraction, Computer vision, Artificial intelligence
- Abstract
Multi-modality imaging constitutes a foundation of precision medicine, especially in oncology, where reliable and rapid imaging techniques are needed to ensure adequate diagnosis and treatment. In cervical cancer, precision oncology requires the acquisition of 18F-labelled 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), magnetic resonance (MR), and computed tomography (CT) images. Thereafter, the images are co-registered to derive the electron density attributes required for FDG-PET attenuation correction and radiation therapy planning. However, this traditional approach is subject to MR-CT registration errors, increases treatment costs, and raises the patient's radiation exposure. To overcome these disadvantages, we propose a new framework for cross-modality image synthesis, which we apply to MR-to-CT image translation for cervical cancer diagnosis and treatment. The framework is based on a conditional generative adversarial network (cGAN) and illustrates a novel tactic that addresses, simply but efficiently, the trade-off between vanishing gradients and feature extraction in deep learning. Its contributions are summarized as follows: 1) The approach, termed sU-cGAN, uses, for the first time, a shallow U-Net (sU-Net) with an encoder/decoder depth of 2 as the generator; 2) sU-cGAN's input is the same MR sequence that is used for radiological diagnosis, i.e., T2-weighted, Turbo Spin Echo Single Shot (TSE-SSH) MR images; 3) Despite limited training data and a single-input-channel approach, sU-cGAN outperforms other state-of-the-art deep learning methods and enables accurate synthetic CT (sCT) generation. In conclusion, the suggested framework should be studied further in clinical settings. Moreover, the sU-Net model is worth exploring in other computer vision tasks.
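The abstract describes the sU-Net generator only as a U-Net with an encoder/decoder depth of 2 and a single input channel (the T2-weighted MR image) producing a single-channel sCT. A minimal PyTorch sketch consistent with that description might look like the following; the layer widths, activations, and convolution choices here are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; a common U-Net building block (assumed here).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class ShallowUNet(nn.Module):
    """Illustrative shallow U-Net: encoder/decoder depth of 2, single in/out channel."""
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)          # encoder level 1
        self.enc2 = conv_block(base, base * 2)       # encoder level 2
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)   # input = upsampled + skip
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, out_ch, kernel_size=1)  # 1x1 conv to sCT intensities

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection, level 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip connection, level 1
        return self.out(d1)

# A single-channel MR slice in, a single-channel synthetic-CT slice of the same size out.
model = ShallowUNet()
sct = model(torch.randn(1, 1, 64, 64))
```

In the full sU-cGAN this generator would be trained adversarially against a discriminator conditioned on the MR input; that training loop is omitted here.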
- Published
- 2021