Synthesizing high-resolution magnetic resonance imaging using parallel cycle-consistent generative adversarial networks for fast magnetic resonance imaging.
- Author
- Xie, Huiqiao; Lei, Yang; Wang, Tonghe; Roper, Justin; Dhabaan, Anees H.; Bradley, Jeffrey D.; Liu, Tian; Mao, Hui; and Yang, Xiaofeng
- Subjects
- GENERATIVE adversarial networks; DEEP learning; MAGNETIC resonance
- Abstract
- Purpose: The common practice in acquiring magnetic resonance (MR) images is to obtain two-dimensional (2D) slices at coarsely spaced locations while keeping high in-plane resolution, in order to ensure sufficient body coverage while shortening the MR scan time. The aim of this study is to propose a novel method to generate high-resolution (HR) MR images from MR images that have low resolution along the longitudinal direction. To address the difficulty of collecting paired low- and high-resolution MR images in clinical settings, and to exploit the advantage of cycle-consistent generative adversarial networks (CycleGANs) in synthesizing realistic medical images, we developed a parallel-CycleGAN-based method using a self-supervised strategy. Methods and materials: The proposed workflow consists of two CycleGANs, trained in parallel, that independently predict the HR MR images in the two planes orthogonal to the longitudinal MR scan direction. The final synthetic HR MR images are then generated by fusing the two predicted images. MR images of the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR) images, were processed to evaluate the proposed workflow along the cranial–caudal (CC), lateral, and anterior–posterior directions. Institutionally collected MR images were also processed to evaluate the proposed method. The performance of the proposed method was investigated via both qualitative and quantitative evaluations. The metrics of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), edge-keeping index (EKI), structural similarity index measure (SSIM), information fidelity criterion (IFC), and visual information fidelity in pixel domain (VIFP) were calculated. Results: The proposed method can generate HR MR images that are visually indistinguishable from the ground truth in the investigations on the BraTS2020 dataset. The intensity profiles, difference images, and SSIM maps also confirm the feasibility of the proposed method for synthesizing HR MR images. Quantitative evaluations on the BraTS2020 dataset show that the calculated metrics of the synthetic HR MR images are all improved for the T1, T1CE, T2, and FLAIR images. The improvements in the numerical metrics over the low-resolution and bi-cubically interpolated MR images, as well as over those generated with a comparative deep learning method, are statistically significant. Qualitative evaluation of the synthetic HR MR images of the clinically collected dataset also confirms the feasibility of the proposed method. Conclusions: It is feasible to synthesize HR MR images with the proposed self-supervised parallel CycleGANs, which can be expected to shorten MR acquisition time in clinical practice. [ABSTRACT FROM AUTHOR]
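- Illustration: below is a minimal Python sketch (not the authors' released code) of the post-processing and evaluation steps summarized in the abstract: the HR volumes predicted by the two CycleGANs in the orthogonal planes are fused, and a subset of the reported metrics (NMAE, PSNR, SSIM) is computed. The voxel-wise averaging used for the fusion and the normalization of NMAE by the ground-truth intensity range are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the fusion and evaluation steps described in the abstract.
# Assumptions (not from the paper): the two plane-wise predictions are fused by
# voxel-wise averaging, and NMAE is normalized by the ground-truth intensity range.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def fuse_orthogonal_predictions(pred_plane_a: np.ndarray,
                                pred_plane_b: np.ndarray) -> np.ndarray:
    """Fuse the HR volumes predicted in the two orthogonal planes (assumed: mean)."""
    return 0.5 * (pred_plane_a + pred_plane_b)


def evaluate(synthetic_hr: np.ndarray, ground_truth_hr: np.ndarray) -> dict:
    """Compute a subset of the reported metrics (NMAE, PSNR, SSIM)."""
    data_range = float(ground_truth_hr.max() - ground_truth_hr.min())
    nmae = float(np.mean(np.abs(synthetic_hr - ground_truth_hr)) / data_range)
    psnr = peak_signal_noise_ratio(ground_truth_hr, synthetic_hr, data_range=data_range)
    ssim = structural_similarity(ground_truth_hr, synthetic_hr, data_range=data_range)
    return {"NMAE": nmae, "PSNR": psnr, "SSIM": ssim}


if __name__ == "__main__":
    # Toy volumes standing in for the predicted and ground-truth HR MR images.
    rng = np.random.default_rng(0)
    gt = rng.random((64, 64, 64)).astype(np.float32)
    pred_a = gt + 0.01 * rng.standard_normal(gt.shape).astype(np.float32)
    pred_b = gt + 0.01 * rng.standard_normal(gt.shape).astype(np.float32)
    fused = fuse_orthogonal_predictions(pred_a, pred_b)
    print(evaluate(fused, gt))
```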
- Published
- 2022