Development of attenuation correction methods using deep learning in brain-perfusion single-photon emission computed tomography
- Authors
Hiroki Suyari, Masato Tsuneda, Takuro Horikoshi, Joji Ota, Hajime Yokota, Takashi Iimori, Yoshitada Masuda, Ryuna Kurosawa, Takashi Uno, Taisuke Murata, Ryuhei Yamato, Koichi Sawada, Takuma Hashimoto, and Yasukuni Mori
- Subjects
Single-photon emission computed tomography (SPECT), SPECT/CT, Perfusion imaging, Brain, Attenuation correction, Deep learning, Autoencoder, Artificial intelligence, Image processing (computer-assisted), Wilcoxon signed-rank test, Nuclear medicine, Humans
- Abstract
PURPOSE Computed tomography (CT)-based attenuation correction (CTAC) in single-photon emission computed tomography (SPECT) is highly accurate, but it requires hybrid SPECT/CT instruments and entails additional radiation exposure. To obtain attenuation correction (AC) without additional CT images, a deep learning method that generates pseudo-CT images has previously been reported; however, this approach is limited by its cross-modality transformation, which can result in misalignment and modality-specific artifacts. This study aimed to develop a deep learning-based approach that uses non-attenuation-corrected (NAC) images and CTAC-based images for training to yield AC images in brain-perfusion SPECT. This study also investigated whether the proposed approach is superior to conventional Chang's AC (ChangAC). METHODS In total, 236 patients who underwent brain-perfusion SPECT were randomly divided into two groups: a training group (189 patients; 80%) and a test group (47 patients; 20%). Two models were constructed, one using an Autoencoder (AutoencoderAC) and one using a U-Net (U-NetAC). The ChangAC, AutoencoderAC, and U-NetAC approaches were compared with CTAC using qualitative analysis (visual evaluation) and quantitative analysis (normalized mean squared error [NMSE] and the percentage error in each brain region). Statistical analyses were performed using the Wilcoxon signed-rank test and Bland-Altman analysis. RESULTS U-NetAC had the highest visual evaluation score. The NMSE results for U-NetAC were the lowest, followed by AutoencoderAC and ChangAC (P
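The abstract's evaluation pipeline rests on two reproducible ingredients: a patient-level 80/20 split of the 236-patient cohort and an NMSE comparison of each AC image against the CTAC reference. A minimal sketch of both is given below; note that the exact NMSE formula is not stated in the abstract, so the common definition (squared error normalized by the reference's squared norm) is assumed, and the random seed and array shapes are illustrative only.

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean squared error of a predicted AC image against the
    CTAC reference. Assumed definition: ||pred - ref||^2 / ||ref||^2
    (the paper's exact formula is not given in the abstract)."""
    pred = np.asarray(pred, dtype=np.float64)
    ref = np.asarray(ref, dtype=np.float64)
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

# Patient-level random split mirroring the cohort described in the
# abstract: 236 patients -> 189 training (80%) and 47 test (20%).
rng = np.random.default_rng(seed=0)  # seed is illustrative
patient_ids = rng.permutation(236)
train_ids, test_ids = patient_ids[:189], patient_ids[189:]
```

Splitting by patient (rather than by image slice) prevents slices from the same subject from leaking between the training and test sets, which is the usual rationale for this kind of cohort-level division.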
- Published
- 2021