
Deep‐learning‐based image registration and automatic segmentation of organs‐at‐risk in cone‐beam CT scans from high‐dose radiation treatment of pancreatic cancer.

Authors :
Han, Xu
Hong, Jun
Reyngold, Marsha
Crane, Christopher
Cuaron, John
Hajj, Carla
Mann, Justin
Zinovoy, Melissa
Greer, Hastings
Yorke, Ellen
Mageras, Gig
Niethammer, Marc
Source :
Medical Physics, June 2021, Vol. 48, Issue 6, pp. 3084-3095 (12 pp.)
Publication Year :
2021

Abstract

Purpose: Accurate deformable registration between computed tomography (CT) and cone-beam CT (CBCT) images of pancreatic cancer patients treated with high biologically effective radiation doses is essential to assess changes in organ-at-risk (OAR) locations and shapes and to compute delivered dose. This study describes the development and evaluation of a deep-learning (DL) registration model to predict OAR segmentations on the CBCT derived from segmentations on the planning CT.

Methods: The DL model is trained with CT-CBCT image pairs of the same patient, on which OAR segmentations of the small bowel, stomach, and duodenum have been manually drawn. A transformation map is obtained, which serves to warp the CT image and segmentations. In addition to a regularity loss and an image similarity loss, an OAR segmentation similarity loss is used during training, which penalizes the mismatch between warped CT segmentations and manually drawn CBCT segmentations. At test time, CBCT segmentations are not required, as they are instead obtained from the warped CT segmentations. In an IRB-approved retrospective study, a dataset of 40 patients, each with one planning CT and two CBCT scans, was used in a fivefold cross-validation to train and evaluate the model, using physician-drawn segmentations as reference. Images were preprocessed to remove gas pockets. Network performance was compared to two intensity-based deformable registration algorithms (large deformation diffeomorphic metric mapping [LDDMM] and multimodality free-form [MMFF]) as baselines. Evaluated metrics were the Dice similarity coefficient (DSC), the change in OAR volume within a volume of interest (enclosing the low-dose PTV plus a 1 cm margin) from planning CT to CBCT, and the maximum dose to 5 cm³ of the OAR [D(5cc)].

Results: Processing time for one CT-CBCT registration with the DL model at test time was less than 5 seconds on a GPU-based system, compared to an average of 30 minutes for LDDMM optimization. For both small bowel and stomach/duodenum, the DL model yielded a larger median DSC and smaller interquartile variation than either MMFF (paired t-test, P < 10⁻⁴ for both types of OARs) or LDDMM (P < 10⁻³ and P = 0.03, respectively). The root-mean-square deviation (RMSD) of the DL-predicted change in small bowel volume relative to reference was 22% less than for MMFF (P = 0.007). The RMSD of the DL-predicted stomach/duodenum volume change was 28% less than for LDDMM (P = 0.0001). The RMSD of the DL-predicted D(5cc) in small bowel was 39% less than for MMFF (P = 0.001); in stomach/duodenum, the RMSD of the DL-predicted D(5cc) was 18% less than for LDDMM (P < 10⁻³).

Conclusions: The proposed deep-network CT-to-CBCT deformable registration model shows improved segmentation accuracy compared to intensity-based algorithms and achieves an order-of-magnitude reduction in processing time.
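Editor's note: the training objective described in the Methods combines an image similarity loss, a regularity loss on the transformation, and an OAR segmentation similarity loss. The Python sketch below illustrates one way such a composite loss can be assembled; the MSE similarity measure, gradient-based smoothness regularizer, soft-Dice segmentation term, loss weights, and the normalized displacement-field parameterization are assumptions for illustration only, not the authors' implementation.

# Illustrative sketch only -- NOT the authors' implementation.
# Similarity measure, regularizer, weights, and field parameterization are assumed.
import torch
import torch.nn.functional as F

def warp(image, displacement):
    """Warp a volume (B, C, D, H, W) with a dense displacement field (B, 3, D, H, W).
    The displacement is assumed to be in normalized [-1, 1] grid units."""
    B = image.shape[0]
    theta = torch.eye(3, 4, device=image.device, dtype=image.dtype)
    identity_grid = F.affine_grid(theta.unsqueeze(0).repeat(B, 1, 1),
                                  size=list(image.shape), align_corners=False)
    flow = displacement.permute(0, 2, 3, 4, 1)  # channels last for grid_sample
    return F.grid_sample(image, identity_grid + flow, align_corners=False)

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice between a warped (continuous) mask and a binary mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def smoothness_loss(displacement):
    """Penalize spatial gradients of the displacement field (regularity term)."""
    dz = (displacement[:, :, 1:] - displacement[:, :, :-1]).pow(2).mean()
    dy = (displacement[:, :, :, 1:] - displacement[:, :, :, :-1]).pow(2).mean()
    dx = (displacement[:, :, :, :, 1:] - displacement[:, :, :, :, :-1]).pow(2).mean()
    return dz + dy + dx

def registration_loss(ct, cbct, ct_seg, cbct_seg, displacement,
                      w_sim=1.0, w_reg=1.0, w_seg=1.0):
    """Image similarity + transform regularity + OAR segmentation similarity."""
    warped_ct = warp(ct, displacement)
    warped_seg = warp(ct_seg, displacement)
    loss_sim = F.mse_loss(warped_ct, cbct)           # image similarity (assumed MSE)
    loss_reg = smoothness_loss(displacement)         # regularity of the transformation
    loss_seg = soft_dice_loss(warped_seg, cbct_seg)  # OAR overlap on the CBCT
    return w_sim * loss_sim + w_reg * loss_reg + w_seg * loss_seg

The segmentation term only requires manual CBCT contours during training; at test time the network's displacement field alone is used to warp the planning-CT contours onto the CBCT, consistent with the abstract.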

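Editor's note: the evaluation metrics listed in the Methods (DSC, OAR volume within the VOI, and D(5cc)) can be computed from binary masks and a dose grid as in the NumPy sketch below. The voxel volume, array layout, and the D(5cc) convention (minimum dose received by the hottest 5 cm³) are assumptions for illustration, not taken from the paper.

# Illustrative sketch only -- evaluation metrics as described in the abstract.
# Voxel volume, array layout, and the D(5cc) convention are assumptions.
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_in_voi_cc(oar_mask, voi_mask, voxel_volume_cc):
    """OAR volume (cm^3) restricted to a volume of interest (e.g., low-dose PTV + 1 cm)."""
    return np.logical_and(oar_mask.astype(bool), voi_mask.astype(bool)).sum() * voxel_volume_cc

def d5cc(dose, oar_mask, voxel_volume_cc):
    """Dose (same units as the dose grid) to the hottest 5 cm^3 of the structure."""
    doses = np.sort(dose[oar_mask.astype(bool)])[::-1]   # descending
    n_voxels = int(np.ceil(5.0 / voxel_volume_cc))       # voxels making up 5 cm^3
    n_voxels = min(n_voxels, doses.size)
    return doses[n_voxels - 1] if doses.size else 0.0

In the study, these quantities computed from the DL-warped contours were compared against the same quantities computed from physician-drawn CBCT contours, yielding the RMSD figures reported in the Results.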
Details

Language :
English
ISSN :
0094-2405
Volume :
48
Issue :
6
Database :
Complementary Index
Journal :
Medical Physics
Publication Type :
Academic Journal
Accession number :
151285953
Full Text :
https://doi.org/10.1002/mp.14906