Anatomy-guided multimodal registration by learning segmentation without ground truth: Application to intraprocedural CBCT/MR liver segmentation and registration.

Authors :
Zhou, Bo
Augenfeld, Zachary
Chapiro, Julius
Zhou, S. Kevin
Liu, Chi
Duncan, James S.
Source :
Medical Image Analysis, Vol. 71, July 2021.
Publication Year :
2021

Abstract

• We propose an anatomy-guided multimodal registration framework and demonstrate its successful application to CBCT-MR liver registration.
• To obtain anatomic structure information from CBCT/MR, we propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) that learns segmentation without ground truth.
• With the CBCT/MR segmenters obtained from APA2Seg-Net, we propose a registration pipeline that accurately registers MR and MR-derived information to the CBCT liver to facilitate the TACE procedure.
• We believe this design can be extended to similar image-guided intervention procedures in which standard registration techniques are limited by non-ideal image quality.

Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions, such as Transcatheter Arterial Chemoembolization (TACE) of liver cancer guided by intraprocedural CBCT and pre-operative MR. The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can improve intra-procedural tumor targeting and thereby significantly improve therapeutic outcomes. However, intra-procedural CBCT often suffers from suboptimal image quality due to the lack of signal calibration to Hounsfield units, a limited field of view, and motion/metal artifacts. Under these non-ideal conditions, standard intensity-based multimodal registration methods cannot recover the correct transformation across modalities. Registration based on anatomic structures, such as segmentations or landmarks, offers an efficient alternative, but such anatomic information is not always available. A deep learning-based anatomy extractor could be trained, but it would require large-scale manual annotations on the specific modalities, which are extremely time-consuming to obtain and require expert radiological readers. To tackle these issues, we leverage annotated datasets that already exist in a source modality and propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) that learns segmentation without target-modality ground truth. The resulting segmenters are then integrated into our anatomy-guided multimodal registration pipeline based on robust point matching. Experimental results on in-house TACE patient data demonstrate that APA2Seg-Net generates robust CBCT and MR liver segmentations, and that the anatomy-guided registration framework built on these segmenters provides high-quality multimodal registrations.
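The core APA2Seg-Net idea, obtaining a target-modality (CBCT/MR) segmenter from labels that exist only in a source modality, can be illustrated with a short PyTorch sketch. Everything below (network shapes, loss weights, data) is a hypothetical placeholder, not the authors' implementation: a faithful version would add adversarial losses against real target-modality images and the paper's full anatomy-preserving constraints.

    import torch
    import torch.nn as nn

    def tiny_net(cin=1, cout=1):
        # Hypothetical stand-in for the paper's generator/segmenter backbones.
        return nn.Sequential(nn.Conv2d(cin, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, cout, 3, padding=1))

    g_s2t = tiny_net()   # translate source (labeled, e.g. CT) -> target (CBCT or MR)
    g_t2s = tiny_net()   # translate target -> source, for cycle consistency
    seg_t = tiny_net()   # target-modality segmenter, the artifact we actually want

    params = list(g_s2t.parameters()) + list(g_t2s.parameters()) + list(seg_t.parameters())
    opt = torch.optim.Adam(params, lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    src_img  = torch.randn(2, 1, 64, 64)                 # dummy labeled source images
    src_mask = (torch.rand(2, 1, 64, 64) > 0.5).float()  # dummy liver masks

    for step in range(5):
        fake_tgt = g_s2t(src_img)
        # Anatomy preservation: the translation keeps the organ in place, so the
        # source mask remains a valid label for the translated image.
        loss_seg = bce(seg_t(fake_tgt), src_mask)
        loss_cyc = l1(g_t2s(fake_tgt), src_img)          # cycle consistency
        # (Adversarial losses against real target images are omitted for brevity.)
        loss = loss_seg + 10.0 * loss_cyc                # weights are placeholders
        opt.zero_grad(); loss.backward(); opt.step()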
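For the registration stage, the abstract names robust point matching over the extracted anatomy. A minimal sketch of that machinery is below: soft correspondences with deterministic annealing, alternated with a weighted rigid (Kabsch) fit. The point sets and temperature schedule are illustrative, assuming surface points sampled from the two liver segmentations; the paper's pipeline is more general than a rigid transform.

    import numpy as np

    def rigid_fit(src, dst, w):
        """Weighted least-squares rigid transform (Kabsch) taking src onto dst."""
        w = w / w.sum()
        mu_s, mu_d = w @ src, w @ dst
        H = (src - mu_s).T @ (np.diag(w) @ (dst - mu_d))
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, mu_d - R @ mu_s

    def rpm(src, dst, temps=(1.0, 0.5, 0.25, 0.1, 0.05)):
        R, t = np.eye(3), np.zeros(3)
        for T in temps:                                  # deterministic annealing
            moved = src @ R.T + t
            d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            M = np.exp(-d2 / T)                          # soft correspondence matrix
            M /= M.sum(axis=1, keepdims=True) + 1e-12
            virt = M @ dst                               # per-source "virtual" targets
            R, t = rigid_fit(src, virt, M.sum(axis=1))
        return R, t

    # Hypothetical surface points sampled from the MR and CBCT liver masks.
    np.random.seed(0)
    mr_pts = np.random.rand(200, 3)
    t_true = np.array([0.3, -0.1, 0.2])
    cbct_pts = mr_pts + t_true
    R, t = rpm(mr_pts, cbct_pts)
    print(np.round(t, 3))  # should be close to [0.3, -0.1, 0.2]

As the temperature anneals, the correspondence matrix hardens toward one-to-one matches, which is what makes this family of methods robust to initialization and to the imperfect segmentations produced under intraprocedural conditions.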

Details

Language :
English
ISSN :
1361-8415
Volume :
71
Database :
Academic Search Index
Journal :
Medical Image Analysis
Publication Type :
Academic Journal
Accession number :
150618862
Full Text :
https://doi.org/10.1016/j.media.2021.102041