
Deep learning segmentation of general interventional tools in two‐dimensional ultrasound images.

Authors :
Gillies, Derek J.
Rodgers, Jessica R.
Gyacskov, Igor
Roy, Priyanka
Kakani, Nirmal
Cool, Derek W.
Fenster, Aaron
Source :
Medical Physics; Oct 2020, Vol. 47, Issue 10, p4956-4970, 15p
Publication Year :
2020

Abstract

Purpose: Many interventional procedures require the precise placement of needles or therapy applicators (tools) to correctly achieve planned targets for optimal diagnosis or treatment of cancer, typically leveraging the temporal resolution of ultrasound (US) to provide real-time feedback. Identifying tools in two-dimensional (2D) US images can be time-consuming, and the precise tool position is often difficult to distinguish. We have developed and implemented a deep learning method to segment tools in 2D US images in near real-time for multiple anatomical sites, despite the widely varying appearances across interventional applications.

Methods: A U-Net architecture with a Dice similarity coefficient (DSC) loss function was used to perform segmentation on input images resized to 256 × 256 pixels. The U-Net was modified by adding 50% dropout and by using transpose convolutions in the decoder section of the network. The proposed approach was trained with 917 images and manual segmentations from prostate/gynecologic brachytherapy, liver ablation, and kidney biopsy/ablation procedures, as well as phantom experiments. Real-time data augmentation was applied to improve generalizability, doubling the dataset for each epoch. Postprocessing to identify the tool tip and trajectory was performed using two approaches: a linear fit to the largest island of segmented pixels, and random sample consensus (RANSAC) fitting.

Results: Comparing predictions from 315 unseen test images to manual segmentations, the overall median [first quartile, third quartile] tip error, angular error, and DSC were 3.5 [1.3, 13.5] mm, 0.8 [0.3, 1.7]°, and 73.3 [56.2, 82.3]%, respectively, following RANSAC postprocessing. The predictions with the lowest median tip and angular errors were observed in the gynecologic images (median tip error: 0.3 mm; median angular error: 0.4°), with the highest errors in the kidney images (median tip error: 10.1 mm; median angular error: 2.9°). The poorer performance on the kidney images was likely due to a reduction in acoustic signal associated with oblique insertions relative to the US probe and to the increased number of anatomical interfaces with similar echogenicity. Unprocessed segmentations were generated in a mean time of approximately 50 ms per image.

Conclusions: We have demonstrated that the proposed approach can accurately segment tools in 2D US images from multiple anatomical locations and a variety of clinical interventional procedures in near real-time, providing the potential to improve image guidance during a broad range of diagnostic and therapeutic cancer interventions. [ABSTRACT FROM AUTHOR]
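
As a rough illustration of the network the abstract describes (a U-Net with 50% dropout, transpose-convolution upsampling in the decoder, and a soft Dice loss on 256 × 256 inputs), the following PyTorch sketch shows how such a model and loss could be assembled. This is a minimal sketch under assumed settings: the channel widths, network depth, and dropout placement are illustrative choices, not the authors' published configuration.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 conv + ReLU stages, as in a standard U-Net block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Reduced-depth U-Net sketch; widths and depth are illustrative."""
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.drop = nn.Dropout2d(0.5)  # 50% dropout, as described in the abstract
        # Transpose convolutions for learned upsampling in the decoder.
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):  # x: (N, 1, 256, 256) grayscale US image batch
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.drop(self.bottleneck(self.pool(e2)))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # per-pixel tool probability

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - DSC, so minimizing it maximizes segmentation overlap.
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Training would pass a batch of resized US images through the model and minimize dice_loss between the predicted probability map and the manual segmentation; augmented copies of each image (the abstract's real-time augmentation) would simply be additional batch entries.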

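The tip-and-trajectory postprocessing the abstract compares (largest segmented island versus RANSAC line fitting) could look roughly like the sketch below, here combining both ideas with scikit-image and scikit-learn. The residual threshold, tip definition, and assumed insertion direction (toward increasing image depth) are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from skimage.measure import label
from sklearn.linear_model import RANSACRegressor

def tool_tip_and_trajectory(mask, threshold=0.5):
    """Estimate a tool tip location and trajectory angle from a predicted mask."""
    binary = mask > threshold
    labels = label(binary)
    if labels.max() == 0:
        return None  # no tool pixels detected
    # Keep the largest connected component ("island"), ignoring background.
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0
    island = labels == sizes.argmax()
    rows, cols = np.nonzero(island)

    # Robust line fit (lateral position -> depth) that tolerates stray pixels;
    # the default base estimator is ordinary least squares.
    ransac = RANSACRegressor(residual_threshold=2.0)
    ransac.fit(cols.reshape(-1, 1), rows)
    angle_deg = np.degrees(np.arctan(ransac.estimator_.coef_[0]))

    # Tip taken as the inlier pixel deepest along the assumed insertion
    # direction (increasing row index); this is a simplifying assumption.
    inliers = ransac.inlier_mask_
    tip_idx = rows[inliers].argmax()
    tip = (rows[inliers][tip_idx], cols[inliers][tip_idx])
    return tip, angle_deg
```

Running this on each frame's thresholded network output would yield the tip coordinate and trajectory angle that the reported tip and angular errors evaluate against manual annotations.
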
Details

Language :
English
ISSN :
0094-2405
Volume :
47
Issue :
10
Database :
Complementary Index
Journal :
Medical Physics
Publication Type :
Academic Journal
Accession number :
146607739
Full Text :
https://doi.org/10.1002/mp.14427