
One-shot Localization and Segmentation of Medical Images with Foundation Models

Authors :
Anand, Deepa
M, Gurunath Reddy
Singhal, Vanika
Shanbhag, Dattesh D.
KS, Shriram
Patil, Uday
Bhushan, Chitresh
Manickam, Kavitha
Gui, Dawei
Mullick, Rakesh
Gopal, Avinash
Bhatia, Parminder
Kass-Hout, Taha
Publication Year :
2023

Abstract

Recent advances in Vision Transformers (ViT) and Stable Diffusion (SD) models, with their ability to capture rich semantic features of an image, have been used for image correspondence tasks on natural images. In this paper, we examine the ability of a variety of pre-trained ViT (DINO, DINOv2, SAM, CLIP) and SD models, trained exclusively on natural images, to solve correspondence problems on medical images. While many works have made a case for in-domain training, we show that models trained on natural images can offer good performance on medical images across different modalities (CT, MR, Ultrasound) sourced from various manufacturers, over multiple anatomical regions (brain, thorax, abdomen, extremities), and on a wide variety of tasks. Further, we leverage the correspondence with respect to a template image to prompt a Segment Anything (SAM) model, arriving at single-shot segmentation and achieving a Dice range of 62%-90% across tasks using just one image as reference. We also show that our single-shot method outperforms the recently proposed few-shot segmentation method UniverSeg (Dice range 47%-80%) on most of the semantic segmentation tasks (six out of seven) across medical imaging modalities.

Comment: Accepted at NeurIPS 2023 R0-FoMo Workshop
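The correspondence step the abstract describes can be sketched as follows: a landmark on the template image is transferred to the target image by finding the target location whose feature vector is most similar to the template's, and that matched coordinate then serves as a point prompt for SAM. This is a minimal pure-Python illustration with toy feature grids; in the paper, features come from pre-trained ViT/SD encoders, and all names here are illustrative, not the authors' implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def match_point(template_feats, target_feats, tpl_yx):
    """Transfer a template landmark to the target image: return the (y, x)
    cell of the target feature grid whose vector is most similar to the
    template vector at tpl_yx. Grids are lists of rows of feature vectors."""
    query = template_feats[tpl_yx[0]][tpl_yx[1]]
    best_score, best_yx = -2.0, (0, 0)
    for y, row in enumerate(target_feats):
        for x, feat in enumerate(row):
            score = cosine(query, feat)
            if score > best_score:
                best_score, best_yx = score, (y, x)
    return best_yx

# Toy 2x2 "feature grids": the target is the template flipped, so the
# landmark at (0, 0) should map to (1, 1) in the target.
template = [[[1.0, 0.0], [0.5, 0.5]],
            [[0.5, 0.5], [0.0, 1.0]]]
target   = [[[0.0, 1.0], [0.5, 0.5]],
            [[0.5, 0.5], [1.0, 0.0]]]
prompt_yx = match_point(template, target, (0, 0))  # becomes a point prompt
```

In practice the matched coordinate would be passed to SAM's predictor as a foreground point prompt (e.g., via `point_coords`/`point_labels` in the `segment-anything` package), yielding a segmentation mask on the target image from a single annotated reference.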

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.18642
Document Type :
Working Paper