
Visual Prompt Engineering for Medical Vision Language Models in Radiology

Authors:
Denner, Stefan
Bujotzek, Markus
Bounias, Dimitrios
Zimmerer, David
Stock, Raphael
Jäger, Paul F.
Maier-Hein, Klaus
Publication Year:
2024

Abstract

Medical image classification in radiology faces significant challenges, particularly in generalizing to unseen pathologies. In contrast, CLIP offers a promising solution by leveraging multimodal learning to improve zero-shot classification performance. However, in the medical domain, lesions can be small and might not be well represented in the embedding space. Therefore, in this paper, we explore the potential of visual prompt engineering to enhance the capabilities of Vision Language Models (VLMs) in radiology. Leveraging BiomedCLIP, trained on extensive biomedical image-text pairs, we investigate the impact of embedding visual markers directly within radiological images to guide the model's attention to critical regions. Our evaluation on the JSRT dataset, focusing on lung nodule malignancy classification, demonstrates that incorporating visual prompts, such as arrows, circles, and contours, significantly improves classification metrics including AUROC, AUPRC, F1 score, and accuracy. Moreover, the study provides attention maps, showcasing enhanced model interpretability and focus on clinically relevant areas. These findings underscore the efficacy of visual prompt engineering as a straightforward yet powerful approach to advance VLM performance in medical image analysis.

Comment: Accepted at ECCV 2024 Workshop on Emergent Visual Abilities and Limits of Foundation Models
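As a rough illustration of the idea described in the abstract (a minimal sketch, not the authors' released code), the snippet below overlays a simple visual prompt, a red circle around an assumed nodule location, on a chest X-ray and then scores benign versus malignant text prompts with BiomedCLIP zero-shot classification via the open_clip library. The image path, bounding box, and prompt wordings are illustrative placeholders.

```python
# Sketch: visual prompt (red circle) + BiomedCLIP zero-shot scoring.
# Assumes open_clip and the BiomedCLIP checkpoint on the Hugging Face Hub;
# file path, box coordinates, and label texts are hypothetical placeholders.
import torch
import open_clip
from PIL import Image, ImageDraw

CKPT = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
model, preprocess = open_clip.create_model_from_pretrained(CKPT)
tokenizer = open_clip.get_tokenizer(CKPT)
model.eval()

# Load the radiograph and draw a circle around the (assumed) nodule region.
image = Image.open("chest_xray.png").convert("RGB")
draw = ImageDraw.Draw(image)
x0, y0, x1, y1 = 180, 220, 260, 300  # placeholder nodule bounding box
draw.ellipse((x0, y0, x1, y1), outline=(255, 0, 0), width=4)

# Zero-shot classification over candidate malignancy labels.
labels = [
    "a chest X-ray with a benign lung nodule",
    "a chest X-ray with a malignant lung nodule",
]
image_input = preprocess(image).unsqueeze(0)
text_input = tokenizer(labels)

with torch.no_grad():
    image_feat = model.encode_image(image_input)
    text_feat = model.encode_text(text_input)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (model.logit_scale.exp() * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(labels, probs.squeeze(0).tolist())))
```

In practice, the marker location would come from a radiologist annotation or a detection step, and other prompt types (arrows, contours) can be drawn analogously with PIL or OpenCV before the image is passed through the same zero-shot pipeline.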

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.15802
Document Type:
Working Paper