
Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models

Authors :
Yan, An
Wang, Yu
Zhong, Yiwu
He, Zexue
Karypis, Petros
Wang, Zihan
Dong, Chengyu
Gentili, Amilcare
Hsu, Chun-Nan
Shang, Jingbo
McAuley, Julian
Publication Year :
2023

Abstract

Medical image classification is a critical problem for healthcare, with the potential to alleviate the workload of doctors and facilitate patient diagnoses. However, two challenges arise when deploying deep learning models in real-world healthcare applications. First, neural models tend to learn spurious correlations rather than the desired features, which can hurt generalization to new domains (e.g., patients of different ages). Second, these black-box models lack interpretability: when a model makes a diagnostic prediction, it is important for trust and safety to understand why it reached that decision. In this paper, to address these two limitations, we propose a new paradigm for building robust and interpretable medical image classifiers with natural language concepts. Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model. We systematically evaluate our method on eight medical image classification datasets to verify its effectiveness. On challenging datasets with strong confounding factors, our method mitigates spurious correlations and thus substantially outperforms standard visual encoders and other baselines. Finally, through case studies on real medical data, we show how classification with a small number of concepts provides a level of interpretability for understanding model decisions.

Comment: 18 pages, 12 figures
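The pipeline described in the abstract (a concept bottleneck over vision-language similarities) can be illustrated with a minimal sketch. This is not the paper's implementation: the concept strings are hypothetical stand-ins for concepts queried from GPT-4, a generic CLIP checkpoint stands in for the paper's vision-language model, and the linear head is shown untrained.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical clinical concepts; the paper obtains such concepts by querying GPT-4.
CONCEPTS = [
    "increased lung opacity",
    "blunted costophrenic angle",
    "enlarged cardiac silhouette",
    "normal lung fields",
]

# A generic CLIP checkpoint stands in for the paper's vision-language model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_scores(image: Image.Image) -> torch.Tensor:
    """Project the image into the concept space: one similarity score per concept."""
    inputs = processor(text=CONCEPTS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image has shape (1, num_concepts): image-to-concept similarities.
    return out.logits_per_image.squeeze(0)

# The final classifier is a small linear layer over concept scores (randomly
# initialized here; in practice it is trained on the downstream task).
classifier = torch.nn.Linear(len(CONCEPTS), 2)  # e.g., normal vs. abnormal

def predict(image: Image.Image) -> torch.Tensor:
    scores = concept_scores(image)           # interpretable bottleneck
    return classifier(scores).softmax(-1)    # class probabilities
```

Because the head is linear over named concepts, each class weight can be read directly as the contribution of a clinical concept, which is the source of the interpretability the abstract describes.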

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.03182
Document Type :
Working Paper