Reconstruction of patient-specific confounders in AI-based radiologic image interpretation using generative pretraining
- Authors
Tianyu Han, Laura Žigutytė, Luisa Huck, Marc Sebastian Huppertz, Robert Siepmann, Yossi Gandelsman, Christian Blüthgen, Firas Khader, Christiane Kuhl, Sven Nebelung, Jakob Nikolas Kather, and Daniel Truhn
- Subjects
generative models, self-supervised training, medical imaging, confounders, counterfactual explanations, explainability, Medicine (General), R5-920
- Abstract
Summary: Reliably detecting potentially misleading patterns in automated diagnostic assistance systems, such as those powered by artificial intelligence (AI), is crucial for instilling user trust and ensuring reliability. Current techniques fall short of visualizing such confounding factors. We propose DiffChest, a self-conditioned diffusion model trained on 515,704 chest radiographs from 194,956 patients across the US and Europe. DiffChest provides patient-specific explanations and visualizes confounding factors that might mislead the model. High inter-reader agreement, with Fleiss’ kappa values of 0.8 or higher, validates its capability to identify treatment-related confounders. Confounders are accurately detected at prevalence rates ranging from 10% to 100%. The pretraining process optimizes the model for relevant imaging information, resulting in excellent diagnostic accuracy for 11 chest conditions, including pleural effusion and heart insufficiency. Our findings highlight the potential of diffusion models in medical image classification, providing insights into confounding factors and enhancing model robustness and reliability.
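The abstract describes self-conditioned diffusion pretraining on radiographs. As a rough illustration only, the sketch below shows one training step of a generic self-conditioned diffusion setup in the style of diffusion autoencoders: a semantic encoder compresses the image into a code that conditions the denoiser. All module names (`SemanticEncoder`, `ConditionalDenoiser`), network sizes, and hyperparameters are hypothetical placeholders and are not the DiffChest implementation.

```python
# Illustrative sketch of self-conditioned diffusion pretraining (PyTorch).
# Not the DiffChest architecture; modules are deliberately tiny so it runs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticEncoder(nn.Module):
    """Maps a radiograph to a compact semantic code z_sem (placeholder CNN)."""
    def __init__(self, z_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )

    def forward(self, x):
        return self.net(x)

class ConditionalDenoiser(nn.Module):
    """Predicts the noise in x_t, conditioned on timestep t and z_sem.
    A stand-in for a conditional U-Net."""
    def __init__(self, z_dim=512):
        super().__init__()
        self.cond = nn.Linear(z_dim + 1, 64)
        self.net = nn.Sequential(
            nn.Conv2d(1 + 64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x_t, t, z_sem):
        # Broadcast the conditioning (timestep + semantic code) over spatial dims.
        c = self.cond(torch.cat([z_sem, t.float().unsqueeze(1)], dim=1))
        c = c[:, :, None, None].expand(-1, -1, x_t.shape[2], x_t.shape[3])
        return self.net(torch.cat([x_t, c], dim=1))

def pretraining_step(encoder, denoiser, x0, alphas_cumprod):
    """One self-supervised step: encode the image, noise it, predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    a_bar = alphas_cumprod[t][:, None, None, None]
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion
    z_sem = encoder(x0)                                     # self-conditioning on the clean image
    pred = denoiser(x_t, t, z_sem)
    return F.mse_loss(pred, noise)

if __name__ == "__main__":
    torch.manual_seed(0)
    betas = torch.linspace(1e-4, 0.02, 1000)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    enc, den = SemanticEncoder(), ConditionalDenoiser()
    x0 = torch.randn(2, 1, 64, 64)  # stand-in for normalized chest radiographs
    loss = pretraining_step(enc, den, x0, alphas_cumprod)
    loss.backward()
    print(f"pretraining loss: {loss.item():.4f}")
```

In such a setup, the semantic code carries the image-level information needed for reconstruction, which is what makes patient-specific counterfactual explanations conceivable: editing the code and re-running the reverse diffusion yields a modified image whose differences from the original can be inspected for confounders. How DiffChest performs this editing is described in the full paper, not in this sketch.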
- Published
2024