1. Interpretable thoracic pathologic prediction via learning group-disentangled representation.
- Authors
- Li, Hao; Wu, Yirui; Hu, Hexuan; Lu, Hu; Huang, Qian; Wan, Shaohua
- Subjects
- DEEP learning; X-ray imaging; COMPUTER-assisted image analysis (Medicine); IMAGE analysis; FEATURE extraction; DECISION making
- Abstract
Deep learning has brought significant progress to medical image analysis. However, its lack of interpretability can pose a high risk of wrong diagnosis, since only limited clinical knowledge is embedded in the models. In other words, we believe it is crucial for humans to interpret how deep learning works in medical analysis, so that knowledge constraints can be added appropriately to correct biased or wrong results. With this purpose, we propose the Representation Group-Disentangling Network (RGD-Net) to explain the processes of feature extraction and decision making inside a deep learning framework, where we completely disentangle the feature space of input X-ray images into independent feature groups, each of which contributes to the diagnosis of a specific disease. Specifically, we first state the problem definition for interpretable prediction with an auto-encoder structure. Then, group-disentangled representations are extracted from input X-ray images with the proposed Group-Disentangle Module, which constructs a semantic latent space by enforcing semantic consistency of attributes. Afterwards, adversarial constraints on the mapping from features to diseases are proposed to prevent model collapse during training. Finally, a novel local-tuning medical application is designed based on RGD-Net, which is capable of aiding clinicians in reasonable diagnosis. In extensive experiments on public datasets, RGD-Net proves superior to comparative studies by leveraging potential factors contributing to different diseases. We believe our work brings interpretability to uncovering the inherent patterns of deep learning in medical image analysis. • RGD-Net extracts fine-grained, group-disentangled disease representations for better interpretation and prediction. • An adversarial constraint avoids shortcut problems and promotes convergence to the global minimum in RGD-Net. • RGD-Net significantly improves classification accuracy and disentangles information in experiments. [ABSTRACT FROM AUTHOR]
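The core idea in the abstract, a latent space partitioned into independent groups where each group alone drives the prediction for one disease, can be sketched minimally. This is an illustrative toy, not the paper's actual RGD-Net architecture: the dimensions, the linear per-disease heads, and all names (`split_into_groups`, `predict`) are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DISEASES = 3    # e.g. three thoracic pathologies (illustrative)
GROUP_DIM = 4     # latent dimensions reserved for each disease group
LATENT_DIM = N_DISEASES * GROUP_DIM

def split_into_groups(z):
    """Partition a latent vector z into one contiguous block per disease."""
    return [z[i * GROUP_DIM:(i + 1) * GROUP_DIM] for i in range(N_DISEASES)]

# One linear "head" per disease, applied only to its own group, so each
# diagnosis is traceable to a specific slice of the latent space.
heads = [rng.standard_normal(GROUP_DIM) for _ in range(N_DISEASES)]

def predict(z):
    """Per-disease probabilities, each computed from its own latent group."""
    groups = split_into_groups(z)
    logits = [float(w @ g) for w, g in zip(heads, groups)]
    return [1.0 / (1.0 + np.exp(-l)) for l in logits]

z = rng.standard_normal(LATENT_DIM)   # stand-in for an encoder's output
probs = predict(z)
print([round(p, 3) for p in probs])
```

Because each head only sees its own group, perturbing one group changes only that disease's score, which is the interpretability property the disentanglement is meant to provide.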
- Published
- 2023