Treatment Learning Causal Transformer for Noisy Image Classification
- Publication Year :
- 2022
Abstract
- Current top-notch deep learning (DL) based vision models are primarily based on exploring and exploiting the inherent correlations between training data samples and their associated labels. However, a known practical challenge is their degraded performance against "noisy" data, induced by different circumstances such as spurious correlations, irrelevant contexts, domain shift, and adversarial attacks. In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy by jointly estimating their treatment effects. Motivated by causal variational inference, we propose a transformer-based architecture, Treatment Learning Causal Transformer (TLT), that uses a latent generative model to estimate robust feature representations from the current observational input for noisy image classification. Depending on the estimated noise level (modeled as a binary treatment factor), TLT assigns the corresponding inference network, trained with the designed causal loss, for prediction. We also create new noisy image datasets incorporating a wide range of noise factors (e.g., object masking, style transfer, and adversarial perturbation) for performance benchmarking. The superior performance of TLT in noisy image classification is further validated by several refutation evaluation metrics. As a by-product, TLT also improves visual salience methods for perceiving noisy images.
- Comment: Accepted to IEEE WACV 2023. The first version was finished in May 2018.
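- Illustration: the abstract describes routing a prediction through one of two inference networks according to an estimated binary treatment factor ("noise present" vs. "noise absent") computed from a latent representation. The PyTorch sketch below is not the authors' TLT implementation; the class name `TreatmentGatedClassifier`, the layer sizes, and the soft-routing rule are illustrative assumptions that only convey the treatment-gated inference idea.

```python
import torch
import torch.nn as nn

class TreatmentGatedClassifier(nn.Module):
    """Hypothetical sketch (not the paper's TLT): a shared encoder yields a
    latent feature z, a treatment head estimates a binary "noise" factor,
    and two inference heads are blended according to that estimate."""

    def __init__(self, feat_dim=512, latent_dim=128, num_classes=10):
        super().__init__()
        # Shared feature encoder (stand-in for the transformer backbone).
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(feat_dim),
            nn.ReLU(),
        )
        # Latent representation used by both inference heads.
        self.to_latent = nn.Linear(feat_dim, latent_dim)
        # Binary treatment estimator: probability that the input is "noisy".
        self.treatment_head = nn.Linear(feat_dim, 1)
        # Separate inference heads for the clean (t=0) and noisy (t=1) cases.
        self.head_clean = nn.Linear(latent_dim, num_classes)
        self.head_noisy = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        z = self.to_latent(h)
        p_noise = torch.sigmoid(self.treatment_head(h))  # estimated treatment
        logits_clean = self.head_clean(z)
        logits_noisy = self.head_noisy(z)
        # Soft routing: weight each head by the estimated noise probability.
        logits = (1.0 - p_noise) * logits_clean + p_noise * logits_noisy
        return logits, p_noise


# Usage sketch with random tensors standing in for images.
model = TreatmentGatedClassifier(num_classes=10)
images = torch.randn(4, 3, 32, 32)
logits, p_noise = model(images)
print(logits.shape, p_noise.shape)  # torch.Size([4, 10]) torch.Size([4, 1])
```

- The soft blend is a simplification; a hard assignment (selecting one head when the estimated probability crosses a threshold) would match the "assigns the corresponding inference network" phrasing more literally, at the cost of a non-differentiable routing step.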
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession Number :
- edsarx.2203.15529
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/WACV56688.2023.00608