
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning

Authors:
Zhang, Wencan
Dimiccoli, Mariella
Lim, Brian Y.
Publication Year:
2022

Abstract

Model explanations such as saliency maps can improve user trust in AI by highlighting the features important for a prediction. However, these explanations become distorted and misleading when they describe predictions on images subject to systematic error (bias). Furthermore, the distortions persist even after model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations of these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness, and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness across a wide range of applications with data biases.

Comment: This work was intended as a replacement of arXiv:2012.05567 and any subsequent updates will appear there
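To make the described architecture concrete, below is a minimal sketch (not the authors' code; all class and loss names are hypothetical) of a multi-task network in the spirit of Debiased-CAM: a shared backbone with a label-prediction head, a CAM output trained to match the CAM computed on the unbiased image, and an auxiliary bias-level regression head.

```python
# Hypothetical sketch of a Debiased-CAM-style multi-task model in PyTorch.
# Assumes the unbiased target CAM (teacher_cam) is available at training time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DebiasedCAMSketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Small convolutional backbone producing spatial feature maps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)  # prediction task
        self.bias_head = nn.Linear(64, 1)             # bias-level regression

    def forward(self, x):
        feats = self.backbone(x)                      # (B, 64, H, W)
        pooled = feats.mean(dim=(2, 3))               # global average pooling
        logits = self.classifier(pooled)
        bias_level = self.bias_head(pooled).squeeze(-1)
        # Standard CAM construction: weight the feature maps by the
        # classifier weights of the predicted class.
        w = self.classifier.weight[logits.argmax(dim=1)]  # (B, 64)
        cam = torch.einsum("bc,bchw->bhw", w, feats)
        return logits, cam, bias_level

def debiased_loss(logits, cam, bias_level, labels, teacher_cam, true_bias,
                  w_cam=1.0, w_bias=0.1):
    # Joint loss over the three tasks; weights are illustrative only.
    loss_cls = F.cross_entropy(logits, labels)
    loss_cam = F.mse_loss(cam, teacher_cam)       # explanation faithfulness
    loss_bias = F.mse_loss(bias_level, true_bias) # auxiliary bias-level task
    return loss_cls + w_cam * loss_cam + w_bias * loss_bias
```

The key design idea reflected here is that the explanation itself is treated as a supervised target: the CAM produced on a biased image is regressed toward the CAM the network produces on the corresponding unbiased image, while the bias-level head encourages the backbone to encode the bias factor explicitly.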

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2201.12835
Document Type:
Working Paper