1. Defense Against Explanation Manipulation
- Author
- Tang, Ruixiang; Liu, Ninghao; Yang, Fan; Zou, Na; Hu, Xia
- Subjects
- FOS: Computer and information sciences; explainable artificial intelligence (XAI); Computer Science - Machine Learning; Computer Science - Artificial Intelligence; deep learning; Information technology; T58.5-58.64; post-hoc explanations; Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Artificial Intelligence; Computer Science (miscellaneous); Information Systems; adversarial attack and defense; data augmentation
- Abstract
- Explainable machine learning attracts increasing attention because it improves the transparency of models, which helps machine learning earn trust in real applications. However, explanation methods have recently been shown to be vulnerable to manipulation, in which an adversary can easily change a model's explanation while keeping its prediction constant. Prior efforts to address this problem use more stable explanation methods or change model configurations. In this work, we tackle the problem from the training perspective and propose a new training scheme called Adversarial Training on EXplanations (ATEX), which improves the internal explanation stability of a model regardless of the specific explanation method applied. Instead of directly specifying explanation values over data instances, ATEX places constraints only on model predictions, which avoids involving second-order derivatives in optimization. We further find that explanation stability is closely related to another property of the model: its vulnerability to adversarial attack. Experiments show that ATEX improves model robustness against manipulation targeting explanations, and that it brings additional benefits, including smoother explanations and improved efficacy of adversarial training when applied to the model.
- Published
- 2022
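The abstract's central design choice is to constrain model predictions rather than explanation values, so that training stays first-order. Below is a minimal, hypothetical sketch of that idea in PyTorch; the function name atex_style_loss, the random-perturbation augmentation, and the parameters epsilon and n_aug are illustrative assumptions and do not reproduce the paper's actual ATEX procedure.

```python
import torch
import torch.nn.functional as F

def atex_style_loss(model, x, y, epsilon=0.05, n_aug=4):
    """Illustrative first-order prediction-consistency objective (not the paper's exact method)."""
    # Standard task loss on the clean inputs.
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Constrain predictions on perturbed copies of each instance to match the
    # clean predictions. No explanation (gradient) terms appear in the loss,
    # so optimizing it requires only first-order derivatives.
    consistency = 0.0
    for _ in range(n_aug):
        x_aug = x + epsilon * torch.randn_like(x)  # stand-in for the paper's augmentation scheme
        consistency = consistency + F.kl_div(
            F.log_softmax(model(x_aug), dim=-1),
            F.softmax(logits.detach(), dim=-1),
            reduction="batchmean",
        )
    return task_loss + consistency / n_aug
```

Because the consistency term compares only prediction distributions, backpropagating through it needs just first-order gradients, unlike an objective that directly penalizes differences between gradient-based saliency maps.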