Understanding the role of explanations in computer vision applications
- Author
- Alqaraawi, Ahmed
- Abstract
Recent advances in AI have delivered strong performance across a range of applications, but their operations are hard to interpret, even for experts. Various explanation algorithms have been proposed to address this issue, yet relatively little research has evaluated them with users. Against this background, this thesis reports on four user studies designed to investigate the role of explanations in helping end-users build a better functional understanding of computer vision processes. In addition, we seek to understand what features lay users attend to in order to build such functional understanding, and whether different techniques provide different gains. In particular, we begin by examining the utility of "keypoint markers": coloured dot visualisations that correspond to patterns of interest identified by an underlying algorithm and appear in many computer vision applications. We then investigate the utility of saliency maps, a popular family of explanations for the operation of Convolutional Neural Networks (CNNs). The findings indicate that keypoint markers can be helpful if they are presented in line with users' expectations. They also indicate that saliency maps can improve participants' ability to predict the outcome of a CNN, but only moderately. Overall, this thesis contributes by evaluating these explanation techniques through user studies. It also provides a number of key findings that offer practitioners guidance on how and when to use these explanations, as well as which types of users to target. Furthermore, it proposes and evaluates two novel explanation techniques, along with a set of tools that help researchers and practitioners design user studies for evaluating explanations. Finally, this thesis highlights a number of implications for the design of explanation techniques and for further research in this area.
- Published
- 2022