
The Use of Annotations to Explain Labels: Comparing Results from a Human-Rater Approach to a Deep Learning Approach

Authors :
Lottridge, Susan
Woolf, Sherri
Young, Mackenzie
Jafari, Amir
Ormerod, Chris
Source :
Journal of Computer Assisted Learning, 39(3), 787-803, June 2023.
Publication Year :
2023

Abstract

Background: Deep learning methods, in which models rely on implicit features estimated during training rather than explicit features, suffer from an explainability problem. In text classification, saliency maps that reflect the importance of words in a prediction are one approach to explainability. However, little is known about whether the salient words agree with those identified by humans as important.

Objectives: The current study examines in-line annotations from human annotators and saliency-map annotations from a deep learning model (an ELECTRA transformer) to understand how well both humans and machines provide evidence for their assigned label.

Methods: Data were responses to test items across a mix of United States subjects, states, and grades. Humans were trained to annotate responses to justify a crisis alert label, and two model-interpretability methods (LIME, Integrated Gradients) were used to obtain engine annotations. Human inter-annotator agreement and engine agreement with the human annotators were computed and compared.

Results and Conclusions: Human annotators agreed with one another at rates similar to those observed in the literature on comparable tasks. The annotations derived using Integrated Gradients (IG) agreed with human annotators at higher rates than LIME on most metrics; however, both methods underperformed relative to the human annotators.

Implications: Saliency-map-based engine annotations show promise as a form of explanation but do not reach human annotation agreement levels. Future work should examine the appropriate unit for annotation (e.g., word, sentence), other gradient-based methods, and approaches for mapping continuous saliency values to Boolean annotations.
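For readers unfamiliar with the technique named in the abstract, the listing below is a minimal sketch (not the authors' code) of how token-level Integrated Gradients saliency can be computed and then thresholded into Boolean word annotations. The toy classifier, zero-embedding baseline, step count, and 0.5 cutoff are all illustrative assumptions; the study itself used an ELECTRA transformer and its own mapping from saliency scores to annotations.

    # Minimal Integrated Gradients sketch in PyTorch (illustrative only).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 100, 16, 2

    class ToyClassifier(nn.Module):
        # Stand-in for a transformer: embed tokens, mean-pool, linear head.
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
            self.head = nn.Linear(EMBED_DIM, NUM_CLASSES)

        def forward_from_embeddings(self, emb):
            # emb: (seq_len, EMBED_DIM) -> logits: (NUM_CLASSES,)
            return self.head(emb.mean(dim=0))

    def integrated_gradients(model, token_ids, target, steps=50):
        # Approximate IG per token embedding against a zero baseline:
        # (x - baseline) * average gradient along the straight-line path.
        emb = model.embed(token_ids).detach()
        baseline = torch.zeros_like(emb)
        total_grad = torch.zeros_like(emb)
        for k in range(1, steps + 1):
            point = (baseline + (k / steps) * (emb - baseline)).requires_grad_(True)
            logit = model.forward_from_embeddings(point)[target]
            grad, = torch.autograd.grad(logit, point)
            total_grad += grad
        attributions = (emb - baseline) * total_grad / steps
        return attributions.sum(dim=-1)  # one saliency score per token

    model = ToyClassifier()
    token_ids = torch.tensor([5, 17, 42, 8])  # a four-token "response"
    saliency = integrated_gradients(model, token_ids, target=1)

    # Map continuous saliency to Boolean annotations; this cutoff is an assumption.
    annotated = saliency > 0.5 * saliency.abs().max()
    print(saliency.tolist(), annotated.tolist())

The final thresholding step is where a continuous saliency score becomes a Boolean annotation comparable to a human highlight; choosing that mapping is exactly the open question the abstract flags for future work.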

Details

Language :
English
ISSN :
0266-4909 (print), 1365-2729 (online)
Volume :
39
Issue :
3
Database :
ERIC
Journal :
Journal of Computer Assisted Learning
Publication Type :
Academic Journal
Accession Number :
EJ1378601
Document Type :
Journal Articles; Reports - Research
Full Text :
https://doi.org/10.1111/jcal.12784