
Gradations of Error Severity in Automatic Image Descriptions

Authors :
van Miltenburg, Emiel
Lu, Wei-Ting
Krahmer, Emiel
Gatt, Albert
Chen, Guanyi
Li, Lin
van Deemter, Kees
Editors :
Davis, Brian
Graham, Yvette
Kelleher, John
Sripada, Yaji
Subjects :
Natural Language Processing
Publication Year :
2020

Abstract

Earlier research has shown that evaluation metrics based on textual similarity (e.g., BLEU, CIDEr, Meteor) do not correlate well with human evaluation scores for automatically generated text. We carried out an experiment with Chinese speakers, where we systematically manipulated image descriptions to contain different kinds of errors. Because our manipulated descriptions form minimal pairs with the reference descriptions, we are able to assess the impact of different kinds of errors on the perceived quality of the descriptions. Our results show that different kinds of errors elicit significantly different evaluation scores, even though all erroneous descriptions differ in only one character from the reference descriptions. Evaluation metrics based solely on textual similarity are unable to capture these differences, which (at least partially) explains their poor correlation with human judgments. Our work provides the foundations for future work, where we aim to understand why different errors are seen as more or less severe.
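To make the abstract's point concrete, here is a minimal sketch (not drawn from the paper's stimuli or code) of why a surface-overlap metric cannot grade error severity. It uses NLTK's sentence_bleu with character-level tokenization, which is common for Chinese; the two invented hypotheses each differ from the reference by a single character at the same position, so they share exactly the same n-gram overlap and receive identical BLEU scores, whatever human raters might say about the relative severity of the two errors.

from nltk.translate.bleu_score import sentence_bleu

# Character-level tokenization, as is standard for Chinese text metrics.
# The sentences are invented for illustration; they are not stimuli from
# the paper. Each hypothesis differs from the reference by one character.
reference = list("一个男人在骑马")      # "A man is riding a horse."
gender_error = list("一个女人在骑马")   # "A woman is riding a horse."
age_error = list("一个老人在骑马")      # "An elderly man is riding a horse."

for label, hypothesis in (("gender error", gender_error),
                          ("age error", age_error)):
    score = sentence_bleu([reference], hypothesis)
    print(f"{label}: BLEU = {score:.4f}")

# Both substitutions occur at the same position, so both hypotheses have
# exactly the same n-gram overlap with the reference. BLEU therefore
# assigns them identical scores, even if human raters judge one error
# type as more severe than the other.

The character-level, single-substitution setup mirrors the minimal-pair design described in the abstract: because only one character changes, any difference in human scores must come from the kind of error, which is precisely the dimension that overlap-based metrics ignore.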

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.od.......101..21c82a55f1e0bfad43b5d50940f9416e