
Evaluation of automatic video captioning using direct assessment.

Authors :
Graham, Yvette
Awad, George
Smeaton, Alan
Source :
PLoS ONE; 9/4/2018, Vol. 13 Issue 9, p1-20, 20p
Publication Year :
2018

Abstract

We present Direct Assessment, a method for manually assessing the quality of automatically generated captions for video. Evaluating the accuracy of video captions is particularly difficult because, for any given video clip, there is no definitive ground truth or correct answer against which to measure. Metrics for comparing automatic video captions against a manual caption, such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016, but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowdsourcing judgments of how well a caption describes a video. We automatically degrade the quality of some sample captions, which are then assessed manually; from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show that our Direct Assessment method is replicable and robust, and that it scales to settings where many caption-generation techniques must be evaluated, including the TRECVid video-to-text task in 2017. [ABSTRACT FROM AUTHOR]
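The abstract describes two computational steps: screening out unreliable crowd assessors by checking that they score deliberately degraded captions lower than the originals, and then folding only the reliable assessors' judgments into the evaluation. The sketch below is a minimal, hypothetical illustration of such a pipeline in Python; it is not the authors' code, and the worker and system identifiers, the score values, the Wilcoxon significance test, and the per-assessor z-score standardisation are assumptions modelled on how Direct Assessment is commonly applied in machine translation evaluation.

    # Minimal sketch (hypothetical, not the authors' code) of Direct Assessment
    # quality control and score aggregation as described in the abstract.
    from collections import defaultdict
    from statistics import mean, pstdev
    from scipy.stats import wilcoxon

    # Hypothetical quality-control judgments: each assessor rates pairs of
    # (original caption, automatically degraded caption) on a 0-100 scale.
    qc_pairs = {
        "worker_a": [(85, 40), (70, 35), (90, 50), (60, 20), (75, 30), (80, 45)],
        "worker_b": [(55, 60), (48, 52), (70, 66), (63, 61), (50, 58), (47, 49)],
    }

    def reliable(pairs, alpha=0.05):
        """Keep an assessor only if originals score significantly higher than
        degraded captions (one-sided Wilcoxon signed-rank test)."""
        orig, degr = zip(*pairs)
        _, p = wilcoxon(orig, degr, alternative="greater")
        return p < alpha

    good_assessors = {a for a, pairs in qc_pairs.items() if reliable(pairs)}
    print("reliable assessors:", good_assessors)  # worker_b is filtered out

    # Hypothetical main-task judgments: assessor -> list of (system_id, raw score).
    raw_scores = {
        "worker_a": [("sys1", 80), ("sys2", 55), ("sys1", 90), ("sys2", 60)],
        "worker_b": [("sys1", 50), ("sys2", 52)],  # dropped by quality control
    }

    # Standardise each remaining assessor's scores (z-scores) to remove
    # individual scoring bias, then average per caption-generation system.
    per_system = defaultdict(list)
    for assessor, judgments in raw_scores.items():
        if assessor not in good_assessors:
            continue
        scores = [s for _, s in judgments]
        mu, sigma = mean(scores), pstdev(scores) or 1.0
        for system, s in judgments:
            per_system[system].append((s - mu) / sigma)

    for system, zs in sorted(per_system.items()):
        print(f"{system}: average z-score = {mean(zs):.2f}")

In this sketch an assessor is retained only if a one-sided signed-rank test finds their scores for original captions significantly higher than for the degraded versions; the surviving assessors' raw scores are then standardised per assessor before being averaged per caption-generation system.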

Details

Language :
English
ISSN :
1932-6203
Volume :
13
Issue :
9
Database :
Complementary Index
Journal :
PLoS ONE
Publication Type :
Academic Journal
Accession number :
131582069
Full Text :
https://doi.org/10.1371/journal.pone.0202789