Missing information, unresponsive authors, experimental flaws: the impossibility of assessing the reproducibility of previous human evaluations in NLP

Authors :
Belz, Anya
Thomson, Craig
Reiter, Ehud
Abercrombie, Gavin
Alonso-Moral, Jose M.
Arvan, Mohammad
Cheung, Jackie
Cieliebak, Mark
Clark, Elizabeth
van Deemter, Kees
Kelleher, John D.
Klubička, Filip
Publication Year :
2023
Publisher :
Association for Computational Linguistics (ACL), 2023.

Abstract

We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction was discovered to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding, that the great majority of human evaluations in NLP is not repeatable and/or not reproducible and/or too flawed to justify reproduction, paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.

Subjects

Computational linguistics

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.od.......119..b6ff11f020bd3d53ef3f7d4a6242f5da