1. Variations in Relevance Judgments and the Shelf Life of Test Collections
- Authors
Parry, Andrew, Fröbe, Maik, Scells, Harrisen, Schlatt, Ferdinand, Faggioli, Guglielmo, Zerhoudi, Saber, MacAvaney, Sean, and Yang, Eugene
- Subjects
Computer Science - Information Retrieval
- Abstract
The fundamental property of Cranfield-style evaluations, that system rankings are stable even when assessors disagree on individual relevance decisions, was validated on traditional test collections. However, the paradigm shift towards neural retrieval models has changed the characteristics of modern test collections, e.g., documents are short, judged with four grades of relevance, and information needs have no descriptions or narratives. Under these changes, it is unclear whether assessor disagreement remains negligible for system comparisons. We investigate this aspect under the additional condition that the few modern test collections are heavily re-used. Given more possible query interpretations due to less formalized information needs, an "expiration date" for test collections might be needed if top effectiveness requires overfitting to a single interpretation of relevance. We run a reproducibility study and re-annotate the relevance judgments of the 2019 TREC Deep Learning track. We can reproduce prior work in the neural retrieval setting, showing that assessor disagreement does not affect system rankings. However, we observe that some models substantially degrade with our new relevance judgments, and some have already reached the effectiveness of humans as rankers, providing evidence that test collections can expire.
- Comment
11 pages, 6 tables, 5 figures
- Published
2025
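
The abstract's stability claim is usually operationalized as rank correlation between system orderings produced under two sets of relevance judgments. The following is a minimal sketch of that check, not the authors' code: the systems, toy qrels, and runs are invented for illustration, the metric is a simple nDCG@10, and Kendall's tau is computed with SciPy.

```python
# Sketch (illustrative only): do system rankings stay stable when the
# relevance judgments are swapped for a re-annotated set?
import math
from scipy.stats import kendalltau

def ndcg_at_k(ranked_doc_ids, qrels, k=10):
    """nDCG@k for one query; qrels maps doc_id -> graded relevance (0-3)."""
    gains = [qrels.get(d, 0) for d in ranked_doc_ids[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def mean_ndcg(run, qrels_by_query, k=10):
    """Average nDCG@k over all judged queries for one system's run."""
    return sum(
        ndcg_at_k(run[q], qrels_by_query[q], k) for q in qrels_by_query
    ) / len(qrels_by_query)

# Hypothetical toy data: two systems, one query, original vs. re-annotated judgments.
runs = {
    "bm25":   {"q1": ["d1", "d2", "d3", "d4"]},
    "neural": {"q1": ["d3", "d1", "d4", "d2"]},
}
qrels_original    = {"q1": {"d1": 3, "d2": 2, "d3": 0, "d4": 1}}
qrels_reannotated = {"q1": {"d1": 2, "d2": 3, "d3": 1, "d4": 0}}

# Score every system under each judgment set, then correlate the two rankings.
scores_a = {s: mean_ndcg(r, qrels_original) for s, r in runs.items()}
scores_b = {s: mean_ndcg(r, qrels_reannotated) for s, r in runs.items()}
systems = sorted(runs)
tau, _ = kendalltau([scores_a[s] for s in systems],
                    [scores_b[s] for s in systems])
print(f"Kendall's tau between system rankings: {tau:.2f}")
```

A tau close to 1 indicates the leaderboard is insensitive to the change in judgments; a noticeably lower tau, or large per-system score drops, would point toward the kind of collection expiration the abstract discusses.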