
LLMJudge: LLMs for Relevance Judgments

Authors :
Rahmani, Hossein A.
Yilmaz, Emine
Craswell, Nick
Mitra, Bhaskar
Thomas, Paul
Clarke, Charles L. A.
Aliannejadi, Mohammad
Siro, Clemencia
Faggioli, Guglielmo
Publication Year :
2024

Abstract

The LLMJudge challenge is organized as part of the LLM4Eval workshop at SIGIR 2024. Test collections are essential for evaluating information retrieval (IR) systems. The evaluation and tuning of a search system is largely based on relevance labels, which indicate whether a document is useful for a specific search and user. However, collecting relevance judgments on a large scale is costly and resource-intensive. Consequently, typical experiments rely on third-party labelers who may not always produce accurate annotations. The LLMJudge challenge aims to explore an alternative approach by using LLMs to generate relevance judgments. Recent studies have shown that LLMs can generate reliable relevance judgments for search systems. However, it remains unclear which LLMs can match the accuracy of human labelers, which prompts are most effective, how fine-tuned open-source LLMs compare to closed-source LLMs like GPT-4, whether there are biases in synthetically generated data, and if data leakage affects the quality of generated labels. This challenge will investigate these questions, and the collected data will be released as a package to support automatic relevance judgment research in information retrieval and search.

Comment: LLMJudge Challenge Overview, 3 pages
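
For readers unfamiliar with LLM-based relevance judgment, the sketch below illustrates the general idea described in the abstract: prompt a model with a query-passage pair and ask it for a graded relevance label. The prompt wording, the 0-3 grading scale, the model name, and the judge_relevance helper are illustrative assumptions for this sketch, not the challenge's official prompt or setup.

# Minimal sketch: asking an LLM for a graded relevance judgment.
# The prompt text and 0-3 scale are illustrative assumptions,
# not the official LLMJudge challenge prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def judge_relevance(query: str, passage: str, model: str = "gpt-4o") -> int:
    """Return a graded relevance label (0-3) for a query-passage pair."""
    prompt = (
        "You are a relevance assessor. Given a search query and a passage, "
        "output a single integer relevance grade:\n"
        "3 = perfectly relevant, 2 = highly relevant, "
        "1 = related but does not answer the query, 0 = irrelevant.\n\n"
        f"Query: {query}\nPassage: {passage}\nGrade:"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for reproducible labels
    )
    text = response.choices[0].message.content.strip()
    # Keep only the leading digit in case the model adds an explanation.
    for ch in text:
        if ch.isdigit():
            return int(ch)
    return 0  # fall back to "irrelevant" if no grade can be parsed


# Example usage:
# grade = judge_relevance("how do solar panels work",
#                         "Photovoltaic cells convert sunlight into electricity...")

In practice, the challenge's open questions (prompt design, open- vs. closed-source models, bias, and data leakage) would be studied by varying the prompt and model in a loop like this and comparing the resulting labels against human judgments.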

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.08896
Document Type :
Working Paper