1. MM-Eval: A Multilingual Meta-Evaluation Benchmark for LLM-as-a-Judge and Reward Models
- Authors
Son, Guijin; Yoon, Dongkeun; Suk, Juyoung; Aula-Blasco, Javier; Aslan, Mano; Kim, Vu Trong; Islam, Shayekh Bin; Prats-Cristià, Jaume; Tormo-Bañuelos, Lucía; Kim, Seungone
- Subjects
Computer Science - Computation and Language
- Abstract
Large language models (LLMs) are commonly used as evaluators in tasks such as reward modeling and LLM-as-a-judge, where they act as proxies for human preferences or judgments. This creates a need for meta-evaluation: evaluating the credibility of LLMs as evaluators. However, existing benchmarks focus primarily on English, offering limited insight into LLMs' effectiveness as evaluators in non-English contexts. To address this, we introduce MM-Eval, a multilingual meta-evaluation benchmark that covers 18 languages across six categories. MM-Eval evaluates various dimensions, including language-specific challenges such as linguistics and language hallucinations. Evaluation results show that both proprietary and open-source language models have considerable room for improvement. Further analysis reveals a tendency for these models to assign middle-ground scores to low-resource languages. We publicly release our benchmark and code.
- Comment
work in progress
- Published
2024
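
The meta-evaluation setup described in the abstract, scoring how well an LLM judge's verdicts agree with gold human preference labels, can be illustrated with a minimal sketch. Assumptions for illustration only: the `judge` callable, the field names (`prompt`, `chosen`, `rejected`), and the pairwise A/B verdict format are not MM-Eval's actual data schema or API.

```python
from typing import Callable, Dict, List


def meta_evaluate(
    judge: Callable[[str, str, str], str],
    examples: List[Dict[str, str]],
) -> float:
    """Score an LLM judge by its agreement with gold human preferences.

    Each example holds a prompt plus a human-preferred ("chosen") and a
    human-dispreferred ("rejected") response. The judge is asked which
    response is better; accuracy is the fraction of examples where it
    picks the chosen one. Field names are illustrative, not MM-Eval's schema.
    """
    if not examples:
        return 0.0
    correct = 0
    for ex in examples:
        # The judge returns "A" or "B"; the chosen response is shown as A here
        # for simplicity (real setups randomize positions to control bias).
        verdict = judge(ex["prompt"], ex["chosen"], ex["rejected"])
        correct += int(verdict.strip().upper().startswith("A"))
    return correct / len(examples)


if __name__ == "__main__":
    # Toy usage with a stub judge that always prefers the longer response.
    stub_judge = lambda p, a, b: "A" if len(a) >= len(b) else "B"
    data = [
        {"prompt": "Translate 'hello' to Spanish.", "chosen": "Hola", "rejected": "Bonjour"},
        {"prompt": "2 + 2 = ?", "chosen": "4", "rejected": "5"},
    ]
    print(f"Judge accuracy: {meta_evaluate(stub_judge, data):.2f}")
```

In practice, the positions of the chosen and rejected responses are typically swapped or randomized to control for position bias; that detail is omitted here for brevity.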