
WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects

Authors:
Deutsch, Daniel
Briakou, Eleftheria
Caswell, Isaac
Finkelstein, Mara
Galor, Rebecca
Juraska, Juraj
Kovacs, Geza
Lui, Alison
Rei, Ricardo
Riesa, Jason
Rijhwani, Shruti
Riley, Parker
Salesky, Elizabeth
Trabelsi, Firas
Winkler, Stephanie
Zhang, Biao
Freitag, Markus
Publication Year:
2025

Abstract

As large language models (LLMs) become increasingly capable in languages other than English, it is important to collect benchmark datasets to evaluate their multilingual performance, including on tasks like machine translation (MT). In this work, we extend the WMT24 dataset to cover 55 languages by collecting new human-written references and post-edits for 46 new languages and dialects, in addition to post-edits of the references in 8 of the 9 languages in the original WMT24 dataset. The dataset covers four domains: literary, news, social, and speech. We benchmark a variety of MT providers and LLMs on the collected dataset using automatic metrics and find that LLMs are the best-performing MT systems in all 55 languages. These results should be confirmed using a human-based evaluation, which we leave for future work.
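
The evaluation setup described above (scoring MT systems against the new references with automatic metrics) can be reproduced in a few lines. Below is a minimal sketch, assuming the dataset is published on Hugging Face under an identifier such as `google/wmt24pp` with a config name like `en-de_DE` and `source`/`target` fields; these names, and the placeholder `translate` function, are illustrative assumptions rather than details confirmed by this record.

```python
# Minimal sketch: score an MT system against WMT24++ references with an
# automatic metric (chrF via sacrebleu). Dataset identifier, config name,
# and field names are ASSUMPTIONS for illustration, not confirmed here.
from datasets import load_dataset

import sacrebleu


def translate(text: str) -> str:
    """Placeholder MT system; swap in calls to a real provider or LLM."""
    return text  # identity "translation" keeps the sketch runnable


# Load one language pair of the extended dataset (assumed identifier/config).
ds = load_dataset("google/wmt24pp", "en-de_DE", split="train")

sources = [ex["source"] for ex in ds]      # assumed field name
references = [ex["target"] for ex in ds]   # assumed field name (post-edited refs)

# Translate every source segment with the system under evaluation.
hypotheses = [translate(src) for src in sources]

# corpus_chrf expects a list of hypotheses and a list of reference *lists*.
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"chrF: {chrf.score:.2f}")
```

chrF is used here only as a representative automatic metric; the paper's actual metric choices may differ.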

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.12404
Document Type:
Working Paper