
Building Machine Translation Systems for the Next Thousand Languages

Authors:
Bapna, Ankur
Caswell, Isaac
Kreutzer, Julia
Firat, Orhan
van Esch, Daan
Siddhant, Aditya
Niu, Mengmeng
Baljekar, Pallavi
Garcia, Xavier
Macherey, Wolfgang
Breiner, Theresa
Axelrod, Vera
Riesa, Jason
Cao, Yuan
Chen, Mia Xu
Macherey, Klaus
Krikun, Maxim
Wang, Pidong
Gutkin, Alexander
Shah, Apurva
Huang, Yanping
Chen, Zhifeng
Wu, Yonghui
Hughes, Macduff
Publication Year:
2022

Abstract

In this paper we share findings from our effort to build practical machine translation (MT) systems capable of translating across over one thousand languages. We describe results in three research domains: (i) Building clean, web-mined datasets for 1500+ languages by leveraging semi-supervised pre-training for language identification and developing data-driven filtering techniques; (ii) Developing practical MT models for under-served languages by leveraging massively multilingual models trained with supervised parallel data for over 100 high-resource languages and monolingual datasets for an additional 1000+ languages; and (iii) Studying the limitations of evaluation metrics for these languages and conducting qualitative analysis of the outputs from our MT models, highlighting several frequent error modes of these types of models. We hope that our work provides useful insights to practitioners working towards building MT systems for currently understudied languages, and highlights research directions that can complement the weaknesses of massively multilingual models in data-sparse settings.

Comment: V2: updated with some details from the 24-language Google Translate launch in May 2022. V3: spelling corrections, additional acknowledgements.
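Of the three research domains above, the first, filtering web-mined data, lends itself to a brief illustration. The following is a minimal sketch of the kind of data-driven filtering the abstract alludes to, assuming a pluggable language-identification callable and illustrative confidence and length-ratio thresholds; the function name, signature, and thresholds are assumptions for illustration, not the paper's actual pipeline.

```python
from typing import Callable, Iterable, Iterator, Tuple

Pair = Tuple[str, str]


def filter_pairs(
    pairs: Iterable[Pair],
    langid: Callable[[str], Tuple[str, float]],
    src_lang: str,
    tgt_lang: str,
    min_conf: float = 0.9,       # assumed LangID confidence threshold
    max_len_ratio: float = 2.5,  # assumed source/target length-ratio bound
) -> Iterator[Pair]:
    """Yield web-mined sentence pairs that pass basic quality checks.

    `langid` maps text to a (language_code, confidence) prediction;
    any trained language-identification model can be plugged in here.
    """
    for src, tgt in pairs:
        if not src.strip() or not tgt.strip():
            continue  # drop pairs with an empty side
        longer = max(len(src), len(tgt))
        shorter = min(len(src), len(tgt))
        if longer / max(shorter, 1) > max_len_ratio:
            continue  # implausible length mismatch suggests misalignment
        src_pred, src_conf = langid(src)
        tgt_pred, tgt_conf = langid(tgt)
        if src_pred != src_lang or src_conf < min_conf:
            continue  # source side not confidently in the expected language
        if tgt_pred != tgt_lang or tgt_conf < min_conf:
            continue  # target side not confidently in the expected language
        yield src, tgt
```

Per the abstract, the paper's own approach goes further than fixed heuristics like these: it relies on semi-supervised pre-training for language identification and on filtering criteria learned from the data, which matters most for the long-tail languages where naive LangID is least reliable.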

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2205.03983
Document Type:
Working Paper