
How Much Does Tokenization Affect Neural Machine Translation?

Authors:
Domingo, Miguel
García-Martínez, Mercedes
Helle, Alexandre
Casacuberta, Francisco
Herranz, Manuel
Publication Year:
2018

Abstract

Tokenization, or segmentation, is a broad concept that covers processes ranging from simple ones, such as separating punctuation from words, to more sophisticated ones, such as applying morphological knowledge. Neural Machine Translation (NMT) requires a limited-size vocabulary to keep computational costs manageable, as well as enough examples of each token to estimate word embeddings. Separating punctuation and splitting tokens into words or subwords has proven helpful for reducing vocabulary size and increasing the number of examples of each word, thereby improving translation quality. Tokenization is more challenging when dealing with languages that have no separator between words. In order to assess the impact of tokenization on the quality of the final translation in NMT, we experimented with five tokenizers over ten language pairs. We concluded that tokenization significantly affects the final translation quality and that the best tokenizer differs across language pairs.

Comment: Accepted for publication at CICLing 2019
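As a concrete illustration of the subword splitting the abstract refers to, below is a minimal Python sketch of byte-pair encoding (BPE), a common subword tokenization scheme for NMT. The toy corpus, merge count, and function name are assumptions for illustration; they are not the tokenizers or data actually compared in the paper.

from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a word-frequency dict (toy sketch)."""
    # Represent each word as a tuple of symbols, starting from characters,
    # with an end-of-word marker so merges cannot cross word boundaries.
    vocab = {tuple(w) + ("</w>",): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the most frequent pair as a new merged symbol everywhere.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

# Toy usage: frequent substrings ("est", "low") emerge as subword units,
# shrinking the vocabulary while keeping many examples per unit.
corpus = Counter("low low low lower lower newest newest newest widest".split())
print(learn_bpe(corpus, 5))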

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1812.08621
Document Type:
Working Paper