
m3P: Towards Multimodal Multilingual Translation with Multimodal Prompt

Authors:
Yang, Jian
Guo, Hongcheng
Yin, Yuwei
Bai, Jiaqi
Wang, Bing
Liu, Jiaheng
Liang, Xinnian
Chai, Linzheng
Yang, Liqun
Li, Zhoujun
Publication Year:
2024

Abstract

Multilingual translation supports multiple translation directions by projecting all languages into a shared space, but translation quality is undermined by differences between languages in the text-only modality, especially when the number of languages is large. To bridge this gap, we introduce visual context as a universal, language-independent representation to facilitate multilingual translation. In this paper, we propose a framework that leverages a multimodal prompt to guide Multimodal Multilingual neural Machine Translation (m3P), aligning the representations of different languages with the same meaning and generating a conditional vision-language memory for translation. We construct a multilingual multimodal instruction dataset (InstrMulti102) supporting 102 languages. Our method aims to minimize the representation distance between languages by regarding the image as a central language. Experimental results show that m3P outperforms previous text-only baselines and multilingual multimodal methods by a large margin. Furthermore, probing experiments validate the effectiveness of our method in enhancing translation under low-resource and massively multilingual scenarios.

Comment: COLING 2024
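The core alignment idea in the abstract, minimizing the representation distance between languages by treating the shared image as a central anchor, can be illustrated with a short contrastive-loss sketch. This is a minimal, hypothetical illustration assuming precomputed sentence and image embeddings; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: InfoNCE-style loss that pulls each language's sentence
# embedding toward the embedding of the image it shares with its parallel
# sentences, using the image as the "central language" anchor.
import torch
import torch.nn.functional as F


def image_anchored_alignment_loss(
    text_embs: torch.Tensor,   # (num_languages, batch, dim) sentence embeddings
    image_embs: torch.Tensor,  # (batch, dim) embeddings of the shared images
    temperature: float = 0.07, # assumed value, common in contrastive setups
) -> torch.Tensor:
    """Each language's sentence should match its own image in the batch."""
    image_embs = F.normalize(image_embs, dim=-1)
    loss = 0.0
    for lang_embs in text_embs:                          # one language at a time
        lang_embs = F.normalize(lang_embs, dim=-1)
        logits = lang_embs @ image_embs.T / temperature  # (batch, batch) similarities
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = loss + F.cross_entropy(logits, targets)   # diagonal pairs are positives
    return loss / text_embs.size(0)


# Toy usage: 3 languages, batch of 4 sentence/image pairs, 512-dim embeddings.
text_embs = torch.randn(3, 4, 512)
image_embs = torch.randn(4, 512)
print(image_anchored_alignment_loss(text_embs, image_embs))
```

In this reading, the image acts as a pivot: every language is aligned against the same visual anchor rather than against every other language pairwise, so the objective grows linearly rather than quadratically with the number of languages.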

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2403.17556
Document Type:
Working Paper