1. Latent Variable Model for Multi-modal Translation
- Author
- Calixto, I., Rios, M., Aziz, W. (ILLC, University of Amsterdam)
- Subjects
FOS: Computer and information sciences, Computer Science - Computation and Language (cs.CL), I.2.7, Machine translation, Latent variable model, Mutual information, Embedding, Constraint (information theory), Feature (computer vision), Synthetic data, Pattern recognition, Artificial intelligence
- Abstract
In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kádár, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to conditioning on them, (ii) imposing a constraint on the minimum amount of information encoded in the latent variable, and (iii) training on additional target-language image descriptions (i.e. synthetic data).
- Comment
Paper accepted at ACL 2019. 8 pages (11 including references, 13 including appendix), 6 figures.
- Published
- 2018 (arXiv preprint); ACL 2019
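
The abstract describes a latent variable that is inferred from text and image features, conditions the target-language decoder, predicts image features, and carries at least a minimum amount of information. Below is a minimal, hypothetical PyTorch sketch of a variational objective in that spirit; it is not the authors' code, and all module names, dimensions, and the free-bits-style floor on the KL term are assumptions chosen only to illustrate the three ingredients.

```python
# Hypothetical sketch (not the paper's released implementation): a latent z is
# inferred from source-text and image features, conditions a toy target-language
# decoder, and is also used to predict the image features; a "free bits" floor on
# the KL term stands in for the minimum-information constraint.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

class LatentMMTSketch(nn.Module):
    def __init__(self, d_src=512, d_img=2048, d_z=256, vocab=10000):
        super().__init__()
        self.posterior = nn.Linear(d_src + d_img, 2 * d_z)  # q(z | x, v): text + image
        self.prior = nn.Linear(d_src, 2 * d_z)              # p(z | x): text only
        self.img_head = nn.Linear(d_z, d_img)                # predicts image features from z
        self.decoder = nn.Linear(d_src + d_z, vocab)         # toy stand-in for an NMT decoder

    @staticmethod
    def to_gaussian(params):
        mu, logvar = params.chunk(2, dim=-1)
        return Normal(mu, (0.5 * logvar).exp())

    def forward(self, src_vec, img_feat, tgt_ids, free_bits=1.0):
        q = self.to_gaussian(self.posterior(torch.cat([src_vec, img_feat], dim=-1)))
        p = self.to_gaussian(self.prior(src_vec))
        z = q.rsample()                                       # reparameterised sample
        logits = self.decoder(torch.cat([src_vec, z], dim=-1))
        nll = F.cross_entropy(logits, tgt_ids)                # translation term
        img_loss = F.mse_loss(self.img_head(z), img_feat)     # image-prediction term
        kl = kl_divergence(q, p).sum(-1).mean()
        kl = torch.clamp(kl, min=free_bits)                   # minimum-information constraint
        return nll + img_loss + kl                            # negative ELBO-style loss
```

At inference time only the text-conditioned prior p(z | x) would be needed, which mirrors the abstract's point that images are used during training but are not required at test time.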