
Open Domain Dialogue Generation with Latent Images

Authors:
Yang, Ze
Wu, Wei
Hu, Huang
Xu, Can
Wang, Wei
Li, Zhoujun
Publication Year:
2020

Abstract

We consider grounding open domain dialogues with images. Existing work assumes that both an image and a textual context are available, but image-grounded dialogues by nature are more difficult to obtain than textual dialogues. Thus, we propose learning a response generation model with both image-grounded dialogues and textual dialogues by assuming that the visual scene information at the time of a conversation can be represented by an image, and trying to recover the latent images of the textual dialogues through text-to-image generation techniques. The likelihood of the two types of dialogues is then formulated by a response generator and an image reconstructor that are learned within a conditional variational auto-encoding framework. Empirical studies are conducted in both image-grounded conversation and text-based conversation. In the first scenario, image-grounded dialogues, especially under a low-resource setting, can be effectively augmented by textual dialogues with latent images; while in the second scenario, latent images can enrich the content of responses and at the same time keep them relevant to contexts.

Comment: AAAI2021
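The abstract describes a conditional variational auto-encoding setup in which a latent image is inferred from the textual context and conditions both a response generator and an image reconstructor. The sketch below illustrates how such a model might be wired up in PyTorch; it is not the authors' implementation, and every class name, layer, and dimension here is a hypothetical stand-in chosen only to make the structure concrete.

```python
# Hypothetical sketch (not the paper's code): a CVAE-style model where the
# textual context is encoded, a latent image representation z is sampled,
# and z conditions both the response generator and the image reconstructor.
import torch
import torch.nn as nn

class LatentImageCVAE(nn.Module):
    def __init__(self, vocab_size=10000, hidden=256, latent=64, img_feat=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.context_enc = nn.GRU(hidden, hidden, batch_first=True)
        # Prior network: infers the latent image z from the textual context alone.
        self.prior = nn.Linear(hidden, 2 * latent)
        # Posterior network: also uses observed image features when available.
        self.posterior = nn.Linear(hidden + img_feat, 2 * latent)
        # Image reconstructor: maps z back to image features.
        self.img_dec = nn.Linear(latent, img_feat)
        # Response generator: decodes the response conditioned on context and z.
        self.resp_dec = nn.GRU(hidden + latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, context_ids, response_ids, img_feats=None):
        ctx, _ = self.context_enc(self.embed(context_ids))
        ctx = ctx[:, -1]                      # last hidden state as context summary
        if img_feats is not None:             # image-grounded dialogue: use posterior
            mu, logvar = self.posterior(torch.cat([ctx, img_feats], -1)).chunk(2, -1)
        else:                                 # textual dialogue: fall back to the prior
            mu, logvar = self.prior(ctx).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        img_recon = self.img_dec(z)           # reconstructed (latent) image features
        resp_emb = self.embed(response_ids)
        z_rep = z.unsqueeze(1).expand(-1, resp_emb.size(1), -1)
        dec_out, _ = self.resp_dec(torch.cat([resp_emb, z_rep], -1))
        return self.out(dec_out), img_recon, mu, logvar
```

In a training loop under these assumptions, image-grounded dialogues would feed real image features to the posterior while purely textual dialogues would rely on the prior, and the objective would combine the response generation loss, the image reconstruction loss, and the standard CVAE KL term between posterior and prior.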

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2004.01981
Document Type:
Working Paper