
Bridge-GAN: Interpretable Representation Learning for Text-to-Image Synthesis.

Authors :
Yuan, Mingkuan
Peng, Yuxin
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Nov2020, Vol. 30 Issue 11, p4258-4268. 11p.
Publication Year :
2020

Abstract

Text-to-image synthesis aims to generate images whose content is consistent with a given text description, a highly challenging task with two main issues: visual reality and content consistency. Recently, it has become feasible to generate images with high visual reality thanks to the significant progress of generative adversarial networks. However, translating a text description into an image with high content consistency remains difficult. To address these issues, it is reasonable to establish a transitional space with interpretable representation as a bridge to associate text and image. We therefore propose a text-to-image synthesis approach named Bridge-like Generative Adversarial Networks (Bridge-GAN). Its main contributions are: (1) A transitional space is established as a bridge for improving content consistency, where the interpretable representation can be learned by preserving the key visual information from the given text descriptions. (2) A ternary mutual information objective is designed for optimizing the transitional space and enhancing both visual reality and content consistency. It is proposed with the goal of disentangling the latent factors conditioned on the text description for further interpretable representation learning. Comprehensive experiments on two widely used datasets verify the effectiveness of our Bridge-GAN, which achieves the best performance. [ABSTRACT FROM AUTHOR]
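The abstract does not specify the form of the ternary mutual information objective. A common way to maximize mutual information between latent codes and generated output in GANs (popularized by InfoGAN, not necessarily the formulation used in Bridge-GAN) is a variational lower bound estimated as the log-likelihood an auxiliary network assigns to the true latent code of each generated sample. A minimal NumPy sketch of such a lower-bound estimator, with all names hypothetical:

```python
import numpy as np

def mi_lower_bound(q_probs, codes):
    """Variational lower bound on I(c; G(z, c)) up to a constant:
    the mean log-probability an auxiliary network Q assigns to the
    true discrete latent code c of each generated sample.

    q_probs: (N, K) predicted distribution over K discrete codes
    codes:   (N,)   integer index of the true code per sample
    """
    eps = 1e-12  # avoid log(0)
    picked = q_probs[np.arange(len(codes)), codes]
    return float(np.mean(np.log(picked + eps)))

# Toy usage: a Q that recovers the codes well yields a bound
# close to 0, the maximum for a discrete code.
q = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1]])
c = np.array([0, 1])
bound = mi_lower_bound(q, c)
```

Maximizing this term with respect to both the generator and Q tightens the bound and encourages the latent factors to remain recoverable from the output, which is one route to the disentanglement the abstract describes.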

Details

Language :
English
ISSN :
1051-8215
Volume :
30
Issue :
11
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
146783113
Full Text :
https://doi.org/10.1109/TCSVT.2019.2953753