
A survey on multimodal-guided visual content synthesis.

Authors:
Zhang, Ziqi
Li, Zeyu
Wei, Kun
Pan, Siduo
Deng, Cheng
Source:
Neurocomputing. Aug 2022, Vol. 497, p110-128. 19p.
Publication Year:
2022

Abstract

With the increasing interest in various creative scenes such as social media, film production, and intelligence courses, people expect to be able to create rich visual content according to their subjective ideas and actual needs. In this context, visual content synthesis techniques based on multimodal data have attracted much attention in recent years. Compared with traditional generative methods, multimodal data offer more flexible and concrete clues, providing an interactive and controllable way to generate the desired visual content. In this survey, we comprehensively summarize the progress in multimodal-guided visual content synthesis. We first formulate a taxonomy of visual content synthesis and divide the field into four subfields according to the input modality: visual-guided, text-guided, audio-guided, and other-modality-guided visual content synthesis. For each subfield, we describe the paradigm of the corresponding modality-guided synthesis and discuss the signature methods, most of which are based on Generative Adversarial Networks (GANs). Next, we present commonly used benchmark datasets and evaluation metrics, as well as detailed comparisons between different methods. Finally, we provide insight into current research challenges and possible future research directions.

Details

Language:
English
ISSN:
0925-2312
Volume:
497
Database:
Academic Search Index
Journal:
Neurocomputing
Publication Type:
Academic Journal
Accession Number:
157104673
Full Text:
https://doi.org/10.1016/j.neucom.2022.04.126