CF-GAN: cross-domain feature fusion generative adversarial network for text-to-image synthesis.
- Source :
- Visual Computer; Apr2023, Vol. 39 Issue 4, p1283-1293, 11p
- Publication Year :
- 2023
-
Abstract
- In recent years, generative adversarial networks have successfully synthesized images from text descriptions. However, problems remain: the generated image is not deeply grounded in the semantics of the text description, the target object in the generated image is often incomplete, and its texture structure is not rich enough. We therefore propose a network framework, the cross-domain feature fusion generative adversarial network (CF-GAN), which comprises two modules, a feature fusion-enhanced response module (FFERM) and a multi-branch residual module (MBRM), and refines the generated images through deep feature fusion. FFERM deeply integrates word-level vector features with image features. MBRM is a relatively simple, novel residual structure that replaces the traditional residual module to extract features more fully. We conducted experiments on the CUB and COCO datasets. Compared with AttnGAN, the Inception Score on the CUB dataset improves from 4.36 to 4.83 (a 10.78% increase); compared with DM-GAN, the Inception Score on the COCO dataset improves from 30.49 to 31.13 (a 2.06% increase). Extensive experiments and ablation studies demonstrate the proposed CF-GAN's superiority over other methods. [ABSTRACT FROM AUTHOR]
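The abstract describes FFERM as deeply integrating word-level text vectors with image features. A minimal sketch of one common way such word-image fusion is done (attention from spatial image locations over word vectors, followed by a residual-style merge) is shown below; the function names and shapes are illustrative assumptions, not the authors' actual FFERM implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_word_image(words, image):
    """Hypothetical word-image fusion sketch (not the paper's exact FFERM).

    words: (T, d) word-level feature vectors from the text encoder.
    image: (N, d) spatial image features (N locations, d channels).
    Returns (N, d): each location is enriched with a word context vector.
    """
    scores = image @ words.T           # (N, T) location-word similarities
    attn = softmax(scores, axis=1)     # attention over words per location
    context = attn @ words             # (N, d) word context per location
    return image + context             # residual-style deep fusion

rng = np.random.default_rng(0)
words = rng.normal(size=(12, 64))      # e.g. 12 words, 64-dim embeddings
image = rng.normal(size=(49, 64))      # e.g. 7x7 feature map flattened
fused = fuse_word_image(words, image)
print(fused.shape)                     # (49, 64)
```

The residual add at the end mirrors the abstract's emphasis on residual structures (MBRM): the fused output preserves the original image features while injecting text-conditioned context.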
- Subjects :
- GENERATIVE adversarial networks
- PROBABILISTIC generative models
- DEEP learning
- Language :
- English
- ISSN :
- 0178-2789
- Volume :
- 39
- Issue :
- 4
- Database :
- Complementary Index
- Journal :
- Visual Computer
- Publication Type :
- Academic Journal
- Accession number :
- 162802593
- Full Text :
- https://doi.org/10.1007/s00371-022-02404-6