
A Study on Webtoon Generation Using CLIP and Diffusion Models.

Authors :
Yu, Kyungho
Kim, Hyoungju
Kim, Jeongin
Chun, Chanjun
Kim, Pankoo
Source :
Electronics (2079-9292); Sep 2023, Vol. 12 Issue 18, p3983, 12p
Publication Year :
2023

Abstract

This study harnesses deep-learning-based text-to-image generation techniques to support webtoon creators' creative output. We converted publicly available datasets (e.g., MSCOCO) into a multimodal webtoon dataset using CartoonGAN. First, this dataset was used to train contrastive language-image pre-training (CLIP), a model composed of a multilingual BERT text encoder and a Vision Transformer image encoder that learnt to associate text with images. Second, a pre-trained diffusion model was employed to generate webtoons from a text input together with its most text-similar image. The webtoon dataset comprised treatments (i.e., textual descriptions) paired with their corresponding webtoon illustrations. Through contrastive learning, CLIP extracted features from the different data modalities, drawing similar data closer together in a shared feature space while pushing dissimilar data apart; in this way, the model learnt the relationships between the modalities in the multimodal data. To generate a webtoon with the diffusion model, the CLIP features of the desired webtoon's text were provided to the pre-trained diffusion model together with those of the most text-similar image. Experiments were conducted using both single- and continuous-text inputs, with continuous-text inputs achieving an inception score of 7.14. The text-to-image technology developed here could streamline the webtoon creation process by enabling the efficient generation of webtoons from the provided text. However, the current approach cannot generate webtoons from multiple sentences or images while maintaining a consistent artistic style. Further research is therefore needed to develop a text-to-image model that handles multi-sentence and multilingual input while ensuring stylistic coherence across the generated webtoon images. [ABSTRACT FROM AUTHOR]
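The contrastive alignment and text-to-image retrieval steps described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch illustration, not the paper's implementation: the function names, the temperature value, and the assumption that text and image encoders produce same-width embedding batches are all hypothetical.

    import torch
    import torch.nn.functional as F

    def clip_contrastive_loss(text_emb, image_emb, temperature=0.07):
        # Symmetric contrastive (InfoNCE-style) loss as used in CLIP training.
        # Normalize so dot products are cosine similarities.
        text_emb = F.normalize(text_emb, dim=-1)
        image_emb = F.normalize(image_emb, dim=-1)
        # (batch, batch) similarity matrix; matched pairs lie on the diagonal.
        logits = text_emb @ image_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        # Cross-entropy in both directions pulls matched text-image pairs
        # together in the shared feature space and pushes mismatched pairs apart.
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

    def most_text_similar_image(text_emb, gallery_embs):
        # Index of the gallery image whose CLIP feature is closest to the text.
        sims = F.normalize(text_emb, dim=-1) @ F.normalize(gallery_embs, dim=-1).t()
        return sims.argmax(dim=-1)

At generation time, the pipeline described in the abstract would pass the text feature and the retrieved image's feature as conditioning to a pre-trained diffusion model; that conditioning interface depends on the specific diffusion implementation and is not sketched here.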

Subjects

Subjects :
TRANSFORMER models
ARTISTIC style

Details

Language :
English
ISSN :
2079-9292
Volume :
12
Issue :
18
Database :
Complementary Index
Journal :
Electronics (2079-9292)
Publication Type :
Academic Journal
Accession Number :
172414352
Full Text :
https://doi.org/10.3390/electronics12183983