Image and Video Style Transfer Based on Transformer
- Source :
- IEEE Access, Vol 11, Pp 56400-56407 (2023)
- Publication Year :
- 2023
- Publisher :
- IEEE, 2023.
Abstract
- The essence of image style transfer is to generate images that preserve the content of the original content image while presenting the artistic characteristics of a guiding style image. The rapid rise of deep learning has produced further advances in image style transfer, an already popular research area. Nevertheless, owing to the limitations of Convolutional Neural Networks (CNNs), extracting and retaining long-range dependencies of the input images is difficult, so image style transfer based on traditional CNNs is biased in its representation of content images. To address these problems, this paper proposes STLTSF (Style Transfer based on Transformer), a transformer-based method that performs image style transfer by exploiting the long-range dependencies of the input images. Unlike traditional vision transformers, STLTSF uses two different transformer encoders, one for generating a domain-specific content sequence and the other for generating a style sequence. A multi-layer transformer decoder then stylizes the content sequence according to the style sequence. The proposed STLTSF approach outperforms traditional CNN-based methods in both qualitative and quantitative experiments.
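- The abstract describes a two-encoder, one-decoder transformer layout. Below is a minimal PyTorch sketch of that layout, assuming a convolutional patch embedding and standard `nn.TransformerEncoder`/`nn.TransformerDecoder` blocks; all module names, dimensions, and the upsampling head are illustrative assumptions rather than the authors' implementation, and positional encodings and training losses are omitted.

```python
# Minimal sketch of the two-encoder/one-decoder design described in the
# abstract. NOT the paper's code: patch size, d_model, layer counts, and
# the conv-based image head are placeholder assumptions.
import torch
import torch.nn as nn

class StyleTransferTransformer(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_layers=3, patch=8):
        super().__init__()
        # Patch embedding: split an RGB image into a token sequence.
        self.to_tokens = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Two domain-specific encoders, as in the abstract:
        # one for the content sequence, one for the style sequence.
        # (nn.TransformerEncoder deep-copies the layer, so weights differ.)
        self.content_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.style_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Multi-layer decoder: stylizes the content sequence by
        # cross-attending to the encoded style sequence.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        # Placeholder upsampling head back to an RGB image.
        self.to_image = nn.ConvTranspose2d(d_model, 3, kernel_size=patch, stride=patch)

    def embed(self, img):
        feat = self.to_tokens(img)                       # (B, D, H/p, W/p)
        b, d, h, w = feat.shape
        return feat.flatten(2).transpose(1, 2), (h, w)   # (B, HW, D)

    def forward(self, content_img, style_img):
        c_seq, (h, w) = self.embed(content_img)
        s_seq, _ = self.embed(style_img)
        c_enc = self.content_encoder(c_seq)
        s_enc = self.style_encoder(s_seq)
        # Content tokens act as queries; style tokens supply keys/values,
        # giving every output token access to long-range style cues.
        out = self.decoder(tgt=c_enc, memory=s_enc)      # (B, HW, D)
        b, n, d = out.shape
        out = out.transpose(1, 2).reshape(b, d, h, w)    # back to a feature map
        return self.to_image(out)

# Usage: stylized = StyleTransferTransformer()(content, style)
# with content and style as (B, 3, 256, 256) tensors.
```

- Feeding the content sequence as decoder queries and the style sequence as memory is one plausible reading of "stylizing the content sequence according to the style sequence"; the cross-attention is what gives the method its long-range advantage over purely local CNN features.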
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 11
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.425688a033b1402f887fd15df5b63205
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2023.3283260