Linear-ResNet GAN-based anime style transfer of face images.
- Authors
- Chen, Mingxi; Dai, Hansen; Wei, Shijie; and Hu, ZhenZhen
- Abstract
Directly converting real-world images into high-quality anime styles using generative adversarial networks is one of the research hotspots in computer vision. The currently popular anime generative adversarial networks AnimeGAN and WhiteBox suffer from distortion of image features and loss of detail in lines and textures. To address these problems, we introduce AnimationGAN. To preserve image details, we use linear bottlenecks in the residual network; we also employ a hybrid attention mechanism to capture the salient information in images. In addition, we adopt optimized normalizations to improve the accuracy and learning speed of the model. The experimental results show that, compared with AnimeGAN and WhiteBox, the proposed AnimationGAN achieves a smaller FID to the cartoon domain (61.73), a better IS (6.79), and faster network training (405 s per epoch). In summary, the generated animation images significantly improve line and texture detail and image feature retention, with much faster network training.
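The "linear bottlenecks" the abstract mentions refer to residual blocks whose final channel projection has no non-linear activation, so that low-dimensional features are not destroyed by a ReLU. The paper's exact architecture is not given in this record, so the following is only an illustrative fully-connected sketch of the idea; all shapes and names are assumptions:

```python
import numpy as np

def relu6(x):
    # ReLU6 activation, commonly paired with linear bottlenecks
    return np.minimum(np.maximum(x, 0.0), 6.0)

def linear_bottleneck_block(x, w_expand, w_project):
    """Toy analogue of a linear bottleneck residual block.

    x:         (n, d)  input features
    w_expand:  (d, k)  expansion weights, k > d
    w_project: (k, d)  projection weights back to d channels
    """
    h = relu6(x @ w_expand)   # expand with a non-linearity
    out = h @ w_project       # LINEAR projection: deliberately no activation
    return x + out            # residual (shortcut) connection

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
w1 = rng.standard_normal((8, 32)) * 0.1
w2 = rng.standard_normal((32, 8)) * 0.1
y = linear_bottleneck_block(x, w1, w2)
print(y.shape)  # (2, 8): a residual block preserves the input shape
```

The key design point is the absence of an activation after `w_project`: the block only applies the non-linearity in the expanded, higher-dimensional space.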
- Published
- 2023