
Font Completion and Manipulation by Cycling Between Multi-Modality Representations

Authors:
Yuan, Ye
Chen, Wuyang
Wang, Zhaowen
Fisher, Matthew
Zhang, Zhifei
Wang, Zhangyang
Jin, Hailin
Publication Year:
2021

Abstract

Generating font glyphs of a consistent style from one or a few reference glyphs, i.e., font completion, is an important task in typographical design. Because the problem is more well-defined than general image style transfer, it has received interest from both the vision and machine learning communities. Existing approaches treat this problem as a direct image-to-image translation task. In this work, we instead explore generating font glyphs as 2D graphic objects, using a graph as an intermediate representation, so that more intrinsic graphic properties of font styles can be captured. Specifically, we formulate a cross-modality cycled image-to-image model with a graph constructor between an image encoder and an image renderer. The novel graph constructor maps a glyph's latent code to a graph representation that matches expert knowledge and is trained to aid the translation task. Our model generates better results than both an image-to-image baseline and previous state-of-the-art methods for glyph completion. Furthermore, the graph representation output by our model provides an intuitive interface for users to perform local editing and manipulation. Our proposed cross-modality cycled representation learning has the potential to be applied to other domains with prior knowledge from different data modalities. Our code is available at https://github.com/VITA-Group/Font_Completion_Graph.

Comment: submitted to IEEE Transactions on Multimedia (TMM)
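The abstract describes a cycled pipeline: an image encoder produces a latent code, a graph constructor maps that code to a glyph graph, and an image renderer rasterizes the graph back to an image, allowing a cycle-consistency signal between the two image-side latent codes. The following is a minimal numpy sketch of that data flow only; all module internals, dimensions (latent size, node count, node features), and the point-splatting renderer are illustrative assumptions, not the authors' implementation (see the linked repository for the real one).

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64   # assumed latent code size
NUM_NODES = 16    # assumed number of graph nodes per glyph
NODE_DIM = 4      # assumed node features, e.g. (x, y) plus two attributes

def encode_image(glyph_image):
    """Stand-in image encoder: map a glyph image (H, W) to a latent code."""
    flat = glyph_image.reshape(-1)
    W = rng.standard_normal((LATENT_DIM, flat.size)) * 0.01  # random stand-in weights
    return np.tanh(W @ flat)

def construct_graph(latent):
    """Stand-in graph constructor: map a latent code to node features (N, D)."""
    W = rng.standard_normal((NUM_NODES * NODE_DIM, LATENT_DIM)) * 0.1
    return (W @ latent).reshape(NUM_NODES, NODE_DIM)

def render_image(graph, size=32):
    """Stand-in renderer: splat node positions onto a blank canvas."""
    canvas = np.zeros((size, size))
    # Interpret the first two node features as (x, y) coordinates in [-1, 1].
    xy = np.clip((np.tanh(graph[:, :2]) + 1) / 2 * (size - 1), 0, size - 1)
    for x, y in xy.astype(int):
        canvas[y, x] = 1.0
    return canvas

# One pass around the cycle: image -> latent -> graph -> image -> latent.
# A cycle-consistency loss compares the two latent codes.
glyph = rng.random((32, 32))
z = encode_image(glyph)
g = construct_graph(z)
recon = render_image(g)
z2 = encode_image(recon)
cycle_loss = float(np.mean((z - z2) ** 2))
```

The graph step is what distinguishes the method from a direct image-to-image model: because `g` is an explicit, structured object, a user can edit individual nodes before re-rendering, which is the local-manipulation interface the abstract refers to.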

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2108.12965
Document Type:
Working Paper