1. Dynamic context-driven progressive image inpainting with auxiliary generative units.
- Author
- Wang, Zhiwen; Li, Kai; Peng, Jinjia
- Subjects
- *INPAINTING, *GRAYSCALE model, *SEMANTICS, *PROBABILISTIC generative models
- Abstract
Image inpainting aims to restore missing or damaged regions of an image with plausible visual content. Most existing methods face challenges when dealing with images containing large holes, such as structural distortion and prominent artifacts, which stem mainly from the absence of adequate semantic guidance. To overcome this problem, this paper proposes a novel progressive image inpainting network driven by dynamic context, where the dynamic semantic prior that guides restoration not only includes information from the known region but also incorporates semantic inference made during the filling process. Specifically, a multi-view cooperative strategy is first proposed to predict the inherent semantic information by estimating the distributions of the image, grayscale, and edge maps of the masked image. In addition, to cope with potential semantic changes, an auxiliary generative unit is proposed for learning semantic information from the intermediate inpainting results, where the learned intermediate semantics are fused with the intrinsic semantics in a weighted manner to form a dynamic context that is updated in real time. Moreover, the dynamic semantic prior is propagated to the various stages of inpainting to assist in constructing refined feature maps at multiple resolutions. Experiments on the CelebA-HQ and Paris StreetView datasets demonstrate that the proposed approach recovers reasonable structures and realistic textures on images with large-scale masks, achieving state-of-the-art performance. [ABSTRACT FROM AUTHOR]
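The weighted fusion described in the abstract can be sketched in a few lines. This is a minimal illustrative NumPy sketch, not the authors' implementation: the fusion weight `alpha`, the nearest-neighbour `downsample` helper, and the toy feature maps are all hypothetical stand-ins for the paper's learned components.

```python
import numpy as np

def fuse_dynamic_context(intrinsic, intermediate, alpha=0.6):
    """Weighted fusion of the intrinsic semantic prior (estimated from
    the known region) with semantics learned from the intermediate
    inpainting result; `alpha` is a hypothetical fusion weight."""
    return alpha * intrinsic + (1.0 - alpha) * intermediate

def downsample(feat, factor):
    """Nearest-neighbour downsampling: a stand-in for propagating the
    fused prior to coarser inpainting stages."""
    return feat[::factor, ::factor]

# Toy 8x8 single-channel semantic maps.
intrinsic = np.ones((8, 8))      # prior inferred from the known region
intermediate = np.zeros((8, 8))  # semantics of the current fill result

# Dynamic context, then a multi-resolution pyramid for the stages.
context = fuse_dynamic_context(intrinsic, intermediate, alpha=0.6)
pyramid = [downsample(context, f) for f in (1, 2, 4)]
print([p.shape for p in pyramid])  # prints [(8, 8), (4, 4), (2, 2)]
```

In the paper this fusion is recomputed as inpainting progresses, so the context tracks semantic changes introduced by the network's own intermediate outputs rather than remaining fixed to the initial prior.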
- Published
- 2024