
DepGAN: Leveraging Depth Maps for Handling Occlusions and Transparency in Image Composition

Authors:
Ghoneim, Amr
Poovvancheri, Jiju
Akiyama, Yasushi
Chen, Dong
Publication Year:
2024

Abstract

Image composition is a complex task that requires substantial information about the scene, such as perspective, lighting, shadows, occlusions, and object interactions, to produce an accurate and realistic result. Previous methods have relied predominantly on 2D information, neglecting the potential of 3D spatial information. In this work, we propose DepGAN, a Generative Adversarial Network that utilizes depth maps and alpha channels to rectify inaccurate occlusions and enhance transparency effects in image composition. Central to our network is a novel loss function, Depth Aware Loss, which quantifies the pixel-wise depth difference to accurately delineate occlusion boundaries when compositing objects at different depth levels. Furthermore, we enhance the network's learning process with opacity data, enabling it to effectively handle compositions involving transparent and semi-transparent objects. We evaluated our model against state-of-the-art image composition GANs on both real and synthetic benchmark datasets. The results show that DepGAN significantly outperforms existing methods in object placement semantics, transparency, and occlusion handling, both visually and quantitatively. Our code is available at https://amrtsg.github.io/DepGAN/.

Comment: 10 pages, 13 figures
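The abstract does not give the exact formulation of Depth Aware Loss. As a rough illustration only, a minimal PyTorch sketch of a pixel-wise, opacity-weighted depth penalty might look like the following; the function name, tensor shapes, and the smaller-is-closer depth convention are assumptions, not the paper's actual method.

```python
import torch

def depth_aware_loss(fg_depth: torch.Tensor,
                     bg_depth: torch.Tensor,
                     fg_alpha: torch.Tensor) -> torch.Tensor:
    """Hypothetical pixel-wise depth-aware penalty (illustrative sketch).

    fg_depth, bg_depth: (B, 1, H, W) depth maps, smaller values = closer.
    fg_alpha: (B, 1, H, W) foreground opacity in [0, 1] from the alpha channel.
    """
    # Positive wherever the background is closer than the foreground,
    # i.e. pixels where the composited foreground should be occluded.
    violation = torch.relu(fg_depth - bg_depth)
    # Weight by opacity so fully transparent foreground pixels incur no cost,
    # letting the network handle transparent and semi-transparent objects.
    return (fg_alpha * violation).mean()

# Example usage with dummy tensors:
fg_d = torch.rand(2, 1, 64, 64)
bg_d = torch.rand(2, 1, 64, 64)
alpha = torch.rand(2, 1, 64, 64)
loss = depth_aware_loss(fg_d, bg_d, alpha)
```

Such a term would be added to the usual adversarial objective; the actual weighting and boundary handling in DepGAN may differ.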

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.11890
Document Type:
Working Paper