
PerlDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models

Authors:
Zhang, Jinhua
Sheng, Hualian
Cai, Sijia
Deng, Bing
Liang, Qiao
Li, Wen
Fu, Ying
Ye, Jieping
Gu, Shuhang
Publication Year:
2024

Abstract

Controllable generation is considered a potentially vital approach to addressing the challenge of annotating 3D data, and its precision becomes particularly imperative in data production for autonomous driving. Existing methods focus on integrating diverse generative information into controlling inputs, utilizing frameworks such as GLIGEN or ControlNet to produce commendable outcomes in controllable generation. However, such approaches intrinsically restrict generation performance to the learning capacity of the predefined network architecture. In this paper, we explore the integration of controlling information and introduce PerlDiff (Perspective-Layout Diffusion Models), a method for effective street view image generation that fully leverages perspective 3D geometric information. PerlDiff employs 3D geometric priors to guide the generation of street view images with precise object-level control within the network learning process, yielding more robust and controllable output, and it demonstrates superior controllability compared with alternative layout control methods. Empirical results show that PerlDiff markedly enhances generation precision on the NuScenes and KITTI datasets. Our code and models are publicly available at https://github.com/LabShuHangGU/PerlDiff.
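The abstract's key idea, injecting projected 3D geometry as an object-level control signal inside the network rather than as a separately learned input branch, can be sketched concretely. The snippet below is a minimal illustration, not the authors' implementation: the helper project_boxes_to_mask, the module PerspectiveGuidedAttention, and the additive attention-bias scheme are all assumptions. It merely shows one plausible way a perspective layout mask, obtained by projecting 3D bounding boxes through the camera intrinsics, could steer cross-attention between image latents and per-object condition embeddings.

```python
# Illustrative sketch only, not the PerlDiff code. Names and the exact
# attention-biasing scheme are assumptions inferred from the abstract.
import torch
import torch.nn.functional as F


def project_boxes_to_mask(boxes_3d, intrinsics, hw):
    """Rasterize each 3D box's projected 2D footprint into a binary mask.

    boxes_3d: (N, 8, 3) box corners in camera coordinates, assumed in
        front of the camera (hypothetical layout).
    intrinsics: (3, 3) camera matrix.
    hw: (H, W) target mask resolution.
    Returns: (N, H, W), one mask per object.
    """
    H, W = hw
    n = boxes_3d.shape[0]
    masks = torch.zeros(n, H, W)
    # Perspective projection: u = fx * x/z + cx, v = fy * y/z + cy.
    uv = boxes_3d @ intrinsics.T                 # (N, 8, 3)
    uv = uv[..., :2] / uv[..., 2:].clamp(min=1e-6)
    for i in range(n):
        # Crude axis-aligned fill of each projected box, enough for a sketch.
        u0, v0 = uv[i].min(0).values.clamp(min=0)
        u1, v1 = uv[i].max(0).values
        masks[i, int(v0):int(v1.clamp(max=H)), int(u0):int(u1.clamp(max=W))] = 1.0
    return masks


class PerspectiveGuidedAttention(torch.nn.Module):
    """Cross-attention whose logits are biased by per-object layout masks."""

    def __init__(self, dim, cond_dim):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(cond_dim, dim, bias=False)
        self.to_v = torch.nn.Linear(cond_dim, dim, bias=False)
        self.gamma = torch.nn.Parameter(torch.tensor(1.0))  # learned mask strength

    def forward(self, x, obj_tokens, obj_masks):
        # x: (B, HW, dim) image latents; obj_tokens: (B, N, cond_dim) per-object
        # condition embeddings; obj_masks: (B, N, HW) projected layout masks,
        # 1 where an object's projected box covers a latent position.
        q, k, v = self.to_q(x), self.to_k(obj_tokens), self.to_v(obj_tokens)
        logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B, HW, N)
        # Bias each latent position toward the object whose box covers it.
        logits = logits + self.gamma * obj_masks.transpose(-2, -1)
        return F.softmax(logits, dim=-1) @ v
```

The design point the abstract emphasizes is that the geometric prior acts inside the network's learning process rather than being fused into the controlling input as in ControlNet-style conditioning; the learned scalar gamma above merely stands in for whatever weighting the actual model uses.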

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.06109
Document Type:
Working Paper