
DepthLab: From Partial to Complete

Authors :
Liu, Zhiheng
Cheng, Ka Leong
Wang, Qiuyu
Wang, Shuzhe
Ouyang, Hao
Tan, Bin
Zhu, Kai
Shen, Yujun
Chen, Qifeng
Luo, Ping
Publication Year :
2024

Abstract

Missing values remain a common challenge for depth data across its wide range of applications, stemming from various causes like incomplete data acquisition and perspective alteration. This work bridges this gap with DepthLab, a foundation depth inpainting model powered by image diffusion priors. Our model features two notable strengths: (1) it demonstrates resilience to depth-deficient regions, providing reliable completion for both continuous areas and isolated points, and (2) it faithfully preserves scale consistency with the conditioned known depth when filling in missing values. Drawing on these advantages, our approach proves its worth in various downstream tasks, including 3D scene inpainting, text-to-3D scene generation, sparse-view reconstruction with DUST3R, and LiDAR depth completion, exceeding current solutions in both numerical performance and visual quality. Our project page with source code is available at https://johanan528.github.io/depthlab_web/.
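To make the scale-consistency claim concrete: completing a depth map requires that filled-in values agree with the metric scale of the observed (known) pixels. The sketch below is a minimal, generic illustration of one common way to enforce such agreement after prediction, fitting a global scale and shift to the known region by least squares; it is not the conditioning mechanism used in DepthLab, and the function and variable names are hypothetical.

```python
import numpy as np

def align_scale_shift(pred_depth, known_depth, known_mask):
    """Align a relative depth prediction to known depth values.

    Fits a global scale s and shift t over the valid pixels
    (min_{s,t} ||s*p + t - k||^2), applies them everywhere, and
    keeps the observed depths untouched. Illustrative only; not
    the paper's method.
    """
    p = pred_depth[known_mask].reshape(-1)
    k = known_depth[known_mask].reshape(-1)
    # Solve the 2-parameter least-squares problem via lstsq.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, k, rcond=None)
    aligned = s * pred_depth + t
    # Only the missing regions are filled; known pixels are preserved.
    return np.where(known_mask, known_depth, aligned)

# Toy usage: a 4x4 depth map with a missing 2x2 block in the middle.
known = np.full((4, 4), 2.0)
mask = np.ones((4, 4), dtype=bool)
mask[1:3, 1:3] = False                 # depth-deficient region
pred = np.full((4, 4), 0.5)            # prediction on a relative scale
completed = align_scale_shift(pred, known, mask)
```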

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.18153
Document Type :
Working Paper