
RDNeRF: relative depth guided NeRF for dense free view synthesis.

Authors :
Qiu, Jiaxiong
Zhu, Yifan
Jiang, Peng-Tao
Cheng, Ming-Ming
Ren, Bo
Source :
Visual Computer; Mar 2024, Vol. 40, Issue 3, p1485-1497, 13p
Publication Year :
2024

Abstract

In this paper, we focus on dense view synthesis with free movements in indoor scenes, which supports better user interaction than sparse views. Neural radiance field (NeRF) handles sparsely and spherically captured scenes well, but it struggles in scenes with dense free views. We extend NeRF to handle such views of indoor scenes. We present a learning-based approach named relative depth guided NeRF (RDNeRF), which jointly renders RGB images and recovers scene geometry in dense free views. To recover the geometry of each view without ground-truth depth, we propose to directly learn the relative depth with implicit functions and transform it into a geometric volume bound for geometry-aware sampling and integration in NeRF. With correct scene geometry, we further model the implicit internal relevance of the inputs to enhance the representation ability of NeRF in dense free views. We conduct extensive experiments on indoor scenes for dense free view synthesis. RDNeRF outperforms current state-of-the-art methods, achieving a PSNR of 24.95 and an SSIM of 0.77. Besides, it recovers more accurate geometry than baseline models. [ABSTRACT FROM AUTHOR]
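The abstract describes using a learned relative depth as a geometric volume bound that restricts where NeRF places samples along each ray. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that idea: it assumes a per-ray relative depth in [0, 1], a hypothetical affine mapping to scene depth, and a fixed interval half-width; the function name sample_points_with_depth_bound and all parameter names are invented for illustration, not the authors' implementation.

```python
# Illustrative sketch of depth-bounded ray sampling (assumptions noted above).
import torch

def sample_points_with_depth_bound(rays_o, rays_d, rel_depth, n_samples=64,
                                    scale=4.0, shift=0.5, margin=0.25):
    """Place NeRF samples inside a depth-centred interval on each ray.

    rays_o, rays_d : (N, 3) ray origins and directions.
    rel_depth      : (N,) relative depth in [0, 1] from an implicit network.
    scale, shift   : assumed affine mapping from relative to scene depth.
    margin         : assumed half-width of the sampling interval.
    """
    # Map relative depth to an approximate scene depth (assumed affine transform).
    depth = rel_depth * scale + shift                                # (N,)

    # Geometric volume bound: sample only within [depth - margin, depth + margin].
    near = (depth - margin).clamp(min=1e-3)                          # (N,)
    far = depth + margin                                             # (N,)

    # Stratified sampling inside the bound (standard NeRF-style jittered bins).
    t = torch.linspace(0.0, 1.0, n_samples, device=rays_o.device)    # (S,)
    z = near[:, None] * (1.0 - t) + far[:, None] * t                 # (N, S)
    jitter = (torch.rand_like(z) - 0.5) * (far - near)[:, None] / n_samples
    z = z + jitter

    # 3D sample positions along each ray.
    pts = rays_o[:, None, :] + rays_d[:, None, :] * z[..., None]     # (N, S, 3)
    return pts, z

# Usage: 1024 rays with a dummy relative-depth prediction.
rays_o = torch.zeros(1024, 3)
rays_d = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rel_depth = torch.rand(1024)
pts, z = sample_points_with_depth_bound(rays_o, rays_d, rel_depth)
print(pts.shape, z.shape)  # torch.Size([1024, 64, 3]) torch.Size([1024, 64])
```

Compared with sampling a fixed global [near, far] range, concentrating samples around a depth estimate is what makes the sampling "geometry-aware"; how RDNeRF derives and applies its bound should be taken from the paper itself.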

Subjects

Subjects :
RADIANCE
GEOMETRY

Details

Language :
English
ISSN :
0178-2789
Volume :
40
Issue :
3
Database :
Complementary Index
Journal :
Visual Computer
Publication Type :
Academic Journal
Accession number :
175459318
Full Text :
https://doi.org/10.1007/s00371-023-02863-5