
CSRNet: Focusing on critical points for depth completion.

Authors :
Li, Bocen
Wang, Yifan
Wang, Lijun
Lu, Huchuan
Source :
Image & Vision Computing. Jul 2024, Vol. 147.
Publication Year :
2024

Abstract

Depth completion is an effective method for generating dense depth maps from sparse ones. In recent studies, the majority of points, which we call standard points, often exhibit sub-optimal performance. This issue arises from the need to fit a very small number of points, termed challenging points, which consist of noise and regions with discontinuous depth in the ground truth. At the same time, traditional evaluations cannot recognize this situation: they are dominated by these few challenging points, whose performance improvements may not significantly benefit related tasks, while standard points, which are critical for these tasks, are not effectively measured. This discrepancy highlights the need for a more targeted approach and evaluation method for depth completion. To solve these problems, we propose a standard-point-enhancing learning paradigm that aims to improve performance on standard points and consists of a Cascaded Segmentation-to-Regression Network (CSRNet) and a Mining L1 loss. CSRNet includes two branches: DSNet and DRNet. DSNet uses segmentation to generate a coarse depth map, providing challenging-point-insensitive information. DRNet adopts a coarse-to-fine approach to learn the residual depth map between the coarse depth map and the ground-truth depth map. In addition, our Mining L1 loss leverages the segmentation results to filter out potential challenging points, allowing the network to concentrate more effectively on standard points. Lastly, we introduce Minimum Error (ME) Curves as a new way to measure the quality of predicted depth maps in a flexible and comprehensive manner, irrespective of whether the points are standard or challenging. Experimental results on the KITTI and NYUDv2 datasets show that our approach significantly improves accuracy on the majority of points.

Highlights:
• Fitting a few points with large errors may not benefit most points.
• Fitting a few points with large errors easily leads to distortion in 3D space.
• Focusing on standard points improves 2D boundaries and 3D geometry.
• A flexible evaluation method enables more comprehensive performance assessments.
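
The filtering idea behind the Mining L1 loss, and one possible reading of an ME curve, can be sketched roughly as below. This is only an illustrative sketch based on the abstract, not the authors' implementation: the function names, tensor shapes, the probability-threshold mechanism, and the interpretation of the ME curve as error over the best-q fraction of points are all assumptions.

    import torch

    def mining_l1_loss(pred_depth, gt_depth, challenge_prob, prob_threshold=0.5):
        # Hypothetical sketch. pred_depth, gt_depth: (B, 1, H, W) depth maps;
        # challenge_prob: per-pixel probability (e.g. from a segmentation branch)
        # that a pixel is a "challenging" point (noise or a depth discontinuity).
        valid = gt_depth > 0                        # supervise only pixels with ground truth
        standard = challenge_prob < prob_threshold  # drop likely challenging points
        mask = valid & standard
        if mask.sum() == 0:
            return pred_depth.new_zeros(())
        return (pred_depth[mask] - gt_depth[mask]).abs().mean()

    def minimum_error_curve(pred_depth, gt_depth, fractions=(0.25, 0.5, 0.75, 0.9, 1.0)):
        # Guess at an ME-style evaluation: mean absolute error over the best-q
        # fraction of valid pixels, for several fractions q.
        valid = gt_depth > 0
        errors, _ = torch.sort((pred_depth[valid] - gt_depth[valid]).abs())
        n = errors.numel()
        return [(q, errors[: max(1, int(q * n))].mean().item()) for q in fractions]

Under these assumptions, the loss simply excludes pixels the segmentation branch flags as likely challenging, so gradients come mostly from standard points; the curve reports how error grows as more of the hardest pixels are included in the evaluation.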

Details

Language :
English
ISSN :
02628856
Volume :
147
Database :
Academic Search Index
Journal :
Image & Vision Computing
Publication Type :
Academic Journal
Accession number :
177869601
Full Text :
https://doi.org/10.1016/j.imavis.2024.105051