
RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions

Authors:
Zeng, Ziyao
Wu, Yangchao
Park, Hyoungseob
Wang, Daniel
Yang, Fengyu
Soatto, Stefano
Lao, Dong
Hong, Byung-Woo
Wong, Alex
Publication Year:
2024

Abstract

We propose a method for metric-scale monocular depth estimation. Inferring depth from a single image is an ill-posed problem due to the loss of scale from perspective projection during the image formation process. Any chosen scale is a bias, typically inherited from the training dataset; hence, existing works have instead opted to predict relative (normalized, inverse) depth. Our goal is to recover metric-scaled depth maps through a linear transformation. The crux of our method lies in the observation that certain objects (e.g., cars, trees, street signs) are typically found in or associated with certain types of scenes (e.g., outdoor). We explore whether language descriptions can be used to transform relative depth predictions into metric-scale ones. Our method, RSA, takes as input a text caption describing the objects present in an image and outputs the parameters of a linear transformation that can be applied globally to a relative depth map to yield metric-scaled depth predictions. We demonstrate our method on recent general-purpose monocular depth models on indoor (NYUv2, VOID) and outdoor (KITTI) datasets. When trained on multiple datasets, RSA can serve as a general alignment module in zero-shot settings. Our method improves over common practices for aligning relative depth to metric depth and yields predictions comparable to an upper bound obtained by fitting relative depth to ground truth via a linear transformation.
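To make the core idea concrete, the following is a minimal sketch, not the authors' released implementation, of how a caption embedding might be mapped to the two parameters of a global linear transform on relative depth. The module name TextToScaleShift, the small MLP head, and the assumption of a frozen text encoder producing a fixed-size embedding are all illustrative; only the global transform d_metric = scale * d_rel + shift is stated in the abstract.

```python
# Illustrative sketch, assuming (hypothetically) a frozen text encoder
# that yields a fixed-size caption embedding. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToScaleShift(nn.Module):
    """Map a caption embedding to a global (scale, shift) pair."""

    def __init__(self, embed_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # outputs [raw_scale, shift]
        )

    def forward(self, text_embedding: torch.Tensor):
        raw_scale, shift = self.mlp(text_embedding).unbind(dim=-1)
        scale = F.softplus(raw_scale)  # keep the scale strictly positive
        return scale, shift

def to_metric(relative_depth: torch.Tensor, scale, shift) -> torch.Tensor:
    # Global linear transform d_metric = scale * d_rel + shift,
    # broadcast over a (B, H, W) batch of relative depth maps.
    return scale.view(-1, 1, 1) * relative_depth + shift.view(-1, 1, 1)
```

The "upper bound" mentioned at the end of the abstract corresponds to the oracle case of fitting the same two parameters against ground-truth depth, e.g. by per-image least squares:

```python
def lstsq_upper_bound(relative_depth: torch.Tensor, gt_depth: torch.Tensor):
    # Oracle alignment: least-squares fit of (scale, shift) to ground truth.
    d = relative_depth.flatten()
    A = torch.stack([d, torch.ones_like(d)], dim=-1)  # (N, 2) design matrix
    sol = torch.linalg.lstsq(A, gt_depth.flatten().unsqueeze(-1)).solution
    return sol[0].item(), sol[1].item()
```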

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.02924
Document Type:
Working Paper