
Mono‐MVS: textureless‐aware multi‐view stereo assisted by monocular prediction.

Authors:
Fu, Yuanhao
Zheng, Maoteng
Chen, Peiyu
Liu, Xiuguo
Source:
Photogrammetric Record; Mar 2024, Vol. 39, Issue 185, p183-204, 22p
Publication Year:
2024

Abstract

Learning-based multi-view stereo (MVS) methods have made remarkable progress in recent years. However, they exhibit limited robustness when faced with occlusions and weakly or repetitively textured regions in the image. These factors often cause excessive pixel-matching errors and leave holes in the final point cloud model. To address these challenges, we propose a novel MVS network assisted by monocular prediction for 3D reconstruction. Our approach combines the strengths of monocular and multi-view branches, leveraging the semantic information that monocular prediction extracts from a single image together with the strict geometric relationships between multiple images. Moreover, we adopt a coarse-to-fine strategy that gradually reduces the number of assumed depth planes and narrows the interval between them as the resolution of the input images increases across network stages. This strategy balances computational resource consumption against model effectiveness. Experiments on the DTU, Tanks and Temples, and BlendedMVS datasets demonstrate that our method achieves outstanding results, particularly in textureless regions. [ABSTRACT FROM AUTHOR]
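To make the coarse-to-fine idea in the abstract concrete, the following is a minimal Python/NumPy sketch (not the authors' code) of how depth hypotheses could be narrowed from stage to stage: each finer stage uses fewer planes with a smaller interval, centred on the previous stage's depth estimate. The stage count, plane numbers, interval scales, resolutions and depth range below are illustrative assumptions, not values taken from the paper.

import numpy as np

# Assumed coarse-to-fine schedule: (number of depth planes, interval scale) per
# stage, coarsest stage first. As resolution grows, fewer planes are assumed
# and the spacing between them shrinks.
STAGES = [(48, 4.0), (16, 2.0), (8, 1.0)]

def build_depth_hypotheses(prev_depth, stage, depth_min, depth_max, base_interval):
    """Return depth hypotheses of shape (D, H, W) for one refinement stage."""
    num_planes, interval_scale = STAGES[stage]
    interval = base_interval * interval_scale

    if prev_depth is None:
        # Coarsest stage: sample the full depth range uniformly at low resolution.
        depths = np.linspace(depth_min, depth_max, num_planes)        # (D,)
        h, w = 128, 160                                                # assumed coarse resolution
        return np.broadcast_to(depths[:, None, None], (num_planes, h, w)).copy()

    # Finer stages: upsample the previous depth map to the new resolution and
    # place a narrower band of planes centred on it.
    prev_up = np.kron(prev_depth, np.ones((2, 2)))                     # nearest-neighbour 2x upsample
    offsets = (np.arange(num_planes) - num_planes / 2.0) * interval    # (D,)
    hypotheses = prev_up[None, :, :] + offsets[:, None, None]          # (D, H, W)
    return np.clip(hypotheses, depth_min, depth_max)

# Example: three stages, each with fewer planes over a narrower band.
depth = None
for stage in range(len(STAGES)):
    hyps = build_depth_hypotheses(depth, stage, depth_min=425.0,
                                  depth_max=935.0, base_interval=2.5)
    # In an actual MVS network a cost volume over `hyps` would be built and
    # regularised here; the sketch simply takes the middle hypothesis.
    depth = hyps[hyps.shape[0] // 2]

Because each finer stage only covers a narrow band around the previous estimate, the number of sampled planes stays small even as the image resolution grows, which is the trade-off between computational cost and model effectiveness that the abstract describes.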

Details

Language:
English
ISSN:
0031-868X
Volume:
39
Issue:
185
Database:
Complementary Index
Journal:
Photogrammetric Record
Publication Type:
Academic Journal
Accession number:
176273851
Full Text:
https://doi.org/10.1111/phor.12480