1. Occlusion-Aware Amodal Depth Estimation for Enhancing 3D Reconstruction From a Single Image
- Authors
- Seong-Uk Jo, Du Yeol Lee, and Chae Eun Rhee
- Subjects
- Occlusion, amodal segmentation, depth estimation, 3D reconstruction, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In various fields such as robotics navigation, autonomous driving, and augmented reality, the demand for reconstructing three-dimensional (3D) scenes from two-dimensional (2D) images captured by a camera is growing. With advances in deep learning, monocular depth prediction research has gained momentum, leading to the exploration of 3D reconstruction from a single image. While previous studies have attempted to restore occluded regions by training deep networks on high-resolution 3D data or by jointly learning 3D segmentation, perfect restoration of occluded objects remains challenging. Such 3D mesh generation methods often result in unrealistic interactions with graphic objects, limiting their applicability. To address this limitation, this paper introduces an amodal depth estimation approach that enhances the completeness of 3D reconstruction. Using amodal masks that recover occluded regions, the method predicts the depths of obscured areas. An iterative amodal depth estimation framework allows the approach to handle scenes containing deep occlusions. A spatially-adaptive normalization (SPADE) fusion block within the amodal depth estimation model effectively combines amodal mask features and image features to improve the accuracy of depth estimation for occluded regions. The proposed system outperforms conventional depth inpainting networks on occluded-region depth estimation. Unlike models that explicitly rely on multiple RGB or depth images to handle occlusion, the proposed model implicitly extracts amodal depth information from a single image, significantly enhancing the quality of 3D reconstruction even when a single image serves as input. The code and data used in the paper are available at https://github.com/Seonguke/Occlusion-aware-Amodal-Depth-Estimation-for-Enhancing-3D-Reconstruction-from-a-Single-Image/ for further research and feedback.
- Published
- 2024
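The abstract describes a SPADE fusion block that modulates image features with amodal-mask features so that depth prediction in occluded regions is conditioned on the recovered mask. The sketch below is a minimal, illustrative PyTorch version of such a spatially-adaptive normalization fusion block, not the paper's exact architecture: the `SPADEFusionBlock` name, single-channel mask input, channel sizes, and the choice of instance normalization are all assumptions made for illustration.

```python
# Illustrative sketch of a SPADE-style fusion block: amodal-mask features
# produce per-pixel scale (gamma) and shift (beta) that modulate normalized
# image features. Layer choices and sizes here are assumptions, not the
# paper's reported design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPADEFusionBlock(nn.Module):
    def __init__(self, feature_channels: int, mask_channels: int = 1, hidden: int = 64):
        super().__init__()
        # Parameter-free normalization of the image features.
        self.norm = nn.InstanceNorm2d(feature_channels, affine=False)
        # Shared embedding of the amodal mask, then per-pixel scale and shift.
        self.shared = nn.Sequential(
            nn.Conv2d(mask_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, image_feat: torch.Tensor, amodal_mask: torch.Tensor) -> torch.Tensor:
        # Resize the amodal mask to the feature resolution before embedding it.
        mask = F.interpolate(amodal_mask, size=image_feat.shape[-2:], mode="nearest")
        embedded = self.shared(mask)
        # Spatially-adaptive modulation: out = norm(x) * (1 + gamma) + beta.
        return self.norm(image_feat) * (1 + self.gamma(embedded)) + self.beta(embedded)


if __name__ == "__main__":
    block = SPADEFusionBlock(feature_channels=128)
    feats = torch.randn(1, 128, 60, 80)   # image features from a depth encoder (hypothetical shape)
    mask = torch.rand(1, 1, 240, 320)     # amodal mask covering the occluded region
    print(block(feats, mask).shape)       # torch.Size([1, 128, 60, 80])
```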