Depth super-resolution from explicit and implicit high-frequency features
- Authors
Qiao, Xin; Ge, Chenyang; Zhang, Youmin; Zhou, Yanhui; Tosi, Fabio; Poggi, Matteo; Mattoccia, Stefano
- Subjects
Transformer models; feature extraction
- Abstract
Guided depth super-resolution aims to recover a high-resolution depth map from a low-resolution depth map and an associated high-resolution RGB image. However, restoring precise, sharp edges near depth discontinuities and fine structures remains challenging for state-of-the-art methods. To alleviate this issue, we propose a novel multi-stage depth super-resolution network that progressively reconstructs HR depth maps from explicit and implicit high-frequency information. We introduce an efficient transformer to obtain explicit high-frequency information; the shape bias and global context of the transformer allow our model to focus on high-frequency details between objects, i.e., depth discontinuities, rather than texture within objects. Furthermore, we project the input color images into the frequency domain to extract additional implicit high-frequency cues. Finally, to incorporate structural details, we develop a fusion strategy that combines depth features and high-frequency information in the multi-stage, multi-scale framework. Exhaustive experiments on the main benchmarks show that our approach establishes a new state of the art. Code will be publicly available at https://github.com/wudiqx106/DSR-EI.
• DSR-EI employs an efficient transformer for explicit high-frequency feature extraction.
• We propose LCF, which obtains accurate implicit high-frequency information.
• We propose AFFM to counter the information-loss issue.
• DSR-EI outperforms other state-of-the-art methods.
[ABSTRACT FROM AUTHOR]
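The abstract mentions projecting the input color images into the frequency domain to extract implicit high-frequency cues. A minimal sketch of that idea is a Fourier-domain high-pass filter: transform the image, suppress the centered low-frequency band, and invert. This is only an illustration of the general technique, not the authors' LCF module; the function name, `cutoff_ratio` parameter, and square mask are illustrative assumptions.

```python
import numpy as np

def highpass_frequency_cues(image, cutoff_ratio=0.1):
    """Extract high-frequency content from a 2-D image channel via the FFT.

    Shifts the spectrum so DC is centered, zeroes a low-frequency square
    around the center, and inverse-transforms back to the spatial domain.
    (Illustrative sketch only; not the paper's LCF module.)
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(h * cutoff_ratio), int(w * cutoff_ratio)
    mask = np.ones((h, w), dtype=bool)
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = False  # suppress low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A vertical step edge: the high-pass response concentrates at the
# discontinuity (analogous to a depth discontinuity), while flat
# regions are attenuated toward zero.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
hf = highpass_frequency_cues(img)
```

In the paper's setting such a map would serve as an implicit high-frequency cue extracted from the RGB guide; a learned frequency-domain module would replace the fixed binary mask.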
- Published
2023