SIR-Net: Self-Supervised Transfer for Inverse Rendering via Deep Feature Fusion and Transformation From a Single Image
- Source :
- IEEE Access, Vol. 8, pp. 201861-201873 (2020)
- Publication Year :
- 2020
- Publisher :
- IEEE, 2020.
Abstract
- Measuring the material, geometry, and ambient lighting of a surface is a key step in reconstructing an object's appearance. In this article, we propose a novel deep learning-based method that extracts this information from a single RGB image to reconstruct the object's appearance. First, we design new deep convolutional neural network architectures that improve performance by fusing complementary features across hierarchical layers and across tasks. We then generate a synthetic dataset to train the proposed model, addressing the absence of ground-truth annotations for real images. To transfer from the synthetic domain to a specific real image, we introduce a self-supervised test-time training strategy that fine-tunes the trained model on that image. The proposed architecture requires only one image as input when inferring material, geometry, and ambient lighting. We evaluate the proposed method on both synthetic and real data. The results show that our trained model outperforms existing baselines on each task and yields a clear improvement in final appearance reconstruction, verifying the effectiveness of the proposed method.
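
The self-supervised test-time training idea described in the abstract (fine-tuning a synthetically trained decomposition network on a single real image via a re-rendering loss) can be sketched as follows. This is a minimal illustration, not the paper's actual SIR-Net: the `InverseRenderNet` architecture, the Lambertian `render` function, the lighting parameterization, and all hyperparameters are hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseRenderNet(nn.Module):
    """Hypothetical decomposition network (not the paper's architecture):
    predicts per-pixel albedo and normals plus a global lighting code."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.albedo_head = nn.Conv2d(64, 3, 3, padding=1)   # material
        self.normal_head = nn.Conv2d(64, 3, 3, padding=1)   # geometry
        self.light_head = nn.Linear(64, 4)                  # light dir (3) + ambient (1)

    def forward(self, img):
        feat = self.encoder(img)
        albedo = torch.sigmoid(self.albedo_head(feat))
        normals = F.normalize(self.normal_head(feat), dim=1)
        light = self.light_head(feat.mean(dim=(2, 3)))
        return albedo, normals, light

def render(albedo, normals, light):
    # Minimal Lambertian re-rendering used as the self-supervision signal.
    direction = F.normalize(light[:, :3], dim=1).view(-1, 3, 1, 1)
    ambient = light[:, 3:].view(-1, 1, 1, 1)
    shading = (normals * direction).sum(dim=1, keepdim=True).clamp(min=0)
    return albedo * (shading + ambient)

def test_time_finetune(model, image, steps=50, lr=1e-5):
    """Adapt a synthetically trained model to one real test image by
    minimizing photometric error between the re-rendered and input image."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        albedo, normals, light = model(image)
        loss = F.l1_loss(render(albedo, normals, light), image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Usage: image is a (1, 3, H, W) tensor in [0, 1].
# model = test_time_finetune(InverseRenderNet(), image)
```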
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 8
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.5002408d6eac4f39ba38c0c975f87e46
- Document Type :
- Article
- Full Text :
- https://doi.org/10.1109/ACCESS.2020.3035213