
Unsupervised Single-View Synthesis Network via Style Guidance and Prior Distillation

Authors :
Liu, Bingzheng
Peng, Bo
Zhang, Zhe
Huang, Qingming
Ling, Nam
Lei, Jianjun
Source :
IEEE Transactions on Circuits and Systems for Video Technology; 2024, Vol. 34, Issue 3, pp. 1604-1614
Publication Year :
2024

Abstract

View synthesis aims to learn a view transformation and synthesize target views from a single or multiple source views. Although previous view synthesis methods have achieved promising performance, they rely heavily on supervision from the target view. In this paper, we propose an unsupervised single-view synthesis network (USVS-Net) that learns the view transformation without target-view supervision. Specifically, using only a single source view, a style-guidance view synthesis model is proposed to learn an intrinsic representation that describes the object from a reference pose. With this intrinsic representation, the view transformation is learned, which boosts unsupervised single-view synthesis. Then, taking the style-guidance view synthesis model as the teacher, a prior-distillation view synthesis model is further presented as the student to learn a more direct view transformation. With the proposed method, high-quality target views are synthesized in a time-efficient manner. Experiments on both synthetic and real-scene datasets show that, despite the lack of target-view supervision, the proposed method achieves promising results compared with existing view synthesis methods.
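The teacher-student arrangement described above can be sketched in miniature. The following is a hedged illustration, not the authors' implementation: all names (`teacher_synthesize`, `student_synthesize`, `distill_loss`) and the toy two-stage/one-stage mappings are hypothetical stand-ins chosen only to show how the student can be trained against the teacher's outputs, so no ground-truth target view is required.

```python
# Toy sketch of prior distillation: a two-stage teacher (source view ->
# intrinsic representation -> target view) supervises a one-stage student
# (source view -> target view directly). Views are plain lists of floats
# standing in for images; all functions are hypothetical illustrations.

def teacher_synthesize(source_view, style_code):
    # Stand-in for the style-guidance model: map the source view to an
    # intrinsic (reference-pose) representation, then to the target view.
    intrinsic = [p * style_code for p in source_view]   # source -> intrinsic
    return [p + 1.0 for p in intrinsic]                 # intrinsic -> target

def student_synthesize(source_view, weight, bias):
    # Stand-in for the prior-distillation model: a single direct mapping,
    # which is cheaper at inference time than the two-stage teacher.
    return [p * weight + bias for p in source_view]

def distill_loss(student_out, teacher_out):
    # Mean L1 distance between student and teacher outputs; the teacher's
    # predictions act as pseudo ground truth in place of a real target view.
    return sum(abs(s - t) for s, t in zip(student_out, teacher_out)) / len(student_out)

source = [0.2, 0.5, 0.8]
teacher_out = teacher_synthesize(source, style_code=2.0)
student_out = student_synthesize(source, weight=2.0, bias=1.0)
print(distill_loss(student_out, teacher_out))  # 0.0 once the student matches the teacher
```

In the toy example the student's parameters are set so its direct mapping reproduces the teacher's composed mapping exactly, driving the distillation loss to zero; in practice those parameters would be found by gradient descent on this loss.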

Details

Language :
English
ISSN :
1051-8215 (print) and 1558-2205 (electronic)
Volume :
34
Issue :
3
Database :
Supplemental Index
Journal :
IEEE Transactions on Circuits and Systems for Video Technology
Publication Type :
Periodical
Accession number :
ejs65710705
Full Text :
https://doi.org/10.1109/TCSVT.2023.3294521