Propagating Facial Prior Knowledge for Multitask Learning in Face Super-Resolution.

Authors :
Wang, Chenyang
Jiang, Junjun
Zhong, Zhiwei
Liu, Xianming
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Nov 2022, Vol. 32, Issue 11, p7317-7331. 15p.
Publication Year :
2022

Abstract

Existing face hallucination methods typically achieve improved performance by regularizing the model with a facial prior. Most of them first estimate facial prior information and then leverage it to guide the prediction of the target high-resolution face image. However, the accuracy of prior estimation is difficult to guarantee, especially for low-resolution face images; once the estimated prior is inaccurate or wrong, the subsequent face super-resolution performance inevitably suffers. A natural question arises: how can the facial prior be incorporated effectively and efficiently without prior estimation? To achieve this goal, we propose to learn facial prior knowledge at the training stage but to test with only the low-resolution face image, which sidesteps the difficulty of estimating an accurate prior. Instead of estimating the facial prior, we directly exploit high-quality facial priors in the training phase and progressively propagate the facial prior knowledge from the teacher network (trained on low-resolution face/high-quality facial prior and high-resolution face image pairs) to the student network (trained on low-resolution face and high-resolution face image pairs). Quantitative and qualitative comparisons on benchmark face datasets demonstrate that our method outperforms state-of-the-art face super-resolution methods. The source code of the proposed method will be available at https://github.com/wcy-cs/KDFSRNet.
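The teacher-student scheme the abstract describes can be illustrated with a short PyTorch sketch: a teacher that receives the low-resolution face plus a ground-truth facial prior, and a student that receives only the low-resolution face while being supervised by both the high-resolution target and the teacher's intermediate features. The `SRNet` module, tensor shapes, and loss weight below are illustrative assumptions, not the authors' implementation (see https://github.com/wcy-cs/KDFSRNet for the official release).

```python
# Minimal sketch of prior-knowledge distillation for face super-resolution.
# All architectures, shapes, and weights are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRNet(nn.Module):
    """Toy super-resolution backbone (a stand-in for the paper's networks)."""
    def __init__(self, in_channels: int, scale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        feat = self.body(x)            # intermediate features to distill
        return self.upsample(feat), feat

# Teacher sees the LR face concatenated with a high-quality facial prior
# (e.g. a one-channel parsing map); the student sees only the LR face.
teacher = SRNet(in_channels=3 + 1)
student = SRNet(in_channels=3)

lr_face = torch.randn(2, 3, 32, 32)    # low-resolution input (dummy data)
prior = torch.randn(2, 1, 32, 32)      # ground-truth facial prior (teacher only)
hr_face = torch.randn(2, 3, 128, 128)  # high-resolution target

# Stage 1: train the teacher with the prior as an extra input (one step shown).
t_sr, _ = teacher(torch.cat([lr_face, prior], dim=1))
F.l1_loss(t_sr, hr_face).backward()

# Stage 2: freeze the teacher and propagate its prior knowledge to the student
# via feature-level distillation plus the usual reconstruction loss.
with torch.no_grad():
    _, t_feat = teacher(torch.cat([lr_face, prior], dim=1))
s_sr, s_feat = student(lr_face)
distill_weight = 0.1                   # assumed weight, not from the paper
student_loss = F.l1_loss(s_sr, hr_face) + distill_weight * F.l1_loss(s_feat, t_feat)
student_loss.backward()

# At test time the student needs only the LR face, so no prior estimation occurs.
```

The key design point the abstract highlights is visible in stage 2: the prior enters training only through the frozen teacher's features, so inference never depends on a (potentially inaccurate) estimated prior.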

Details

Language :
English
ISSN :
1051-8215
Volume :
32
Issue :
11
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
160691249
Full Text :
https://doi.org/10.1109/TCSVT.2022.3181828