
Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution.

Authors :
Lucas, Alice
Lopez-Tapia, Santiago
Molina, Rafael
Katsaggelos, Aggelos K.
Source :
IEEE Transactions on Image Processing; Jul 2019, Vol. 28 Issue 7, p3312-3327, 16p
Publication Year :
2019

Abstract

Video super-resolution (VSR) has become one of the most critical problems in video processing. In the deep learning literature, recent works have shown the benefits of using adversarial-based and perceptual losses to improve the performance on various image restoration tasks; however, these have yet to be applied to video super-resolution. In this paper, we propose a generative adversarial network (GAN)-based formulation for VSR. We introduce a new generator network optimized for the VSR problem, named VSRResNet, along with a new discriminator architecture to properly guide VSRResNet during GAN training. We further enhance our VSR GAN formulation with two regularizers, a distance loss in feature-space and pixel-space, to obtain our final VSRResFeatGAN model. We show that pre-training our generator with only the mean-squared-error loss already surpasses the current state-of-the-art VSR models quantitatively. We then employ the PercepDist metric to compare the state-of-the-art VSR models, and show that this metric evaluates the perceptual quality of SR solutions obtained from neural networks more accurately than the commonly used PSNR/SSIM metrics. Finally, we show that our proposed model, the VSRResFeatGAN model, outperforms the current state-of-the-art SR models, both quantitatively and qualitatively. [ABSTRACT FROM AUTHOR]
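
The sketch below illustrates, in general terms, the kind of composite generator objective the abstract describes: an adversarial term combined with distance losses in feature-space and pixel-space. It is not the authors' released code; the class name VSRGeneratorLoss, the feature_extractor argument, and the loss weights are illustrative assumptions.

    # Minimal sketch of a GAN generator loss with perceptual (feature-space)
    # and pixel-space regularizers, assuming a PyTorch setup. All names and
    # weights are hypothetical, not taken from the paper.
    import torch
    import torch.nn as nn

    class VSRGeneratorLoss(nn.Module):
        def __init__(self, feature_extractor, adv_weight=1e-3,
                     feat_weight=1.0, pix_weight=1.0):
            super().__init__()
            # Frozen network used to define the feature-space distance (assumption).
            self.feature_extractor = feature_extractor
            self.adv_weight = adv_weight
            self.feat_weight = feat_weight
            self.pix_weight = pix_weight
            self.mse = nn.MSELoss()
            self.bce = nn.BCEWithLogitsLoss()

        def forward(self, sr_frames, hr_frames, disc_logits_on_sr):
            # Pixel-space distance between super-resolved and ground-truth frames.
            pixel_loss = self.mse(sr_frames, hr_frames)
            # Feature-space (perceptual) distance on fixed extractor activations.
            feat_loss = self.mse(self.feature_extractor(sr_frames),
                                 self.feature_extractor(hr_frames).detach())
            # Adversarial term: push the discriminator to label SR frames as real.
            adv_loss = self.bce(disc_logits_on_sr,
                                torch.ones_like(disc_logits_on_sr))
            return (self.pix_weight * pixel_loss
                    + self.feat_weight * feat_loss
                    + self.adv_weight * adv_loss)

In such a setup, the generator is typically first pre-trained with the pixel-space MSE term alone (as the abstract notes) before the adversarial and feature-space terms are enabled.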

Details

Language :
English
ISSN :
1057-7149
Volume :
28
Issue :
7
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
136510107
Full Text :
https://doi.org/10.1109/TIP.2019.2895768