
High-resolution optical flow and frame-recurrent network for video super-resolution and deblurring.

Authors :
Fang, Ning
Zhan, Zongqian
Source :
Neurocomputing. Jun 2022, Vol. 489, p. 128-138. 11 p.
Publication Year :
2022

Abstract

In recent years, advances in deep learning have brought major developments to the study of super-resolution reconstruction. However, most super-resolution methods deal only with simply down-sampled sharp images and may lose efficacy when encountering severe blur. Severe motion blur, caused by the rapid movement of an object or a large shake of the lens, is common in video captured by cameras, and existing super-resolution algorithms often introduce a large number of artifacts and struggle to achieve satisfactory results when reconstructing such blurred video sequences. In this paper, a novel convolutional neural network is proposed that jointly performs video super-resolution (SR) and deblurring (DB) to handle severe motion blur and recover sharp high-resolution (HR) frames. In particular, a pyramid optical flow module is introduced to estimate the sharp latent image in the blurred frame and generate HR optical flow in a coarse-to-fine manner. A frame-recurrent structure then warps the previous SR frame to achieve motion compensation, making full use of the previous sharp features and temporal information to aid the restoration of subsequent frames. Finally, to further counteract the degradation caused by motion blur in the reconstruction, a parallel-fusion module is designed to extract and fuse the SR and DB features before reconstructing the output frame. Experimental results obtained in this study confirm that, compared with other advanced SR algorithms, the proposed method is both effective and efficient in handling videos that contain real motion blur. [ABSTRACT FROM AUTHOR]
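The frame-recurrent motion compensation described in the abstract, warping the previous SR output toward the current frame using HR optical flow, can be illustrated with a minimal sketch. The following is not the authors' implementation; it is a generic PyTorch illustration of flow-based warping, and the function name warp_with_flow and the toy tensors are assumptions made here for demonstration.

import torch
import torch.nn.functional as F

def warp_with_flow(prev_sr: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp the previous SR frame using HR optical flow (hypothetical helper).

    prev_sr: (N, C, H, W) previous high-resolution output frame.
    flow:    (N, 2, H, W) optical flow in pixels, channel 0 = dx, channel 1 = dy.
    """
    n, _, h, w = prev_sr.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=prev_sr.dtype, device=prev_sr.device),
        torch.arange(w, dtype=prev_sr.dtype, device=prev_sr.device),
        indexing="ij",
    )
    # Displace each pixel by its flow vector.
    x_new = xs.unsqueeze(0) + flow[:, 0]
    y_new = ys.unsqueeze(0) + flow[:, 1]
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    x_norm = 2.0 * x_new / (w - 1) - 1.0
    y_norm = 2.0 * y_new / (h - 1) - 1.0
    grid = torch.stack((x_norm, y_norm), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(prev_sr, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage sketch: motion-compensate the previous SR output so its sharp
# features can be fused with the current blurred frame (toy data).
prev_sr = torch.randn(1, 3, 256, 256)   # previous HR output
flow = torch.zeros(1, 2, 256, 256)      # HR flow; all zeros = identity warp
warped = warp_with_flow(prev_sr, flow)

In a frame-recurrent design of this kind, the warped previous output is concatenated with the current input's features, which is how the sharp details and temporal information of earlier frames propagate into subsequent reconstructions.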

Details

Language :
English
ISSN :
0925-2312
Volume :
489
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
157499571
Full Text :
https://doi.org/10.1016/j.neucom.2022.02.067