
Deep Learning-Based Perceptual Video Quality Enhancement for 3D Synthesized View.

Authors :
Zhang, Huan
Zhang, Yun
Zhu, Linwei
Lin, Weisi
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Aug 2022, Vol. 32, Issue 8, p5080-5094. 15p.
Publication Year :
2022

Abstract

Due to occlusion among views and temporal inconsistency in depth video, spatio-temporal distortion occurs in 3D synthesized video generated with depth image-based rendering. In this paper, we propose a deep Convolutional Neural Network (CNN)-based synthesized video denoising algorithm to reduce temporal flicker distortion and improve the perceptual quality of 3D synthesized video. First, we analyze the spatio-temporal distortion and formulate its elimination as a perceptual video denoising problem. Then, a deep learning-based synthesized video denoising network is proposed, in which a CNN-friendly spatio-temporal loss function, derived from a synthesized video quality metric, is integrated with a single-image denoising network architecture. Finally, specific schemes, i.e., Synthesized Video Denoising Networks (SynVD-Nets), and a general scheme, i.e., a General SynVD-Net (GSynVD-Net), are developed on top of existing CNN-based denoising models to handle synthesized video with different distortion levels more effectively. Experimental results show that the proposed SynVD-Net and GSynVD-Net outperform deep learning-based counterparts and conventional denoising methods, and significantly enhance the perceptual quality of 3D synthesized video. [ABSTRACT FROM AUTHOR]
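The abstract's key idea is pairing a single-image denoising backbone with a loss that also penalizes temporal flicker. The paper derives that loss from a synthesized-video quality metric which is not reproduced in this record, so the sketch below is only an illustration of the general pattern: a per-frame residual denoiser supervised by a generic spatial-fidelity term plus a temporal-consistency term standing in for the metric-derived loss. The network, the weighting factor alpha, and all tensor shapes are assumptions, not the authors' SynVD-Net.

```python
# Illustrative sketch only (not the authors' SynVD-Net or their metric-derived loss):
# a per-frame denoising CNN trained with a combined spatial + temporal-consistency loss.
import torch
import torch.nn as nn

class FrameDenoiser(nn.Module):
    """DnCNN-style residual denoiser applied to one frame at a time (assumed backbone)."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict the noise residual and subtract it from the input frame.
        return x - self.body(x)

def spatio_temporal_loss(denoised, reference, alpha=0.5):
    """Spatial fidelity per frame plus a temporal term that penalizes flicker:
    frame-to-frame changes in the output should match those in the reference.
    Tensors are shaped (batch, time, channels, height, width). The weight
    `alpha` is an illustrative assumption."""
    spatial = torch.mean((denoised - reference) ** 2)
    d_t = denoised[:, 1:] - denoised[:, :-1]   # temporal differences of the output
    r_t = reference[:, 1:] - reference[:, :-1] # temporal differences of the reference
    temporal = torch.mean((d_t - r_t) ** 2)
    return spatial + alpha * temporal

# Usage: denoise each frame independently, then supervise with the joint loss.
model = FrameDenoiser()
noisy = torch.rand(2, 5, 3, 64, 64)   # synthesized (distorted) clip, random stand-in data
clean = torch.rand(2, 5, 3, 64, 64)   # reference clip, random stand-in data
b, t, c, h, w = noisy.shape
out = model(noisy.view(b * t, c, h, w)).view(b, t, c, h, w)
loss = spatio_temporal_loss(out, clean)
loss.backward()
```

In this reading, the temporal term is what makes the loss "spatio-temporal" while the backbone remains a single-image architecture, which matches the abstract's description of integrating a CNN-friendly loss with an image denoising network rather than switching to a video-input model.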

Details

Language :
English
ISSN :
1051-8215
Volume :
32
Issue :
8
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
158333574
Full Text :
https://doi.org/10.1109/TCSVT.2022.3147788