
RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content

Authors :
Peng, Tianhao
Feng, Chen
Danier, Duolikun
Zhang, Fan
Vallade, Benoit
Mackin, Alex
Bull, David
Publication Year :
2024

Abstract

With recent advances in deep learning, numerous algorithms have been developed to enhance video quality, reduce visual artifacts, and improve perceptual quality. However, little research has been reported on the quality assessment of enhanced content: the evaluation of enhancement methods is often based on quality metrics that were designed for compression applications. In this paper, we propose a novel blind deep video quality assessment (VQA) method specifically for enhanced video content. It employs a new Recurrent Memory Transformer (RMT)-based network architecture to obtain video quality representations, which is optimized through a novel content-quality-aware contrastive learning strategy based on a new database containing 13K training patches with enhanced content. The extracted quality representations are then combined through linear regression to generate video-level quality indices. The proposed method, RMT-BVQA, has been evaluated on the VDPVE (VQA Dataset for Perceptual Video Enhancement) database through five-fold cross-validation. The results show its superior correlation performance when compared to ten existing no-reference quality metrics.

Comment: This paper has been accepted by the ECCV 2024 AIM Advances in Image Manipulation workshop.
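The abstract states that extracted quality representations are combined through linear regression to produce a video-level quality index. Below is a minimal Python sketch of that pooling-and-regression step only, not the authors' code: the 128-dimensional features, the mean pooling over segments, and the use of scikit-learn's LinearRegression are all assumptions made for illustration.

    # Sketch (assumed setup, not the paper's implementation): map per-segment
    # quality representations to a single video-level quality index.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def video_level_features(segment_reprs: np.ndarray) -> np.ndarray:
        # Average per-segment representations of shape (n_segments, dim)
        # into one feature vector for the whole video (assumed pooling).
        return segment_reprs.mean(axis=0)

    # Toy training data: 8 videos, each with 10 segments of 128-dim
    # representations, and one subjective quality score (e.g. MOS) per video.
    rng = np.random.default_rng(0)
    X_train = np.stack([video_level_features(rng.normal(size=(10, 128)))
                        for _ in range(8)])
    y_train = rng.uniform(1.0, 5.0, size=8)

    # Fit the linear mapping from pooled representations to quality scores.
    regressor = LinearRegression().fit(X_train, y_train)

    # Predict a video-level quality index for an unseen video.
    new_video = video_level_features(rng.normal(size=(10, 128)))
    print(regressor.predict(new_video[None, :]))

In the paper's evaluation this kind of regression would be fitted and tested under five-fold cross-validation on the VDPVE database; the toy arrays above only stand in for the learned RMT representations.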

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.08621
Document Type :
Working Paper