
Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach

Authors :
Wu, Haoning
Zhang, Erli
Liao, Liang
Chen, Chaofeng
Hou, Jingwen
Wang, Annan
Sun, Wenxiu
Yan, Qiong
Lin, Weisi
Publication Year :
2023

Abstract

The proliferation of in-the-wild videos has greatly expanded the Video Quality Assessment (VQA) problem. Unlike early definitions that usually focus on limited distortion types, VQA on in-the-wild videos is especially challenging because it can be affected by complicated factors, including various distortions and diverse contents. Though subjective studies have collected overall quality scores for these videos, how these abstract quality scores relate to specific factors remains obscure, hindering VQA methods from making more concrete quality evaluations (e.g. the sharpness of a video). To address this, we collect over two million opinions on 4,543 in-the-wild videos across 13 dimensions of quality-related factors, including in-capture authentic distortions (e.g. motion blur, noise, flicker), errors introduced by compression and transmission, and higher-level experiences concerning semantic contents and aesthetic issues (e.g. composition, camera trajectory), to establish the multi-dimensional Maxwell database. Specifically, we ask subjects to choose among a positive, a negative, and a neutral option for each dimension. These explanation-level opinions allow us to measure the relationships between specific quality factors and abstract subjective quality ratings, and to benchmark different categories of VQA algorithms on each dimension, so as to analyze their strengths and weaknesses more comprehensively. Furthermore, we propose MaxVQA, a language-prompted VQA approach that modifies the vision-language foundation model CLIP to better capture the important quality issues observed in our analyses. MaxVQA can jointly evaluate various specific quality factors and final quality scores with state-of-the-art accuracy on all dimensions, and with superb generalization ability on existing datasets. Code and data are available at https://github.com/VQAssessment/MaxVQA.

Comment :
Proceedings of the 31st ACM International Conference on Multimedia (MM '23)
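The language-prompted idea in the abstract can be illustrated with a small sketch. This is not the MaxVQA implementation: it only shows the common antonym-prompting pattern for CLIP-style models, where a quality dimension is scored by comparing an image/frame embedding against a positive prompt (e.g. "a sharp photo") and a negative prompt (e.g. "a blurry photo"). Random vectors stand in for the real CLIP text and visual embeddings, and the function name and temperature value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512  # typical CLIP embedding width; a stand-in here

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for encoded prompts and an encoded video frame.
pos_text = l2_normalize(rng.standard_normal(dim))  # e.g. "a sharp photo"
neg_text = l2_normalize(rng.standard_normal(dim))  # e.g. "a blurry photo"
frame = l2_normalize(rng.standard_normal(dim))

def antonym_prompt_score(frame_emb, pos_emb, neg_emb, temperature=0.01):
    """Softmax over cosine similarities to the positive/negative prompts.

    Returns a score in (0, 1); higher means the frame embedding is
    closer to the positive prompt than to the negative one.
    """
    sims = np.array([frame_emb @ pos_emb, frame_emb @ neg_emb]) / temperature
    sims -= sims.max()  # numerical stability before exponentiation
    probs = np.exp(sims) / np.exp(sims).sum()
    return probs[0]

score = antonym_prompt_score(frame, pos_text, neg_text)
print(score)
```

With real CLIP encoders, one such positive/negative prompt pair would be defined per quality dimension (sharpness, noise, flicker, etc.), yielding one score per dimension.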

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1381627778
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1145/3581783.3611737