
VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation

Authors:
He, Xuan
Jiang, Dongfu
Zhang, Ge
Ku, Max
Soni, Achint
Siu, Sherman
Chen, Haonan
Chandra, Abhranil
Jiang, Ziyan
Arulraj, Aaran
Wang, Kai
Do, Quy Duc
Ni, Yuansheng
Lyu, Bohan
Narsupalli, Yaswanth
Fan, Rongqi
Lyu, Zhiheng
Lin, Yuchen
Chen, Wenhu
Publication Year: 2024

Abstract

Recent years have witnessed great advances in video generation, yet the development of automatic video metrics has lagged significantly behind: no existing metric can provide reliable scores for generated videos. The main barrier is the lack of a large-scale human-annotated dataset. In this paper, we release VideoFeedback, the first large-scale dataset containing human-provided multi-aspect scores for 37.6K synthesized videos from 11 existing video generative models. We train VideoScore (initialized from Mantis) on VideoFeedback to enable automatic video quality assessment. Experiments show that the Spearman correlation between VideoScore and humans reaches 77.1 on VideoFeedback-test, beating the prior best metrics by about 50 points. Further results on the held-out benchmarks EvalCrafter, GenAI-Bench, and VBench show that VideoScore achieves consistently higher correlation with human judges than other metrics. Given these results, we believe VideoScore can serve as a good proxy for human raters to (1) rate different video models to track progress and (2) simulate fine-grained human feedback in Reinforcement Learning from Human Feedback (RLHF) to improve current video generation models.
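As a rough illustration of the evaluation protocol described in the abstract, the sketch below computes the Spearman correlation between an automatic metric's scores and human ratings. The scipy.stats.spearmanr call is standard; the score arrays are hypothetical placeholders, not data from the paper.

    from scipy.stats import spearmanr

    # Hypothetical scores, one entry per generated video.
    # metric_scores would come from an automatic metric such as VideoScore;
    # human_scores are the corresponding human annotations.
    metric_scores = [3.2, 1.8, 2.5, 3.9, 1.1]
    human_scores = [3.0, 2.0, 2.2, 4.0, 1.5]

    # Spearman's rho measures rank agreement, so it rewards a metric that
    # orders videos the same way humans do, regardless of scale.
    rho, p_value = spearmanr(metric_scores, human_scores)
    print(f"Spearman correlation: {rho:.3f} (p = {p_value:.3g})")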

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.15252
Document Type: Working Paper