
How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios

Authors:
Mazeika, Mantas
Tang, Eric
Zou, Andy
Basart, Steven
Chan, Jun Shern
Song, Dawn
Forsyth, David
Steinhardt, Jacob
Hendrycks, Dan
Publication Year: 2022

Abstract

In recent years, deep neural networks have demonstrated increasingly strong abilities to recognize objects and activities in videos. However, as video understanding becomes widely used in real-world applications, a key consideration is developing human-centric systems that understand not only the content of the video but also how it would affect the wellbeing and emotional state of viewers. To facilitate research in this setting, we introduce two large-scale datasets with over 60,000 videos manually annotated for emotional response and subjective wellbeing. The Video Cognitive Empathy (VCE) dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states. The Video to Valence (V2V) dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing. In experiments, we show how video models that are primarily trained to recognize actions and find contours of objects can be repurposed to understand human preferences and the emotional content of videos. Although there is room for improvement, predicting wellbeing and emotional response is on the horizon for state-of-the-art models. We hope our datasets can help foster further advances at the intersection of commonsense video understanding and human preference learning.

Comment: NeurIPS 2022; datasets available at https://github.com/hendrycks/emodiversity/
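
The abstract describes two forms of supervision: per-video distributions over fine-grained emotion categories (VCE) and pairwise relative-pleasantness comparisons (V2V). The sketch below is not the authors' code; it is a minimal illustration, assuming a PyTorch setup and hypothetical tensor shapes, of how such labels could be turned into training objectives: a KL-divergence loss against an annotated emotion distribution, and a Bradley-Terry-style pairwise ranking loss whose learned scalar score gives a continuous wellbeing scale.

import torch
import torch.nn.functional as F

def emotion_distribution_loss(pred_logits, target_dist):
    # KL divergence between the model's predicted emotion distribution and a
    # VCE-style annotated distribution over fine-grained emotion categories.
    log_pred = F.log_softmax(pred_logits, dim=-1)
    return F.kl_div(log_pred, target_dist, reduction="batchmean")

def relative_pleasantness_loss(score_preferred, score_other):
    # Bradley-Terry-style pairwise loss for V2V-style comparisons, assuming the
    # first video in each pair was annotated as more pleasant; training a scalar
    # score this way yields a continuous spectrum of wellbeing.
    return F.softplus(score_other - score_preferred).mean()

if __name__ == "__main__":
    # Hypothetical batch: 4 videos and 27 emotion categories (shapes are assumptions).
    pred_logits = torch.randn(4, 27)
    target_dist = torch.softmax(torch.randn(4, 27), dim=-1)
    print("emotion loss:", emotion_distribution_loss(pred_logits, target_dist).item())

    # Hypothetical scalar wellbeing scores for the preferred / other video in each pair.
    score_preferred, score_other = torch.randn(4), torch.randn(4)
    print("ranking loss:", relative_pleasantness_loss(score_preferred, score_other).item())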

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2210.10039
Document Type: Working Paper