
Annotating, Understanding, and Predicting Long-term Video Memorability

Authors :
Claire-Hélène Demarty
Romain Cohendet
Karthik Yadati
Ngoc Q. K. Duong
Institut de Recherche en Communications et en Cybernétique de Nantes (IRCCyN): Mines Nantes, École Centrale de Nantes (ECN), École Polytechnique de l'Université de Nantes (EPUN), Université de Nantes (UN), PRES Université Nantes Angers Le Mans (UNAM), Centre National de la Recherche Scientifique (CNRS)
Technicolor R&I, Cesson-Sévigné
Source :
ICMR '18: 2018 International Conference on Multimedia Retrieval, Jun 2018, Yokohama, Japan. DOI: 10.1145/3206025.3206056
Publication Year :
2018
Publisher :
HAL CCSD, 2018.

Abstract

Memorability can be regarded as a useful metric of video importance to help make a choice between competing videos. Research on the computational understanding of video memorability is, however, in its early stages. No dataset is available for modelling purposes, and the few previous attempts provided protocols for collecting video memorability data that would be difficult to generalize. Furthermore, the computational features needed to build a robust memorability predictor remain largely undiscovered. In this article, we propose a new protocol for collecting long-term video memorability annotations. We measure the memory performance of 104 participants from weeks to years after memorization to build a dataset of 660 videos for video memorability prediction. This dataset is made available to the research community. We then analyze the collected data to better understand video memorability, in particular the effects of response time, duration of memory retention, and repetition of visualization on video memorability. Finally, we investigate the use of various types of audio and visual features and build a computational model for video memorability prediction. We conclude that high-level visual semantics help better predict the memorability of videos.
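For illustration only (the record above does not specify the authors' pipeline), a minimal sketch of the kind of memorability prediction task the abstract describes might look like the following. The feature dimensionality, the synthetic data, the choice of ridge regression, and the use of Spearman rank correlation as an evaluation metric are all assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch: regress video memorability scores from pre-computed
# visual descriptors. All data here is synthetic; real descriptors would be
# extracted from the 660 annotated videos.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 660 videos, each with a 512-dim "high-level" visual feature
# vector and an annotated memorability score in [0, 1].
X = rng.normal(size=(660, 512))
y = rng.uniform(0.0, 1.0, size=660)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Simple linear model as a baseline regressor.
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Rank correlation between predicted and annotated memorability scores.
rho, _ = spearmanr(pred, y_test)
print(f"Spearman rho on held-out videos: {rho:.3f}")
```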

Details

Language :
English
Database :
OpenAIRE
Journal :
ICMR '18: 2018 International Conference on Multimedia Retrieval, Jun 2018, Yokohama, Japan. DOI: 10.1145/3206025.3206056
Accession number :
edsair.doi.dedup.....df5854d0080e70c40bcd8a461a6aea9f
Full Text :
https://doi.org/10.1145/3206025.3206056