
Localizing Events in Videos with Multimodal Queries

Authors:
Zhang, Gengyuan
Fok, Mang Ling Ada
Xia, Yan
Tang, Yansong
Cremers, Daniel
Torr, Philip
Tresp, Volker
Gu, Jindong
Publication Year:
2024

Abstract

Video understanding is a pivotal task in the digital era, yet the dynamic and multi-event nature of videos makes them labor-intensive and computationally demanding to process. Thus, localizing a specific event given a semantic query has gained importance in both user-oriented applications, like video search, and academic research into video foundation models. A significant limitation in current research is that semantic queries are typically natural language descriptions of the target event's semantics. This setting overlooks the potential for multimodal semantic queries composed of images and text. To address this gap, we introduce a new benchmark, ICQ, for localizing events in videos with multimodal queries, along with a new evaluation dataset, ICQ-Highlight. Our benchmark evaluates how well models can localize an event given a multimodal semantic query consisting of a reference image, which depicts the event, and a refinement text that adjusts the image's semantics. To systematically benchmark model performance, we include 4 styles of reference images and 5 types of refinement texts, allowing us to explore performance across different domains. We propose 3 adaptation methods that tailor existing models to our new setting and evaluate 10 SOTA models, ranging from specialized models to large-scale foundation models. We believe this benchmark is an initial step toward investigating multimodal queries in video event localization.

Comment: 9 pages; fix some typos
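As a rough illustration of the query format described in the abstract, the sketch below models a multimodal semantic query as a reference image paired with a refinement text, and the output as a set of temporal spans. All names here (MultimodalQuery, LocalizationResult, localize_event) are hypothetical and are not taken from the ICQ benchmark or its code.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of the structures implied by the abstract;
# names are illustrative and do not come from the ICQ codebase.

@dataclass
class MultimodalQuery:
    reference_image: str   # path to the reference image depicting the event
    refinement_text: str   # text that adjusts the image's semantics

@dataclass
class LocalizationResult:
    spans: List[Tuple[float, float]]  # (start_sec, end_sec) of localized events
    scores: List[float]               # confidence score per span

def localize_event(video_path: str, query: MultimodalQuery) -> LocalizationResult:
    """Placeholder for an adapted localization model: given a video and a
    multimodal query, return candidate temporal spans for the queried event."""
    raise NotImplementedError

# Example usage (illustrative only):
# query = MultimodalQuery("surfing.jpg", "the surfer wears a red wetsuit")
# result = localize_event("beach_day.mp4", query)
```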

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2406.10079
Document Type:
Working Paper