
Generating Event-oriented Attribution for Movies via Two-Stage Prefix-Enhanced Multimodal LLM

Authors :
Lyu, Yuanjie
Xu, Tong
Niu, Zihan
Peng, Bo
Ke, Jing
Chen, Enhong
Publication Year :
2024

Abstract

The prosperity of social media platforms has raised urgent demand for semantic-rich services, e.g., event and storyline attribution. However, most existing research focuses on clip-level event understanding, primarily through basic captioning tasks, without analyzing the causes of events across an entire movie. This is a significant challenge, as even advanced multimodal large language models (MLLMs) struggle with extensive multimodal information due to limited context length. To address this issue, we propose a Two-Stage Prefix-Enhanced MLLM (TSPE) approach for event attribution, i.e., connecting associated events with their causal semantics, in movie videos. In the local stage, we introduce an interaction-aware prefix that guides the model to focus on the relevant multimodal information within a single clip, briefly summarizing the single event. Correspondingly, in the global stage, we strengthen the connections between associated events using an inferential knowledge graph, and design an event-aware prefix that directs the model to focus on associated events rather than all preceding clips, resulting in accurate event attribution. Comprehensive evaluations on two real-world datasets demonstrate that our framework outperforms state-of-the-art methods.
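The abstract does not give implementation details, but the core mechanism it names (a "prefix" that steers the model's attention) follows the general prefix-tuning idea of prepending learned vectors to a model's input sequence. The following is a minimal, hypothetical sketch of that general idea only; the function name, dimensions, and structure are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def prepend_prefix(token_embeddings: np.ndarray, prefix: np.ndarray) -> np.ndarray:
    """Prepend learned prefix vectors to a clip's token embeddings.

    Downstream attention layers then attend over the prefix alongside the
    clip's own multimodal tokens, letting the prefix bias what the model
    focuses on. Purely illustrative of prefix tuning in general.
    """
    # Both arrays are (num_vectors, embedding_dim); concatenate on axis 0
    # so the prefix occupies the first positions of the sequence.
    return np.concatenate([prefix, token_embeddings], axis=0)

# Toy example: 2 prefix vectors and 3 clip tokens, embedding size 4.
rng = np.random.default_rng(0)
prefix = rng.normal(size=(2, 4))   # hypothetical "interaction-aware" prefix
tokens = rng.normal(size=(3, 4))   # hypothetical clip token embeddings
augmented = prepend_prefix(tokens, prefix)
print(augmented.shape)  # (5, 4): prefix positions come first
```

In a two-stage setup like the one described, one such prefix would condition the per-clip (local) summarization pass and a second, event-aware prefix would condition the cross-clip (global) attribution pass; the sketch above shows only the shared prepending mechanism.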

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.09362
Document Type :
Working Paper