
Hierarchical synchronization with structured multi-granularity interaction for video question answering.

Authors :
Qi, Shanshan
Yang, Luxi
Li, Chunguo
Source :
Neurocomputing. May 2024, Vol. 582.
Publication Year :
2024

Abstract

Video Question Answering (VideoQA) requires thorough comprehension of both linguistic and visual modalities. However, recent methods face two problems: (1) synchronous modeling of object actions and frame scenes, rather than step-by-step modeling, can better mine the latent semantic attributes of videos, yet remains under-explored; (2) the relationship between cross-modal alignments at different granularities of abstraction is not fully exploited. Based on these insights, we propose a novel method named hierarchical synchronization with structured multi-granularity interaction (HSSMI) for VideoQA. First, a hierarchical synchronous reasoning module is put forward to model objects' relations and dynamics and to synchronously capture their synergistic influences over time while analyzing whole frames. The module can be viewed as multiple Object ConvLSTMs (O-CLSTMs) operating in isolation or as a single Frame ConvLSTM (F-CLSTM) operating collectively. Specifically, each O-CLSTM learns object-level action states under neighboring spatial interplay. Meanwhile, the F-CLSTM learns the frame-level scene state, where action information from the O-CLSTMs is selectively aggregated into a common memory cell of the F-CLSTM as instructed by the question. In addition, a boundary detector discovers scene discontinuities, enabling the F-CLSTM to alter its temporal connectivity and adapt its sequential encoding to each video. Thereafter, we develop a conditional VLAD with topic constraints for discriminative modality summarization. Last, a structured multi-granularity interaction module is proposed to integrate complementary clues from the global alignment between the scene summary and the full question and the local alignments between action summaries and individual words. This module encourages useful information to pass through the compositional syntactic topologies of questions to predict answers. Experiments on three public benchmark datasets demonstrate the superiority of our HSSMI over other state-of-the-art methods. Code will be publicly available at https://github.com/Qiss33/HSSMI.

• Propose a fine-grained vision-language interaction network (HSSMI) for VideoQA.
• Hierarchical synchronous reasoning for mining latent semantic structures of videos.
• Exploit complementarity of global and local alignments by iterative message passing.
• Extensive experiments demonstrate the robustness and effectiveness of HSSMI.

[ABSTRACT FROM AUTHOR]
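Since this record carries only the abstract, the following is a minimal sketch of the question-guided hierarchical synchronization idea described above: per-object recurrent cells (standing in for the paper's O-CLSTMs) whose memories are attention-pooled under the question and written into a shared frame-level cell (standing in for the F-CLSTM), with the boundary detector's output resetting temporal connectivity. All class names, shapes, and the gating formulation are illustrative assumptions, not the authors' implementation; plain LSTM cells replace ConvLSTMs purely for brevity. See the linked repository for the official code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalSyncSketch(nn.Module):
    """Hypothetical sketch; not the HSSMI implementation."""

    def __init__(self, obj_dim=256, frame_dim=256, q_dim=256, hid=256):
        super().__init__()
        self.o_cell = nn.LSTMCell(obj_dim, hid)    # shared object-level cell (O-CLSTM stand-in)
        self.f_cell = nn.LSTMCell(frame_dim, hid)  # frame-level cell (F-CLSTM stand-in)
        self.attn = nn.Linear(hid + q_dim, 1)      # question-instructed selection of object memories
        self.fuse = nn.Linear(hid, hid)            # project pooled action memory before writing

    def forward(self, obj_feats, frame_feats, q, boundary):
        """
        obj_feats:   (T, N, obj_dim)  per-frame features of N objects
        frame_feats: (T, frame_dim)   per-frame scene features
        q:           (q_dim,)         question embedding
        boundary:    (T,) in {0, 1}   1 where a scene discontinuity starts
        """
        T, N, _ = obj_feats.shape
        hid = self.o_cell.hidden_size
        h_o = torch.zeros(N, hid); c_o = torch.zeros(N, hid)
        h_f = torch.zeros(1, hid); c_f = torch.zeros(1, hid)
        for t in range(T):
            # object-level action states under the shared object cell
            h_o, c_o = self.o_cell(obj_feats[t], (h_o, c_o))
            # question-guided weights decide which object memories to aggregate
            scores = self.attn(torch.cat([c_o, q.expand(N, -1)], dim=-1))
            pooled = (F.softmax(scores, dim=0) * c_o).sum(dim=0, keepdim=True)
            # hard reset at detected scene boundaries alters temporal connectivity
            if boundary[t].item() == 1:
                h_f = torch.zeros_like(h_f); c_f = torch.zeros_like(c_f)
            # frame-level scene state; pooled action memory written into its cell
            h_f, c_f = self.f_cell(frame_feats[t].unsqueeze(0), (h_f, c_f))
            c_f = c_f + torch.tanh(self.fuse(pooled))
        return h_f.squeeze(0)


# toy usage: 8 frames, 5 objects, a scene cut at frames 0 and 4
model = HierarchicalSyncSketch()
out = model(torch.randn(8, 5, 256), torch.randn(8, 256),
            torch.randn(256), torch.tensor([1, 0, 0, 0, 1, 0, 0, 0]))
print(out.shape)  # torch.Size([256])

The hard reset at boundaries is the simplest reading of "alter its time connectivity"; the actual model presumably uses a softer, learned gating.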

Details

Language :
English
ISSN :
0925-2312
Volume :
582
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
176406535
Full Text :
https://doi.org/10.1016/j.neucom.2024.127494