
Hierarchical Modular Network for Video Captioning

Authors:
Ye, Hanhua
Li, Guorong
Qi, Yuankai
Wang, Shuhui
Huang, Qingming
Yang, Ming-Hsuan
Publication Year:
2021

Abstract

Video captioning aims to generate natural language descriptions of video content, where representation learning plays a crucial role. Existing methods are mainly developed within the supervised learning framework via word-by-word comparison of the generated caption against the ground-truth text, without fully exploiting linguistic semantics. In this work, we propose a hierarchical modular network to bridge video representations and linguistic semantics at three levels before generating captions. In particular, the hierarchy is composed of: (I) the entity level, which highlights the objects most likely to be mentioned in captions; (II) the predicate level, which learns actions conditioned on the highlighted objects and is supervised by the predicate in captions; and (III) the sentence level, which learns the global semantic representation and is supervised by the whole caption. Each level is implemented by one module. Extensive experimental results show that the proposed method performs favorably against state-of-the-art models on two widely used benchmarks, achieving CIDEr scores of 104.0% on MSVD and 51.5% on MSR-VTT.

Comment: Accepted by CVPR 2022
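The three-level hierarchy in the abstract can be sketched schematically. The sketch below is a minimal, framework-free illustration only: it uses plain Python lists as stand-in feature vectors and simple averaging in place of learned fusion, and every name (`EntityModule`, `PredicateModule`, `SentenceModule`, `saliency`) is an assumption for illustration, not the paper's actual implementation, which uses learned neural modules supervised by caption entities, predicates, and full sentences.

```python
# Schematic sketch of the three-level hierarchy (entity -> predicate -> sentence).
# Plain lists stand in for feature vectors; averaging stands in for learned fusion.
# All names and operations here are illustrative assumptions, not the paper's model.

def average(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class EntityModule:
    """Entity level: highlight the objects most likely to be mentioned.
    Here 'saliency' scores are given; in the paper this selection is learned."""
    def __init__(self, top_k=2):
        self.top_k = top_k

    def forward(self, object_feats, saliency):
        ranked = sorted(zip(saliency, object_feats), reverse=True)
        return [feat for _, feat in ranked[:self.top_k]]

class PredicateModule:
    """Predicate level: represent actions conditioned on highlighted objects
    (supervised by the caption's predicate in the paper)."""
    def forward(self, entity_feats, motion_feat):
        return average(entity_feats + [motion_feat])

class SentenceModule:
    """Sentence level: global representation (supervised by the whole caption)."""
    def forward(self, entity_feats, predicate_feat, context_feat):
        return average(entity_feats + [predicate_feat, context_feat])

def hierarchical_representation(object_feats, saliency, motion_feat, context_feat):
    """Run the three modules in order and return all three representations."""
    entities = EntityModule(top_k=2).forward(object_feats, saliency)
    predicate = PredicateModule().forward(entities, motion_feat)
    sentence = SentenceModule().forward(entities, predicate, context_feat)
    return entities, predicate, sentence
```

The key design point the sketch mirrors is the conditioning order: entity representations feed the predicate level, and both feed the sentence level, so each module's output is constrained by the level below it.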

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2111.12476
Document Type:
Working Paper