Stacked Multimodal Attention Network for Context-Aware Video Captioning.

Authors :
Zheng, Yi
Zhang, Yuejie
Feng, Rui
Zhang, Tao
Fan, Weiguo
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Jan 2022, Vol. 32 Issue 1, p31-42. 12p.
Publication Year :
2022

Abstract

Recent neural models for video captioning typically employ an attention-based encoder-decoder framework. However, current approaches mainly attend to the motion and object features of the video when generating the caption, overlooking historical information that is potentially useful. In addition, current caption generation models commonly suffer from exposure bias and vanishing gradients. In this paper, we propose a novel video captioning framework, named Stacked Multimodal Attention Network (SMAN). It incorporates visual and textual historical information as additional context features during caption generation, employs a stacked architecture to process the different features progressively, and applies reinforcement learning with a coarse-to-fine training strategy to further improve the generated captions. Both quantitative and qualitative experiments on the benchmark MSVD and MSR-VTT datasets demonstrate the effectiveness and feasibility of our framework. The code is available at https://github.com/zhengyi123456/SMAN.
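
To make the stacked multimodal attention idea concrete, below is a minimal PyTorch sketch of one decoding step that attends to motion, object, and historical context features in sequence, with each stage conditioned on the outputs of the previous ones. All module names, dimensions, the additive attention form, and the exact stacking order are illustrative assumptions, not the authors' method; their actual implementation is in the GitHub repository linked above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention over a set of feature vectors."""
    def __init__(self, query_dim, feat_dim, hidden_dim=256):
        super().__init__()
        self.w_q = nn.Linear(query_dim, hidden_dim)
        self.w_f = nn.Linear(feat_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1)

    def forward(self, query, feats):
        # query: (B, query_dim); feats: (B, N, feat_dim)
        scores = self.v(torch.tanh(self.w_q(query).unsqueeze(1) + self.w_f(feats)))
        weights = F.softmax(scores, dim=1)            # (B, N, 1)
        return (weights * feats).sum(dim=1)           # (B, feat_dim)

class StackedDecoderStep(nn.Module):
    """One decoding step that attends to motion, object, and historical
    context features in turn; a hypothetical reading of the paper's
    "stacked" architecture, not the authors' code."""
    def __init__(self, hidden_dim=512, feat_dim=512, vocab_size=10000):
        super().__init__()
        self.attn_motion = AdditiveAttention(hidden_dim, feat_dim)
        self.attn_object = AdditiveAttention(hidden_dim + feat_dim, feat_dim)
        self.attn_context = AdditiveAttention(hidden_dim + 2 * feat_dim, feat_dim)
        self.rnn = nn.LSTMCell(3 * feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, h, c, motion, objects, context):
        m = self.attn_motion(h, motion)                                # stage 1: motion
        o = self.attn_object(torch.cat([h, m], dim=-1), objects)      # stage 2: objects
        x = self.attn_context(torch.cat([h, m, o], dim=-1), context)  # stage 3: history
        h, c = self.rnn(torch.cat([m, o, x], dim=-1), (h, c))
        return self.out(h), h, c                                       # word logits + new state

# Smoke test with random features (batch of 2, 512-d features throughout).
step = StackedDecoderStep()
h = torch.zeros(2, 512); c = torch.zeros(2, 512)
motion = torch.randn(2, 20, 512)    # e.g., 3D-CNN clip features
objects = torch.randn(2, 36, 512)   # e.g., detected object features
context = torch.randn(2, 5, 512)    # e.g., encoded historical captions/frames
logits, h, c = step(h, c, motion, objects, context)
print(logits.shape)                 # torch.Size([2, 10000])

Conditioning each attention stage on the outputs of earlier stages is one plausible way to realize a stacked multimodal design; the reinforcement learning fine-tuning and coarse-to-fine training described in the abstract are separate training-time choices and are omitted here.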

Details

Language :
English
ISSN :
1051-8215
Volume :
32
Issue :
1
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
154763654
Full Text :
https://doi.org/10.1109/TCSVT.2021.3058626