
VLCap: Vision-Language with Contrastive Learning for Coherent Video Paragraph Captioning

Authors:
Yamazaki, Kashu
Truong, Sang
Vo, Khoa
Kidd, Michael
Rainwater, Chase
Luu, Khoa
Le, Ngan
Publication Year:
2022

Abstract

In this paper, we leverage the human perceptual process, which involves interaction between vision and language, to generate a coherent paragraph description of untrimmed videos. We propose vision-language (VL) features consisting of two modalities: (i) a vision modality to capture the global visual content of the entire scene, and (ii) a language modality to extract descriptions of scene elements, covering both human and non-human objects (e.g., animals, vehicles) as well as visual and non-visual elements (e.g., relations, activities). Furthermore, we propose to train our VLCap model under a contrastive learning VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.

Comment: accepted by The 29th IEEE International Conference on Image Processing (IEEE ICIP) 2022
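
The abstract does not spell out the loss formulation. As a rough illustration only, below is a minimal PyTorch sketch of a symmetric InfoNCE-style contrastive loss between paired vision and language features; the function name, tensor names, and temperature value are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def contrastive_vl_loss(vision_feats, language_feats, temperature=0.07):
        # Illustrative sketch, not the paper's exact loss.
        # vision_feats, language_feats: (B, D) tensors, one matched pair per clip.

        # L2-normalize both modalities so the dot product is cosine similarity.
        v = F.normalize(vision_feats, dim=-1)
        l = F.normalize(language_feats, dim=-1)

        # Pairwise similarity matrix; diagonal entries are the matched pairs.
        logits = v @ l.t() / temperature            # (B, B)
        targets = torch.arange(v.size(0), device=v.device)

        # Pull matched vision-language pairs together and push mismatched
        # pairs apart, symmetrically in both retrieval directions.
        loss_v2l = F.cross_entropy(logits, targets)
        loss_l2v = F.cross_entropy(logits.t(), targets)
        return (loss_v2l + loss_l2v) / 2

In a sketch like this, each video clip's vision feature is contrasted against the language features of every other clip in the batch, so the two modalities are drawn into a shared embedding space before captioning.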

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2206.12972
Document Type:
Working Paper