
Exploring Visual Relationships via Transformer-based Graphs for Enhanced Image Captioning.

Authors :
Li, Jingyu
Mao, Zhendong
Li, Hao
Chen, Weidong
Zhang, Yongdong
Source :
ACM Transactions on Multimedia Computing, Communications & Applications; May 2024, Vol. 20 Issue 5, p1-23, 23p
Publication Year :
2024

Abstract

Image captioning (IC), bringing vision to language, has drawn extensive attention. A crucial aspect of IC is the accurate depiction of visual relations among image objects. Visual relations encompass two primary facets: content relations and structural relations. Content relations, which comprise geometric position content (i.e., distances and sizes) and semantic interaction content (i.e., actions and possessives), unveil the mutual correlations between objects. In contrast, structural relations pertain to the topological connectivity of object regions. Existing Transformer-based methods typically resort to geometric positions to enhance visual relations, yet shallow geometric content alone cannot precisely capture actional content correlations or structural connection relations. In this article, we adopt a comprehensive perspective on the correlations between objects, incorporating both content relations (i.e., geometric and semantic relations) and structural relations, with the aim of generating plausible captions. To achieve this, first, we construct a geometric graph from bounding box features and a semantic graph from the scene graph parser to model the content relations. Innovatively, we construct a topology graph that amalgamates the sparsity characteristics of the geometric and semantic graphs, enabling the representation of image structural relations. Second, we propose a novel unified approach to enrich image relation representations by integrating semantic, geometric, and structural relations into self-attention. Finally, in the language decoding stage, we further leverage the semantic relation as prior knowledge to generate accurate words. Extensive experiments on the MS-COCO dataset demonstrate the effectiveness of our model, improving CIDEr from 128.6% to 136.6%. Code has been released at https://github.com/CrossmodalGroup/ER-SAN/tree/main/VG-Cap. [ABSTRACT FROM AUTHOR]
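To make the abstract's central idea concrete, below is a minimal NumPy sketch of relation-enhanced self-attention: additive biases for the geometric and semantic content relations modify the attention logits, and a topology mask (the union of the two graphs' sparsity patterns) restricts attention to structurally connected regions. This is an illustrative simplification, not the paper's actual formulation; all names (`geo_bias`, `sem_bias`, `topo_mask`) and the single-head, bias-as-scalar design are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_attention(X, Wq, Wk, Wv, geo_bias, sem_bias, topo_mask):
    """Single-head self-attention over N object regions.

    X         : (N, d) region features
    Wq/Wk/Wv  : (d, d) projection matrices
    geo_bias  : (N, N) additive bias from the geometric graph (hypothetical)
    sem_bias  : (N, N) additive bias from the semantic graph (hypothetical)
    topo_mask : (N, N) boolean connectivity from the topology graph
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    logits = (Q @ K.T) / np.sqrt(d)
    # Inject content relations as additive attention biases.
    logits = logits + geo_bias + sem_bias
    # Structural relations: block attention between unconnected regions.
    logits = np.where(topo_mask, logits, -1e9)
    return softmax(logits) @ V

# Toy usage: the topology mask is the union of the two graphs' edges.
np.random.seed(0)
N, d = 4, 8
X = np.random.randn(N, d)
I = np.eye(d)
geo_adj = np.eye(N, dtype=bool)            # placeholder geometric edges
sem_adj = np.ones((N, N), dtype=bool)      # placeholder semantic edges
topo_mask = geo_adj | sem_adj
out = relation_aware_attention(X, I, I, I,
                               np.zeros((N, N)), np.zeros((N, N)), topo_mask)
```

With identity projections and a mask row that permits only self-attention, a region's output collapses to its own value vector, which shows the mask acting as a hard structural constraint.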

Details

Language :
English
ISSN :
1551-6857
Volume :
20
Issue :
5
Database :
Complementary Index
Journal :
ACM Transactions on Multimedia Computing, Communications & Applications
Publication Type :
Academic Journal
Accession number :
175449530
Full Text :
https://doi.org/10.1145/3638558