
LCM-Captioner: A lightweight text-based image captioning method with collaborative mechanism between vision and text.

Authors :
Wang, Qi
Deng, Hongyu
Wu, Xue
Yang, Zhenguo
Liu, Yun
Wang, Yazhou
Hao, Gefei
Source :
Neural Networks. May 2023, Vol. 162, p318-329. 12p.
Publication Year :
2023

Abstract

Text-based image captioning (TextCap) aims to remedy a shortcoming of existing image captioning tasks, which ignore text content when describing images. It instead requires models to recognize and describe images from both visual and textual content, achieving a deeper level of image comprehension. However, existing methods tend to rely on numerous complex network architectures to improve performance; on the one hand, these still fail to adequately model the relationship between vision and text, and on the other hand, they lead to long running times, high memory consumption, and other unfavorable deployment problems. To solve these issues, we have developed a lightweight captioning method with a collaborative mechanism, LCM-Captioner, which balances high efficiency with high performance. First, we propose a feature-lightening transformation for the TextCap task, named TextLighT, which learns rich multimodal representations while mapping features to lower dimensions, thereby reducing memory costs. Next, we present a collaborative attention module for visual and textual information, VTCAM, which facilitates the semantic alignment of multimodal information to uncover important visual objects and textual content. Finally, extensive experiments conducted on the TextCaps dataset demonstrate the effectiveness of our method. Code is available at https://github.com/DengHY258/LCM-Captioner. [ABSTRACT FROM AUTHOR]
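The abstract describes two components at a high level: a dimension-reducing feature transformation (TextLighT) and a vision-text collaborative attention module (VTCAM). The sketch below illustrates these two general ideas in PyTorch; it is not the authors' LCM-Captioner implementation (see the linked GitHub repository for that), and all module names, dimensions, and fusion choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LighteningProjection(nn.Module):
    """Hypothetical sketch of a feature-lightening step: project
    high-dimensional region/OCR features to a smaller shared
    dimension, reducing downstream memory cost."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.LayerNorm(out_dim),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class CollaborativeAttention(nn.Module):
    """Hypothetical sketch of vision-text collaborative attention:
    each modality attends to the other, and each stream keeps a
    residual connection to its own input."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.vis_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_to_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        # Visual tokens query textual tokens, and vice versa.
        vis_attn, _ = self.vis_to_txt(vis, txt, txt)
        txt_attn, _ = self.txt_to_vis(txt, vis, vis)
        return vis + vis_attn, txt + txt_attn


# Example usage (all shapes are assumptions for illustration):
vis = torch.randn(2, 36, 2048)  # e.g. 36 visual region features per image
ocr = torch.randn(2, 20, 768)   # e.g. 20 OCR token features per image
light_vis = LighteningProjection(2048, 256)
light_ocr = LighteningProjection(768, 256)
collab = CollaborativeAttention(256)
vis_out, txt_out = collab(light_vis(vis), light_ocr(ocr))
```

Projecting both modalities into one low-dimensional space before cross-attention is what makes this kind of design cheap: the attention cost scales with the shared dimension, not with the original 2048-dimensional visual features.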

Subjects

Subjects :
*MEMORY
*COST

Details

Language :
English
ISSN :
0893-6080
Volume :
162
Database :
Academic Search Index
Journal :
Neural Networks
Publication Type :
Academic Journal
Accession number :
163229548
Full Text :
https://doi.org/10.1016/j.neunet.2023.03.010