1. Learning consensus-aware semantic knowledge for remote sensing image captioning.
- Author
- Li, Yunpeng; Zhang, Xiangrong; Cheng, Xina; Tang, Xu; Jiao, Licheng
- Subjects
- REMOTE sensing; LANGUAGE awareness; MULTIAGENT systems; NEUROLINGUISTICS
- Abstract
Tremendous progress has been made on the remote sensing image captioning (RSIC) task in recent years, yet some problems remain unresolved: (1) bridging the gap between visual features and semantic concepts, and (2) reasoning about the higher-level relationships between semantic concepts. In this work, we focus on injecting high-level visual-semantic interaction into the RSIC model. First, the semantic concept extractor (SCE), which is end-to-end trainable, precisely captures the semantic concepts contained in the RSIs. In particular, the visual-semantic co-attention (VSCA) is designed to obtain coarse concept-related regions and region-related concepts for multi-modal interaction. Furthermore, we incorporate the two types of attentive vectors, together with semantic-level relational features, into a consensus exploitation (CE) block for learning cross-modal consensus-aware knowledge. Experiments on three benchmark data sets show the superiority of our approach compared with the reference methods.
• We propose an end-to-end trainable network architecture for the remote sensing image captioning task, aiming to extract explicit high-level semantics from remote sensing images, learn discriminative cross-modal linguistic-aware features, and exploit cross-modal alignment.
• To capture the explicit semantic concepts, a semantic concept extractor is designed to generate a series of semantic words. The designed network, built on visual features and semantic concepts, is effective for extracting visual-semantic features.
• To model the visual features and semantic concepts, a visual-semantic co-attention module with two parallel attentive branches is designed; it learns the linguistic awareness between visual features and enhances the region-related semantic contents (a sketch follows this list).
• To facilitate the alignment between cross-modal features, a consensus exploitation block is devised for exploring the multi-modal consensus-aware representations based on the constructed concept correlation graph. [ABSTRACT FROM AUTHOR]
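The two operations named in the abstract, a two-branch visual-semantic co-attention and a consensus-exploitation step over a concept correlation graph, can be pictured with a short sketch. The following is a minimal PyTorch-style illustration, not the authors' implementation: the class names, tensor shapes, the bilinear affinity, and the single graph-propagation step are all assumptions made here for clarity.

```python
# Hypothetical sketch of the components described in the abstract.
# Shapes, module names, and the exact attention form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualSemanticCoAttention(nn.Module):
    """Two parallel attentive branches: concepts attend over regions,
    and regions attend over concepts."""

    def __init__(self, dim):
        super().__init__()
        self.region_proj = nn.Linear(dim, dim)
        self.concept_proj = nn.Linear(dim, dim)

    def forward(self, regions, concepts):
        # regions:  (B, R, D) region/grid visual features
        # concepts: (B, C, D) embeddings of detected semantic words
        q_r = self.region_proj(regions)                 # (B, R, D)
        q_c = self.concept_proj(concepts)               # (B, C, D)
        # Affinity between every concept and every region.
        affinity = torch.bmm(q_c, q_r.transpose(1, 2))  # (B, C, R)
        # Branch 1: concept-related regions (each concept pools regions).
        attn_r = F.softmax(affinity, dim=-1)
        concept_related_regions = torch.bmm(attn_r, regions)   # (B, C, D)
        # Branch 2: region-related concepts (each region pools concepts).
        attn_c = F.softmax(affinity.transpose(1, 2), dim=-1)
        region_related_concepts = torch.bmm(attn_c, concepts)  # (B, R, D)
        return concept_related_regions, region_related_concepts


class ConsensusExploitation(nn.Module):
    """One graph-propagation step over a fixed concept correlation graph,
    fusing the attentive vectors into a consensus-aware representation."""

    def __init__(self, dim, corr_graph):
        super().__init__()
        # corr_graph: (C, C) row-normalised concept co-occurrence matrix.
        self.register_buffer("adj", corr_graph)
        self.gcn = nn.Linear(dim, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, concept_related_regions, concept_feats):
        # Propagate semantic-level relations between concepts.
        relational = F.relu(self.gcn(self.adj @ concept_feats))        # (B, C, D)
        fused = torch.cat([concept_related_regions, relational], dim=-1)
        return self.fuse(fused)                                         # (B, C, D)


# Illustrative usage with placeholder shapes and an identity graph.
B, R, C, D = 2, 49, 20, 512
coatt = VisualSemanticCoAttention(D)
ce = ConsensusExploitation(D, torch.eye(C))
regions, concepts = torch.randn(B, R, D), torch.randn(B, C, D)
crr, rrc = coatt(regions, concepts)
consensus = ce(crr, concepts)  # consensus-aware concept features
```

In this reading, the co-attention output feeds the graph-based block so that region evidence and concept relations are combined before captioning; how the paper actually fuses these signals into the decoder is not reproduced here.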
- Published
- 2024