DiMBERT: Learning Vision-Language Grounded Representations with Disentangled Multimodal-Attention
- Source :
- ACM Transactions on Knowledge Discovery from Data. 16:1-19
- Publication Year :
- 2021
- Publisher :
- Association for Computing Machinery (ACM), 2021.
-
Abstract
- Vision-and-language (V-L) tasks require a system to understand both visual content and natural language, so learning fine-grained joint representations of vision and language (a.k.a. V-L representations) is of paramount importance. Recently, various pre-trained V-L models have been proposed to learn V-L representations and have achieved improved results on many tasks. However, the mainstream models process both vision and language inputs with the same set of attention matrices; as a result, the generated V-L representations are entangled in one common latent space. To tackle this problem, we propose DiMBERT (short for Disentangled Multimodal-Attention BERT), a novel framework that applies separate attention spaces to vision and language, so that the representations of the two modalities can be disentangled explicitly. To enhance the correlation between vision and language in the disentangled spaces, we introduce visual concepts, which represent visual information in textual format, to DiMBERT. In this manner, visual concepts help to bridge the gap between the two modalities. We pre-train DiMBERT on a large number of image-sentence pairs on two tasks: bidirectional language modeling and sequence-to-sequence language modeling. After pre-training, DiMBERT is further fine-tuned for the downstream tasks. Experiments show that DiMBERT sets new state-of-the-art performance on three tasks (over four datasets), including both generation tasks (image captioning and visual storytelling) and classification tasks (referring expressions). The proposed DiM (short for Disentangled Multimodal-Attention) module can be easily incorporated into existing pre-trained V-L models to boost their performance, with up to a 5% increase on the representative task. Finally, we conduct a systematic analysis and demonstrate the effectiveness of our DiM module and the introduced visual concepts.
- Published in ACM TKDD 2022 (ACM Transactions on Knowledge Discovery from Data).
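The core idea the abstract describes — giving vision and language their own attention parameters instead of one shared set — can be illustrated with a toy single-head attention layer. This is a minimal sketch of the general "separate projection matrices per modality" pattern, not the authors' actual DiM module; all names (`DisentangledAttention`, `lang`, `vis`) and the single-head, numpy-only setup are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class DisentangledAttention:
    """Toy single-head attention with a separate set of Q/K/V projection
    matrices for each modality (illustrative sketch, not the DiMBERT code)."""

    def __init__(self, d_model, seed=0):
        rng = np.random.default_rng(seed)
        # One parameter set per modality: this is the "disentangled" part.
        # A shared-space model would use a single set for both modalities.
        self.proj = {
            m: {name: rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                for name in ("Wq", "Wk", "Wv")}
            for m in ("lang", "vis")
        }
        self.d = d_model

    def __call__(self, lang, vis):
        # Project each modality with its own matrices...
        q = np.concatenate([lang @ self.proj["lang"]["Wq"],
                            vis @ self.proj["vis"]["Wq"]], axis=0)
        k = np.concatenate([lang @ self.proj["lang"]["Wk"],
                            vis @ self.proj["vis"]["Wk"]], axis=0)
        v = np.concatenate([lang @ self.proj["lang"]["Wv"],
                            vis @ self.proj["vis"]["Wv"]], axis=0)
        # ...then attend jointly over the concatenated sequence, so the two
        # modalities still interact while living in separate projection spaces.
        scores = q @ k.T / np.sqrt(self.d)
        return softmax(scores) @ v

# Usage: 3 language tokens and 5 vision tokens, 16-dim features.
attn = DisentangledAttention(d_model=16)
out = attn(np.ones((3, 16)), np.zeros((5, 16)))
print(out.shape)  # (8, 16): one output row per input token
```

In this sketch, the "visual concepts" mentioned in the abstract would simply be extra text-side tokens (detected object labels) appended to `lang`, which is one way textual descriptions of visual content can tie the two projection spaces together.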
- Subjects :
- FOS: Computer and information sciences
Computer Science - Computation and Language
Computation and Language (cs.CL)
Computer Science - Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition (cs.CV)
General Computer Science
Computer science
Closed captioning
Human–computer interaction
Language model
Modalities
Natural language
Details
- ISSN :
- 1556-472X and 1556-4681
- Volume :
- 16
- Database :
- OpenAIRE
- Journal :
- ACM Transactions on Knowledge Discovery from Data
- Accession number :
- edsair.doi.dedup.....b3d89fca9ea1dea51ed666b2fb5a4f6d