
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs

Authors :
Caffagni, Davide
Cocchi, Federico
Moratelli, Nicholas
Sarto, Sara
Cornia, Marcella
Baraldi, Lorenzo
Cucchiara, Rita
Publication Year :
2024

Abstract

Multimodal LLMs are the natural evolution of LLMs, enlarging their capabilities to work beyond the purely textual modality. While research is being carried out to design novel architectures and vision-and-language adapters, in this paper we concentrate on endowing such models with the capability of answering questions that require external knowledge. Our approach, termed Wiki-LLaVA, integrates an external knowledge source of multimodal documents, accessed through a hierarchical retrieval pipeline. Relevant passages retrieved from this external source are employed as additional context for the LLM, augmenting the effectiveness and precision of the generated dialogues. We conduct extensive experiments on datasets tailored for visual question answering with external data and demonstrate the appropriateness of our approach.

Comment: CVPR 2024 Workshop on What is Next in Multimodal Foundation Models
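The abstract's hierarchical retrieval pipeline can be illustrated with a minimal sketch: first rank whole documents against the query embedding, then rank individual passages only within the top-scoring documents, and return those passages as additional context. This is a toy illustration under assumed data structures (`embedding` vectors and `passages` lists), not the authors' actual implementation, which would use learned multimodal embeddings rather than hand-set vectors.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def hierarchical_retrieve(query_vec, documents, top_k_docs=2, top_k_passages=2):
    """Two-stage (hierarchical) retrieval sketch.

    Stage 1: rank whole documents by similarity of a document-level embedding.
    Stage 2: rank passages inside the selected documents only.
    Returns the texts of the best passages, to be used as LLM context.
    """
    # Stage 1: keep only the top-k most relevant documents.
    ranked_docs = sorted(
        documents,
        key=lambda d: cosine(query_vec, d["embedding"]),
        reverse=True,
    )[:top_k_docs]

    # Stage 2: score individual passages within those documents.
    candidates = [
        (p["text"], cosine(query_vec, p["embedding"]))
        for d in ranked_docs
        for p in d["passages"]
    ]
    candidates.sort(key=lambda t: t[1], reverse=True)
    return [text for text, _ in candidates[:top_k_passages]]
```

The retrieved passage texts would then be prepended to the question as extra context in the LLM prompt; restricting stage 2 to the top documents keeps passage scoring cheap even when the knowledge base is large.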

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.15406
Document Type :
Working Paper