
Enhancing Multimodal Large Language Models with Vision Detection Models: An Empirical Study

Authors :
Jiao, Qirui
Chen, Daoyuan
Huang, Yilun
Li, Yaliang
Shen, Ying
Publication Year :
2024

Abstract

Despite the impressive capabilities of Multimodal Large Language Models (MLLMs) in integrating text and image modalities, challenges remain in accurately interpreting detailed visual elements. This paper presents an empirical study on enhancing MLLMs with state-of-the-art (SOTA) object detection and Optical Character Recognition (OCR) models to improve fine-grained understanding and reduce hallucination in responses. We investigate the embedding-based infusion of textual detection information, the impact of such infusion on MLLMs' original abilities, and the interchangeability of detection models. We conduct systematic and extensive experiments with representative models such as LLaVA-1.5, DINO, PaddleOCRv2, and Grounding DINO, revealing that our simple yet general approach not only refines MLLMs' performance in fine-grained visual tasks but also maintains their original strengths. Notably, the enhanced LLaVA-1.5 outperforms its original 7B/13B models on all 10 benchmarks, achieving an improvement of up to 12.5% on the normalized average score. We release our codes to facilitate further exploration into the fine-grained multimodal capabilities of MLLMs.

Comment: 25 pages, 18 tables, 7 figures
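To illustrate the general idea the abstract describes, the minimal sketch below converts object-detection and OCR outputs into plain text and prepends them to the question sent to an MLLM. This is not the authors' implementation: the prompt wording, the placeholder detection results, and the helper function names here are all assumptions for illustration only.

```python
# Illustrative sketch only: render detector boxes and OCR strings as text,
# then infuse them into the prompt given to the MLLM alongside the image.
# All names and formats below are hypothetical, not taken from the paper.

def format_detections(boxes, ocr_lines):
    """Render detection boxes and OCR strings as plain text for the prompt."""
    det_text = "; ".join(
        f"{label} at ({x1}, {y1}, {x2}, {y2})" for label, (x1, y1, x2, y2) in boxes
    )
    ocr_text = "; ".join(ocr_lines)
    return f"Detected objects: {det_text}\nRecognized text: {ocr_text}"


def build_prompt(question, boxes, ocr_lines):
    """Prepend the textual detection information to the user question."""
    return f"{format_detections(boxes, ocr_lines)}\n\nQuestion: {question}"


if __name__ == "__main__":
    # Placeholder outputs standing in for a detector (e.g. Grounding DINO)
    # and an OCR model (e.g. PaddleOCRv2).
    boxes = [("traffic sign", (34, 50, 120, 140)), ("car", (200, 80, 460, 300))]
    ocr_lines = ["SPEED LIMIT 40"]
    prompt = build_prompt("What does the sign say?", boxes, ocr_lines)
    print(prompt)  # This text would accompany the image when querying the MLLM.
```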

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.17981
Document Type :
Working Paper