
MultiChartQA: Benchmarking Vision-Language Models on Multi-Chart Problems

Authors:
Zhu, Zifeng
Jia, Mengzhao
Zhang, Zhihan
Li, Lang
Jiang, Meng
Publication Year:
2024

Abstract

Multimodal Large Language Models (MLLMs) have demonstrated impressive abilities across various tasks, including visual question answering and chart comprehension, yet existing benchmarks for chart-related tasks fall short in capturing the complexity of real-world multi-chart scenarios. Current benchmarks primarily focus on single-chart tasks, neglecting the multi-hop reasoning required to extract and integrate information from multiple charts, which is essential in practical applications. To fill this gap, we introduce MultiChartQA, a benchmark that evaluates MLLMs' capabilities in four key areas: direct question answering, parallel question answering, comparative reasoning, and sequential reasoning. Our evaluation of a wide range of MLLMs reveals significant performance gaps compared to humans. These results highlight the challenges in multi-chart comprehension and the potential of MultiChartQA to drive advancements in this field. Our code and data are available at https://github.com/Zivenzhu/Multi-chart-QA

Comment: 18 pages, 9 figures
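The abstract names four task types (direct, parallel, comparative, and sequential question answering) that span multiple charts. As a rough illustration only, a single benchmark item in such a setting might look like the sketch below; the field names and structure are hypothetical assumptions for clarity, not the actual MultiChartQA data schema.

```python
# Hypothetical sketch of a multi-chart QA benchmark item.
# Field names are illustrative assumptions, not the MultiChartQA schema.
from dataclasses import dataclass
from typing import List

@dataclass
class MultiChartItem:
    charts: List[str]   # paths to the chart images the question spans
    task_type: str      # "direct" | "parallel" | "comparative" | "sequential"
    question: str
    answer: str

# Example: a comparative question that requires reading a value
# from each of two charts and integrating them.
item = MultiChartItem(
    charts=["sales_2022.png", "sales_2023.png"],
    task_type="comparative",
    question="Which year had the higher Q3 revenue?",
    answer="2023",
)
print(item.task_type, item.question)
```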

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.14179
Document Type:
Working Paper