1. MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI
- Authors
Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, Jiayi Lei, Quanfeng Lu, Runjian Chen, Peng Xu, Renrui Zhang, Haozhe Zhang, Peng Gao, Yali Wang, Yu Qiao, Ping Luo, Kaipeng Zhang, and Wenqi Shao
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Large Vision-Language Models (LVLMs) have made significant strides in general-purpose multimodal applications such as visual dialogue and embodied navigation. However, existing multimodal evaluation benchmarks cover a limited number of multimodal tasks that test only rudimentary capabilities, falling short in tracking LVLM development. In this study, we present MMT-Bench, a comprehensive benchmark designed to assess LVLMs across massive multimodal tasks requiring expert knowledge and deliberate visual recognition, localization, reasoning, and planning. MMT-Bench comprises 31,325 meticulously curated multi-choice visual questions drawn from various multimodal scenarios such as vehicle driving and embodied navigation, covering 32 core meta-tasks and 162 subtasks in multimodal understanding. Owing to its extensive task coverage, MMT-Bench enables the evaluation of LVLMs using a task map, facilitating the discovery of in- and out-of-domain tasks. Evaluation results involving 30 LVLMs, including the proprietary GPT-4V and GeminiProVision and the open-source InternVL-Chat, underscore the significant challenges posed by MMT-Bench. We anticipate that MMT-Bench will inspire the community to develop next-generation multimodal foundation models aimed at achieving general-purpose multimodal intelligence.
- Comment
77 pages, 41 figures
- Published
2024
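
A minimal sketch of how scoring a multi-choice VQA benchmark like MMT-Bench might look, assuming a hypothetical JSONL layout with `question`, `options`, `image`, `answer`, and `subtask` fields; the file format, field names, and the `predict` stub are illustrative assumptions, not the paper's released API.

```python
# Hypothetical sketch: per-subtask accuracy on a multi-choice VQA benchmark.
# Field names and the predict() stub are assumptions for illustration only.
import json
from collections import defaultdict

def predict(question: str, options: list[str], image_path: str) -> str:
    """Placeholder for an LVLM call; should return one option letter, e.g. 'A'."""
    raise NotImplementedError("plug in your model here")

def evaluate(path: str) -> dict[str, float]:
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    with open(path) as f:
        for line in f:
            ex = json.loads(line)  # one multi-choice visual question
            pred = predict(ex["question"], ex["options"], ex["image"])
            total[ex["subtask"]] += 1
            if pred == ex["answer"]:
                correct[ex["subtask"]] += 1
    # accuracy per subtask; macro-averaging over the 162 subtasks would follow
    return {t: correct[t] / total[t] for t in total}
```

Reporting accuracy per subtask rather than a single pooled number matters for a benchmark this broad: it is what makes task-map-style analyses of in- and out-of-domain performance possible.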