
From Specific-MLLM to Omni-MLLM: A Survey about the MLLMs aligned with Multi-Modality

Authors:
Jiang, Shixin
Liang, Jiafeng
Liu, Ming
Qin, Bing
Publication Year:
2024

Abstract

The evolution from the Specific-MLLM, which excels at single-modal tasks, to the Omni-MLLM, which extends to a general range of modalities, aims to achieve the understanding and generation of multimodal information. Omni-MLLM treats the features of different modalities as different "foreign languages," enabling cross-modal interaction and understanding within a unified space. To promote the advancement of related research, we have compiled 47 relevant papers to give the community a comprehensive introduction to Omni-MLLM. We first explain the four core components of Omni-MLLM that enable unified modeling and interaction across modalities. Next, we introduce the effective integration achieved through "alignment pretraining" and "instruction fine-tuning," and discuss open-source datasets and the testing of interaction capabilities. Finally, we summarize the main challenges facing current Omni-MLLM and outline future directions.

Comment: 13 pages
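As a rough illustration of the "foreign languages" idea described in the abstract, the sketch below projects features from different modality encoders into a shared LLM embedding space so they can be concatenated with text tokens. The class name OmniProjector, all dimensions, and the PyTorch framing are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class OmniProjector(nn.Module):
        """Hypothetical sketch: maps one modality's encoder features into
        the LLM's token-embedding space, so the LLM can read them as
        "foreign language" tokens. Dimensions are illustrative only."""
        def __init__(self, feat_dim: int, llm_dim: int):
            super().__init__()
            self.proj = nn.Linear(feat_dim, llm_dim)

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (batch, num_tokens, feat_dim) -> (batch, num_tokens, llm_dim)
            return self.proj(feats)

    # Assumed per-modality feature dims, e.g. from frozen pretrained encoders.
    llm_dim = 4096
    projectors = {
        "image": OmniProjector(1024, llm_dim),
        "audio": OmniProjector(768, llm_dim),
    }

    # Random tensors standing in for real modality-encoder outputs.
    image_feats = torch.randn(1, 256, 1024)
    audio_feats = torch.randn(1, 128, 768)
    text_embeds = torch.randn(1, 32, llm_dim)  # embedded prompt tokens

    # Unified sequence: projected modality tokens plus text tokens, which
    # would then be fed to the LLM backbone for cross-modal interaction.
    seq = torch.cat([
        projectors["image"](image_feats),
        projectors["audio"](audio_feats),
        text_embeds,
    ], dim=1)
    print(seq.shape)  # torch.Size([1, 416, 4096])

In this framing, "alignment pretraining" would train the projectors so the modality tokens land in regions of the embedding space the LLM already understands, before instruction fine-tuning on cross-modal tasks.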

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2412.11694
Document Type: Working Paper