
Enhancing Cross-Modal Fine-Tuning with Gradually Intermediate Modality Generation

Authors:
Cai, Lincan
Li, Shuang
Ma, Wenxuan
Kang, Jingxuan
Xie, Binhui
Sun, Zixun
Zhu, Chengwei
Publication Year:
2024

Abstract

Large-scale pretrained models have proven immensely valuable in handling data-intensive modalities like text and image. However, fine-tuning these models for specialized modalities, such as protein sequences and cosmic rays, is challenging due to the significant modality discrepancy and the scarcity of labeled data. In this paper, we propose an end-to-end method, PaRe, to enhance cross-modal fine-tuning by transferring a large-scale pretrained model to various target modalities. PaRe employs a gating mechanism to select key patches from both source and target data. Through a modality-agnostic Patch Replacement scheme, these patches are preserved and combined to construct data-rich intermediate modalities ranging from easy to hard. By generating intermediate modalities gradually, we not only bridge the modality gap, improving the stability and transferability of cross-modal fine-tuning, but also address the challenge of limited target-modality data by leveraging the enriched intermediate-modality data. Compared with hand-designed, general-purpose, task-specific, and state-of-the-art cross-modal fine-tuning approaches, PaRe achieves superior performance across three challenging benchmarks encompassing more than ten modalities.
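The core patch-replacement idea can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the function name, the use of precomputed per-patch gating scores, and the swap policy (replace the least informative target patches with the most informative source patches at a mixing ratio that controls intermediate-modality difficulty) are all assumptions based on the abstract.

```python
import torch

def patch_replace(src_patches, tgt_patches, src_scores, tgt_scores, ratio):
    """Hypothetical sketch of PaRe-style patch replacement.

    src_patches, tgt_patches: (B, N, D) patch-token tensors from the
        source and target modalities (assumed already tokenized).
    src_scores, tgt_scores: (B, N) gating scores, higher = more informative.
    ratio: fraction of target patches to replace; larger values yield
        intermediate modalities closer to the source (assumed semantics).
    """
    B, N, _ = tgt_patches.shape
    k = int(ratio * N)  # number of patches to swap per sample
    mixed = tgt_patches.clone()
    if k == 0:
        return mixed
    # Indices of the k least informative target patches (to be overwritten).
    tgt_idx = tgt_scores.topk(k, dim=1, largest=False).indices  # (B, k)
    # Indices of the k most informative source patches (to be kept).
    src_idx = src_scores.topk(k, dim=1, largest=True).indices   # (B, k)
    batch = torch.arange(B).unsqueeze(1)                        # (B, 1)
    mixed[batch, tgt_idx] = src_patches[batch, src_idx]
    return mixed
```

Sweeping `ratio` from high to low would generate the easy-to-hard curriculum of intermediate modalities the abstract describes, with `ratio = 0` recovering the pure target data.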

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.09003
Document Type:
Working Paper