
VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models

Authors:
Hou, Haowen
Zeng, Peigen
Ma, Fei
Yu, Fei Richard
Publication Year:
2024

Abstract

Visual Language Models (VLMs) have rapidly progressed with the recent success of large language models. However, there have been few attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this study, we introduce VisualRWKV, the first application of a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose a data-dependent recurrence and sandwich prompts to enhance our modeling capabilities, along with a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves competitive performance compared to Transformer-based models such as LLaVA-1.5 on various benchmarks. To facilitate further research and analysis, we have made the checkpoints and the associated code publicly accessible at https://github.com/howard-hou/VisualRWKV.

Comment: 18 pages, 14 tables, 6 figures
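For orientation, the sketch below illustrates the two ideas named in the abstract: sandwich prompting (image embeddings placed between two text segments) and 2D image scanning (traversing the visual feature grid in several spatial orders before feeding a 1D recurrence). This is a minimal, hedged sketch, not the authors' implementation; the function names, tensor shapes, and the averaging merge are assumptions made for illustration only.

```python
import torch

def sandwich_prompt(prefix_emb, image_emb, suffix_emb):
    """Sandwich prompting (sketch): image embeddings are sandwiched between two
    text segments, so the recurrent state sees instruction context both before
    and after the visual tokens. All names here are hypothetical."""
    return torch.cat([prefix_emb, image_emb, suffix_emb], dim=0)

def scan_2d(image_feats, height, width):
    """2D scanning (sketch): flatten the H x W feature grid along multiple
    directions (row-major forward/backward and column-major) and merge them,
    exposing a 1D recurrence to several spatial orderings of the image."""
    grid = image_feats.view(height, width, -1)
    row_fwd = grid.reshape(height * width, -1)                    # raster order
    row_bwd = grid.flip(dims=(0, 1)).reshape(height * width, -1)  # reversed raster order
    col_fwd = grid.transpose(0, 1).reshape(height * width, -1)    # column-major order
    return (row_fwd + row_bwd + col_fwd) / 3.0                    # simple average merge (assumption)
```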

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2406.13362
Document Type:
Working Paper