
Multimodal Image Fusion based on Hybrid CNN-Transformer and Non-local Cross-modal Attention

Authors:
Yuan, Yu
Wu, Jiaqi
Jing, Zhongliang
Leung, Henry
Pan, Han
Publication Year:
2022

Abstract

Fusing images captured by heterogeneous sensors enriches the available information and improves imaging quality. In this article, we present a hybrid model consisting of a convolutional encoder and a Transformer-based decoder for fusing multimodal images. In the encoder, a non-local cross-modal attention block is proposed to capture both the local and the global dependencies of the multiple source images. A branch fusion module is designed to adaptively fuse the features of the two branches. We embed a Transformer module with linear complexity in the decoder to enhance the reconstruction capability of the proposed network. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed method through comparison with existing state-of-the-art fusion models. The source code of our work is available at https://github.com/pandayuanyu/HCFusion.
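The abstract describes the pipeline at a high level: per-modality convolutional encoders, a non-local cross-modal attention block, an adaptive branch fusion module, and a Transformer-augmented decoder. The PyTorch sketch below is a minimal illustration of that pipeline, not the authors' HCFusion implementation (see the GitHub link above). All module names, layer sizes, and the gating scheme are assumptions, and standard quadratic non-local attention plus nn.TransformerEncoderLayer stand in for the paper's linear-complexity Transformer module.

```python
import torch
import torch.nn as nn

class NonLocalCrossModalAttention(nn.Module):
    """Non-local attention with queries from one modality and keys/values
    from the other, so each branch can borrow global structure from its
    counterpart. (Standard quadratic attention; the paper's block may differ.)"""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, x_a, x_b):
        b, c, h, w = x_a.shape
        q = self.q(x_a).flatten(2).transpose(1, 2)        # (B, HW, C)
        k = self.k(x_b).flatten(2)                        # (B, C, HW)
        v = self.v(x_b).flatten(2).transpose(1, 2)        # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x_a + out                                  # residual connection

class BranchFusion(nn.Module):
    """Adaptive fusion of the two branch features via a learned spatial gate
    (an assumed gating scheme, standing in for the paper's fusion module)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f_a, f_b):
        w = self.gate(torch.cat([f_a, f_b], dim=1))
        return w * f_a + (1 - w) * f_b

class HybridFusionNet(nn.Module):
    """CNN encoder per modality -> cross-modal attention -> adaptive fusion
    -> Transformer-augmented decoder reconstructing the fused image."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.enc_a, self.enc_b = encoder(), encoder()
        self.cross_ab = NonLocalCrossModalAttention(feat)
        self.cross_ba = NonLocalCrossModalAttention(feat)
        self.fuse = BranchFusion(feat)
        # Stand-in for the paper's linear-complexity Transformer module.
        self.transformer = nn.TransformerEncoderLayer(
            d_model=feat, nhead=4, dim_feedforward=4 * feat, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, in_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img_a, img_b):
        f_a, f_b = self.enc_a(img_a), self.enc_b(img_b)
        f_a2 = self.cross_ab(f_a, f_b)   # branch A attends to branch B
        f_b2 = self.cross_ba(f_b, f_a)   # branch B attends to branch A
        fused = self.fuse(f_a2, f_b2)
        b, c, h, w = fused.shape
        tokens = fused.flatten(2).transpose(1, 2)          # (B, HW, C)
        tokens = self.transformer(tokens)
        fused = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(fused)

if __name__ == "__main__":
    net = HybridFusionNet()
    ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
    print(net(ir, vis).shape)  # torch.Size([1, 1, 64, 64])
```

The residual connection in the attention block lets each branch keep its own features while incorporating complementary structure from the other modality, which matches the abstract's goal of capturing both local and global dependencies before the branches are fused.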

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2210.09847
Document Type:
Working Paper