
MIRFuse: an infrared and visible image fusion model based on disentanglement representation via mutual information regularization.

Authors :
Zhou, Shiliang
Kong, Jun
Jiang, Min
Zhuang, Danfeng
Source :
Journal of Electronic Imaging. Mar 2024, Vol. 33, Issue 2, p023005. 20p.
Publication Year :
2024

Abstract

In the domain of image fusion, integrating infrared and visible images provides a more complete scene description by merging the unique strengths of each modality. Existing methods struggle to handle the differences between modalities, which stem from the inherent entanglement of scene-common information and modality-specific information within each modality. In response, we propose MIRFuse, a model for infrared and visible image fusion based on disentanglement representation via mutual information regularization. The disentanglement process, in which scene-common information is separated from modality-specific information, forms the basis for identifying both shared and exclusive features. First, mutual information maximization is used as a consistency constraint, enabling the scene-common encoders to effectively extract shared features. Second, the Hilbert–Schmidt independence criterion is employed as a heterogeneity constraint, encouraging the modality-specific encoders to extract exclusive features. Finally, the shared and exclusive features are combined using various fusion strategies to produce a fused image. The resulting fused image provides a comprehensive representation of the scene, allowing more effective use of information from both modalities. Our experiments validate the effectiveness of our method and its advantages over existing approaches. [ABSTRACT FROM AUTHOR]
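The heterogeneity constraint mentioned in the abstract relies on the Hilbert–Schmidt independence criterion (HSIC), a kernel-based measure of statistical dependence. The sketch below is not the authors' code; it is a minimal illustration of how an empirical HSIC penalty between scene-common and modality-specific feature batches could be computed. The feature shapes, the Gaussian kernel, the bandwidth heuristic, and the variable names are all assumptions for illustration.

```python
# Minimal HSIC sketch (illustrative only, not the MIRFuse implementation).
import numpy as np

def gaussian_kernel(x, sigma=None):
    """Pairwise Gaussian (RBF) kernel matrix for a batch of feature vectors."""
    sq_dists = (np.sum(x**2, axis=1, keepdims=True)
                + np.sum(x**2, axis=1)
                - 2.0 * x @ x.T)
    if sigma is None:  # median heuristic for the kernel bandwidth (assumption)
        sigma = np.sqrt(0.5 * np.median(sq_dists[sq_dists > 0]))
    return np.exp(-sq_dists / (2.0 * sigma**2))

def hsic(x, y):
    """Biased empirical HSIC estimate between feature batches x and y."""
    n = x.shape[0]
    K, L = gaussian_kernel(x), gaussian_kernel(y)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Hypothetical usage: shared (scene-common) and exclusive (modality-specific)
# features from one branch; a smaller HSIC value means the two representations
# are closer to statistically independent, which is what a heterogeneity
# constraint encourages.
shared_feats = np.random.randn(32, 128)
specific_feats = np.random.randn(32, 128)
print(f"HSIC heterogeneity penalty: {hsic(shared_feats, specific_feats):.4f}")
```

In a training loop, such a term would typically be added to the total loss so that minimizing it pushes the modality-specific encoder away from the information already captured by the scene-common encoder; the exact weighting and kernel choice here are assumptions, not details from the paper.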

Details

Language :
English
ISSN :
1017-9909
Volume :
33
Issue :
2
Database :
Academic Search Index
Journal :
Journal of Electronic Imaging
Publication Type :
Academic Journal
Accession number :
177469092
Full Text :
https://doi.org/10.1117/1.JEI.33.2.023005