AMFuse: Add–Multiply-Based Cross-Modal Fusion Network for Multi-Spectral Semantic Segmentation.
- Source :
- Remote Sensing. Jul 2022, Vol. 14, Issue 14. 19 p.
- Publication Year :
- 2022
Abstract
- Multi-spectral semantic segmentation has shown great advantages under poor illumination conditions, especially for remote scene understanding in autonomous vehicles, since the thermal image can provide complementary information to the RGB image. However, methods to fuse the information from RGB and thermal images remain under-explored. In this paper, we propose a simple but effective module, add–multiply fusion (AMFuse), for fusing RGB and thermal information, consisting of two simple math operations: addition and multiplication. The addition operation focuses on extracting cross-modal complementary features, while the multiplication operation concentrates on cross-modal common features. Moreover, attention and atrous spatial pyramid pooling (ASPP) modules are incorporated into the proposed AMFuse modules to enhance multi-scale context information. Finally, in a UNet-style encoder–decoder framework, a ResNet model is adopted as the encoder. In the decoder, the multi-scale information obtained from the AMFuse modules is hierarchically merged layer by layer to restore the feature-map resolution for semantic segmentation. Experiments on RGBT multi-spectral semantic segmentation and salient object detection demonstrate the effectiveness of the proposed AMFuse module for fusing RGB and thermal information. [ABSTRACT FROM AUTHOR]
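The core fusion idea in the abstract — addition for cross-modal complementary features, multiplication for cross-modal common features — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the input shapes, the way the two branches are combined, and the simple channel-attention weighting are all assumptions made for illustration; the paper's actual module also incorporates attention and ASPP components not reproduced here.

```python
import numpy as np

def amfuse_sketch(f_rgb, f_th):
    """Hypothetical add-multiply fusion of two feature maps (C x H x W).

    Element-wise addition gathers complementary information from both
    modalities; element-wise multiplication emphasizes features the two
    modalities have in common. The attention step below is a toy
    stand-in for the attention module described in the paper.
    """
    f_add = f_rgb + f_th            # cross-modal complementary features
    f_mul = f_rgb * f_th            # cross-modal common features
    fused = f_add + f_mul           # assumed combination of the two branches

    # Toy channel attention: sigmoid of per-channel global average
    w = 1.0 / (1.0 + np.exp(-fused.mean(axis=(1, 2), keepdims=True)))
    return fused * w

# Example: fuse two random 8-channel, 16x16 feature maps
rgb = np.random.rand(8, 16, 16)
th = np.random.rand(8, 16, 16)
out = amfuse_sketch(rgb, th)
print(out.shape)  # (8, 16, 16)
```

The fused output keeps the input resolution, so in a UNet-style decoder it could be merged with same-scale features from other encoder stages.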
- Subjects :
- *THERMOGRAPHY
*MULTISPECTRAL imaging
*AUTONOMOUS vehicles
Details
- Language :
- English
- ISSN :
- 2072-4292
- Volume :
- 14
- Issue :
- 14
- Database :
- Academic Search Index
- Journal :
- Remote Sensing
- Publication Type :
- Academic Journal
- Accession number :
- 158297649
- Full Text :
- https://doi.org/10.3390/rs14143368