Progressive fusion learning: A multimodal joint segmentation framework for building extraction from optical and SAR images.
- Source :
- ISPRS Journal of Photogrammetry & Remote Sensing. Jan 2023, Vol. 195, p178-191. 14p.
- Publication Year :
- 2023
Abstract
- Automatic, high-precision extraction of buildings from remote sensing images has a wide range of important applications. Optical and synthetic aperture radar (SAR) images are typical multimodal remote sensing data acquired through different imaging mechanisms. To bridge the large gap between them and achieve high-precision joint semantic segmentation, this study proposes a progressive fusion learning framework. The framework explicitly extracts the shared features (that is, modal invariants) of multimodal images as the information medium and realizes information fusion through multistage learning. Based on this framework, we design a multistage multimodal fusion network (MMFNet), which uses phase as a modal invariant to jointly exploit optical and SAR images for high-precision building extraction. We conducted experiments on the Multi-Sensor All-Weather Mapping aerial dataset and the WHU-OPT-SAR_WuHan satellite dataset. The results show that MMFNet extracts buildings effectively and recovers building edge details more accurately, improving by 0.2% to 9.5% over other multimodal joint segmentation methods. [ABSTRACT FROM AUTHOR]
Details
- Language :
- English
- ISSN :
- 0924-2716
- Volume :
- 195
- Database :
- Academic Search Index
- Journal :
- ISPRS Journal of Photogrammetry & Remote Sensing
- Publication Type :
- Academic Journal
- Accession number :
- 161277559
- Full Text :
- https://doi.org/10.1016/j.isprsjprs.2022.11.015