Refining multi-modal remote sensing image matching with repetitive feature optimization
- Authors
Yifan Liao, Ke Xi, Huijin Fu, Lai Wei, Shuo Li, Qiang Xiong, Qi Chen, Pengjie Tao, and Tao Ke
- Subjects
Multi-modal remote sensing images, Matching refinement, Repetitive feature, Least-squares matching, Nonlinear radiometric distortion, Physical geography, GB3-5030, Environmental sciences, GE1-350
- Abstract
Existing methods for matching multi-modal remote sensing images (MRSIs) demonstrate considerable adaptability. However, high-precision matching for rectification remains challenging: the differing imaging mechanisms of cross-modal remote sensing images produce numerous non-repeatable detail feature points, and the common assumption of a linear transformation between images conflicts with the complex distortions present in remote sensing imagery, limiting matching accuracy. This paper aims to improve matching accuracy by implementing a detail-texture removal strategy that effectively isolates repeatable structural features. We then construct a radiation-invariant similarity function within a generalized gradient framework for least-squares matching, specifically designed to mitigate nonlinear geometric and radiometric distortions across MRSIs. Comprehensive qualitative and quantitative evaluations on multiple datasets, using a large number of manually measured checkpoints, demonstrate that our method significantly enhances matching accuracy for image data spanning multiple modality combinations and outperforms current state-of-the-art solutions. Rectification experiments with WorldView and TanDEM-X images further validate the method, achieving a matching accuracy of 1.05 pixels and indicating its practical utility and generalization capacity. Experiment-related data and code will be provided at https://github.com/LiaoYF001/refinement/.
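The least-squares matching the abstract builds on can be illustrated with a generic sketch. The snippet below is not the authors' radiation-invariant generalized-gradient formulation; it is a minimal classical least-squares matching (Gauss-Newton) that estimates a translation between two patches together with a linear gain/offset radiometric model. All function names and parameters here are illustrative assumptions, not from the paper:

```python
import numpy as np

def bilinear(img, x, y):
    # Sample img at float coordinates (x: column, y: row) by bilinear interpolation.
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def lsm_translation(template, search, x0, y0, iters=20):
    """Estimate (dx, dy, gain, offset) so that
    template ~= gain * search(x0 + dx, y0 + dy) + offset,
    solved iteratively by Gauss-Newton least squares."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx = dy = 0.0
    gain, offset = 1.0, 0.0
    for _ in range(iters):
        gx, gy = x0 + xs + dx, y0 + ys + dy
        g = bilinear(search, gx, gy)
        # Image gradients of the resampled patch (half-pixel central differences).
        gxd = bilinear(search, gx + 0.5, gy) - bilinear(search, gx - 0.5, gy)
        gyd = bilinear(search, gx, gy + 0.5) - bilinear(search, gx, gy - 0.5)
        # Residual of the current radiometric + geometric model.
        r = (template - (gain * g + offset)).ravel()
        # Jacobian columns: d/d(dx), d/d(dy), d/d(gain), d/d(offset).
        A = np.stack([gain * gxd.ravel(), gain * gyd.ravel(),
                      g.ravel(), np.ones(g.size)], axis=1)
        upd, *_ = np.linalg.lstsq(A, r, rcond=None)
        dx += upd[0]; dy += upd[1]; gain += upd[2]; offset += upd[3]
        if np.hypot(upd[0], upd[1]) < 1e-4:
            break
    return dx, dy, gain, offset
```

Note that this linear gain/offset model is exactly the assumption the paper argues breaks down between modalities with nonlinear radiometric distortion, which motivates replacing it with a radiation-invariant similarity function built on gradient information.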
- Published
2024