6 results for "heterogeneous remote sensing image"
Search Results
2. Automatic Registration of Optical and SAR Images Based on an Improved Nonlinear Scale Space (非线性尺度空间改进的光学与 SAR 影像自动配准).
- Author
- 姚国标, 张成成, 龚健雅, 张现军, and 李兵
- Subjects
- *SYNTHETIC apertures, *BURGERS' equation, *SYNTHETIC aperture radar, *OPTICAL images, *REMOTE sensing, *EUCLIDEAN distance, *RADIOMETRY
- Abstract
Objectives: The matching of heterogeneous remote sensing images is difficult because of nonlinear radiometric distortions. Methods: This paper proposes a nonlinear scale-space enhanced automatic matching method for optical and synthetic aperture radar (SAR) images. First, by modifying the calculation of color pixel contrast, the contrast information of the images is effectively enhanced, which improves the repeatability of corresponding points between optical and SAR images. Second, a nonlinear diffusion equation is employed to describe the image diffusion characteristics, avoiding the boundary blurring of the Gaussian scale space. Third, the multi-scale ratio of exponentially weighted averages (ROEWA) operator and the Sobel operator are used to compute the gradient information of the SAR and optical images, respectively, followed by the stable extraction of Harris feature points. Finally, a log-polar descriptor framework is employed to compute highly discriminative feature vectors, and outliers are eliminated using the Euclidean distance and the fast sample consensus algorithm. Results: Experimental results demonstrate that the proposed method obtains more matching points and achieves higher matching accuracy than other classic methods. Conclusions: The proposed method realizes automatic and robust matching of SAR and optical images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
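To make the scale-space idea in the abstract above concrete, the following Python snippet sketches a Perona-Malik-style nonlinear diffusion step, the standard way to build a nonlinear scale space that smooths within regions while preserving edges. The conductance function, kappa, step size, and iteration count are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def nonlinear_diffusion(img, n_iter=10, kappa=0.1, step=0.15):
    """One way to build a nonlinear scale-space level: edge-preserving diffusion."""
    u = img.astype(np.float64)

    def g(d):
        # conductance: close to 1 in flat areas, close to 0 across strong edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u    # finite differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))
    scale_space = [image]
    for _ in range(3):                      # three successively smoother levels
        scale_space.append(nonlinear_diffusion(scale_space[-1]))
```

Unlike Gaussian smoothing, the conductance term shuts diffusion down across strong intensity steps, which is why object boundaries stay sharp from one scale level to the next.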
3. A Heterogeneous Remote Sensing Image Matching Method for Urban Areas With Complex Terrain Based on 3D Spatial Relationship Constraints
- Author
- Yao Zheng, Shuwen Yang, Yikun Li, Jinsha Wu, Zhuang Shi, and Ruixiong Kou
- Subjects
- 3D spatial relationship, height estimation, heterogeneous remote sensing image, image registration, phase consistency, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Matching of heterogeneous high-resolution remote sensing images is hampered by differences in sensor type, imaging angle, height, and imaging time, and the difficulty increases further in complex scenes with dense urban buildings and pronounced height differences. This article proposes a method for matching heterogeneous high-resolution remote sensing images based on partitioned feature extraction and three-dimensional spatial constraints. First, the image is partitioned according to the geometric differences of ground objects, and two feature extraction methods, an adaptive phase threshold and a weighted moment map, are employed to extract feature points independently. To address inaccurate feature descriptions caused by drastic viewing-angle changes over buildings, we construct a robust feature descriptor by combining a multiscale phase-weighted energy convolution histogram with a new gradient location orientation histogram-like local feature descriptor. In addition, a new similarity measure incorporating three-dimensional spatial constraints and the marginalizing sample consensus method is applied to eliminate mismatched point pairs, ensuring that precise matching points are obtained. Feature detection results on two synthetic data sets show that the proposed detector outperforms three classical detectors in terms of repeatability and uniformity. Finally, the matching performance is experimentally verified on six groups of heterogeneous high-resolution remote sensing images; the results show that the proposed method significantly outperforms the RIFT, HAPCG, and MS-HLMO methods and achieves the best matching accuracy.
- Published
- 2024
- Full Text
- View/download PDF
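The abstract above compares detectors by repeatability and uniformity. The snippet below is a minimal sketch of a conventional repeatability measure: the fraction of keypoints from one image that reappear within a pixel tolerance after being mapped into the other image with a known transform. The 3-pixel tolerance and the affine ground-truth mapping are assumptions for illustration, not the paper's evaluation protocol.

```python
import numpy as np

def repeatability(kps_a, kps_b, affine, tol=3.0):
    """kps_a (N,2), kps_b (M,2): (x, y) keypoints; affine: 2x3 ground-truth map A->B."""
    ones = np.ones((kps_a.shape[0], 1))
    projected = np.hstack([kps_a, ones]) @ affine.T        # A's keypoints in B's frame
    # distance from every projected A keypoint to every B keypoint
    d = np.linalg.norm(projected[:, None, :] - kps_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= tol))            # share of repeated points

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.uniform(0, 512, size=(200, 2))
    gt = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0]])     # small known translation
    b = np.vstack([a @ gt[:, :2].T + gt[:, 2], rng.uniform(0, 512, size=(50, 2))])
    print(f"repeatability = {repeatability(a, b, gt):.2f}")
```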
4. Robust Descriptor Algorithm Considering the Changing Gray Value Trends Inside Ground Objects for Heterogeneous Optical Image Matching
- Author
- Li Xue, Yehua Sheng, and Ka Zhang
- Subjects
- Change trend of gray values, feature descriptor, heterogeneous remote sensing image, image matching, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Differences in sensor types, resolutions, and imaging conditions can lead to considerable spectral differences between heterogeneous optical remote sensing images, so the similarity of scale-invariant feature transform (SIFT) or local self-similarities (LSS) descriptors of the same point can be poor. Consequently, we propose a robust descriptor construction algorithm that considers the changing gray values inside ground objects. The main contributions of this article are as follows. First, based on the stability of the internal gray-value changes of ground objects, we suggest that the change orientations and degrees of gray values of pixels can be used to express the stability of the same area across heterogeneous images, providing the basis for image matching. Second, unlike many existing methods that use gradient information to calculate feature orientations and descriptors, the proposed algorithm uses the change orientation and degree to calculate the feature orientation and descriptor, enabling it to obtain stable descriptors when matching images with large illumination changes. Experimental analysis of homologous and heterogeneous optical remote sensing images demonstrated the superior stability and capability of the proposed algorithm over commonly used algorithms, including the radiation-invariant feature transform, adaptive binning SIFT, gradient orientation modification SIFT, and LSS algorithms.
- Published
- 2023
- Full Text
- View/download PDF
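To illustrate the descriptor idea in the abstract above, the sketch below bins per-pixel change orientation and change degree of gray values into a local histogram, much like a gradient orientation histogram. The paper's actual definition of change orientation and degree differs from the plain finite differences used here; this is a generic, hedged illustration only.

```python
import numpy as np

def change_histogram_descriptor(patch, n_bins=8):
    """Build a normalized orientation histogram over a local image patch."""
    dy, dx = np.gradient(patch.astype(np.float64))
    degree = np.hypot(dx, dy)                        # strength of the local change
    orientation = np.arctan2(dy, dx) % (2 * np.pi)   # direction of the local change
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0, 2 * np.pi), weights=degree)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    print(change_histogram_descriptor(rng.random((16, 16))))
```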
5. TSCNet: Topological Structure Coupling Network for Change Detection of Heterogeneous Remote Sensing Images.
- Author
- Wang, Xianghai, Cheng, Wei, Feng, Yining, and Song, Ruoxi
- Subjects
- *WAVELET transforms, *CONVOLUTIONAL neural networks, *DEEP learning
- Abstract
With the development of deep learning, convolutional neural networks (CNNs) have been successfully applied to change detection in heterogeneous remote sensing (RS) images and have achieved remarkable results. However, most existing methods for heterogeneous RS image change detection only extract deep features to realize whole-image transformation and ignore the topological structure formed by image texture, edge, and direction information. The occurrence of change often means that the topological structure of a ground object has changed, so these algorithms severely limit change detection performance. To solve these problems, this paper proposes a new topology-coupling-based heterogeneous RS image change detection network (TSCNet). TSCNet transforms the feature space of heterogeneous images using an encoder-decoder structure and introduces a wavelet transform together with channel and spatial attention mechanisms. The wavelet transform captures the details of each direction of the image and effectively describes its texture features, while unnecessary features are suppressed by allocating more weight to areas of interest via the channel and spatial attention mechanisms. Through this combination of the wavelet transform and the two attention mechanisms, the network can focus on the texture information of interest while suppressing differences between images from different domains. On this basis, a bitemporal heterogeneous RS image change detection method based on the TSCNet framework is proposed. Experimental results on three public heterogeneous RS image change detection datasets demonstrate that the proposed framework achieves significant improvements over state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
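The abstract above credits the wavelet transform with exposing directional detail (texture and edge information). The snippet below is a self-contained single-level 2-D Haar decomposition in NumPy showing where the approximation and the three directional detail sub-bands come from; it is a generic illustration, not the wavelet configuration used in TSCNet.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform: one approximation and three detail sub-bands."""
    x = img.astype(np.float64)
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]              # crop to an even size
    # pairwise average / difference along rows (vertical direction)
    lo = (x[0::2, :] + x[1::2, :]) / 2.0
    hi = (x[0::2, :] - x[1::2, :]) / 2.0
    # pairwise average / difference along columns (horizontal direction)
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0       # approximation (coarse content)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0       # detail along one axis
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0       # detail along the other axis
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0       # diagonal detail
    return ll, lh, hl, hh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sub_bands = haar_dwt2(rng.random((64, 64)))
    print([b.shape for b in sub_bands])          # four 32x32 sub-bands
```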
6. DMF2Net: Dynamic multi-level feature fusion network for heterogeneous remote sensing image change detection.
- Author
- Cheng, Wei, Feng, Yining, Song, Liyang, and Wang, Xianghai
- Subjects
- *ARTIFICIAL neural networks, *PROCESS capability, *REMOTE sensing, *MULTISENSOR data fusion, *INFORMATION processing
- Abstract
With the rapid development of remote sensing data fusion technology, heterogeneous remote sensing image (HRSI) change detection (CD) has become a frontier field. The powerful nonlinear information processing capability of deep neural networks makes image-domain conversion of HRSIs possible. However, most existing methods rely on stacking network layers to extract deep high-level semantic features in order to accomplish the mutual conversion of image domains; the effective extraction and utilization of shallow features have not been fully considered, and the dependence between deep and shallow features has not been deeply explored. These limitations severely restrict CD performance. To address these issues, this paper proposes a new HRSI-CD network (DMF2Net) based on multi-level feature fusion. DMF2Net uses central difference convolution to extract fine-grained features from the shallow layers of the images, capturing intrinsic detail features by aggregating intensity and gradient information. A dynamic multi-level feature fusion method learns fusion weights from the features, which are then used to guide the fusion of shallow and deep semantic features. This preserves more positional and detail information, prevents the loss of fine-scale information during image conversion, and enhances the model's ability to detect subtle changes. On this basis, a new DMF2Net-based method for detecting changes in bi-temporal HRSIs is proposed. Extensive experiments on four publicly available HRSI-CD datasets show that the proposed CD framework achieves significant improvements over state-of-the-art methods. The project files of the proposed framework are available at https://github.com/cwlnnu/DMF2Net. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
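Central difference convolution, which DMF2Net uses to extract fine-grained shallow features, mixes an ordinary convolution response with a central-difference (gradient-like) term. The PyTorch sketch below follows the commonly used formulation of this operator; the layer name, theta value, and kernel size are assumptions, and its exact placement inside DMF2Net is not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """3x3 convolution mixed with a central-difference term (weight theta)."""
    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)                                  # ordinary intensity response
        # difference term: the kernel's summed weights applied to the centre pixel,
        # so the combined output responds to gradients as well as raw intensities
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)  # (out, in, 1, 1)
        diff = F.conv2d(x, kernel_sum, padding=0)
        return out - self.theta * diff

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    layer = CentralDifferenceConv2d(3, 16)
    print(layer(x).shape)                                   # torch.Size([1, 16, 64, 64])
```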