Search Results (1,376 results)
2. The U. V. Helava Award – Best Paper Volumes 171-182 (2021).
- Published
- 2022
3. Augmented paper maps: Exploring the design space of a mixed reality system
- Author
- Paelke, Volker and Sester, Monika
- Subjects
- MOBILE communication systems, ELECTRONIC equipment, MAPS, HIKING, GLOBAL Positioning System, INFORMATION processing, REAL-time computing
- Abstract
Paper maps and mobile electronic devices have complementary strengths and shortcomings in outdoor use. In many scenarios, like small-craft sailing or cross-country trekking, a complete replacement of maps is neither useful nor desirable. Paper maps are fail-safe, relatively cheap, offer superior resolution, and provide a large-scale overview. In uses like open-water sailing it is therefore mandatory to carry adequate maps/charts. GPS-based mobile devices, on the other hand, offer useful features like automatic positioning and plotting, real-time information updates, and dynamic adaptation to user requirements. While paper maps are now commonly used in combination with mobile GPS devices, there is no meaningful integration between the two, and the combined use leads to a number of interaction problems and potential safety issues. In this paper we explore the design space of augmented paper maps, in which maps are augmented with additional functionality through a mobile device to achieve a meaningful integration between device and map that combines their respective strengths.
- Published
- 2010
4. The U. V. Helava Award – Best Paper Volumes 147-158 (2019).
- Subjects
- AWARDS, BATHYMETRIC maps, REMOTE sensing
- Published
- 2020
5. The U. V. Helava Award – Best Paper Volumes 159–170 (2020).
- Author
- Weng, Qihao
- Published
- 2021
6. Theme issue “Papers from Geospatial Week 2015”.
- Author
- Paparoditis, Nicolas and Dowman, Ian
- Subjects
- GEOSPATIAL data, REMOTE sensing
- Published
- 2017
7. The U. V. Helava Award – Best Paper Volumes 135-146 (2018).
- Subjects
- AWARDS, JURORS, FOURTH of July
- Published
- 2019
8. GITomo-Net: Geometry-independent deep learning imaging method for SAR tomography.
- Author
- Liu, Changhao, Wang, Yan, Zhang, Guangbin, Ding, Zegang, and Zeng, Tao
- Abstract
The utilization of deep learning in Tomographic SAR (TomoSAR) three-dimensional (3D) imaging technology addresses the inefficiency inherent in traditional Compressed Sensing (CS)-based TomoSAR algorithms. However, current deep learning TomoSAR imaging methods heavily depend on prior knowledge of observation geometries, as the network training requires a predefined observation prior distribution. Additionally, discrepancies often exist between actual and designed observations in a TomoSAR task, making it challenging to train imaging networks before the task begins. Therefore, current TomoSAR imaging networks suffer from high costs and lack universality. This paper introduces a new geometry-independent deep learning-based method for TomoSAR that does not require geometry as prior information, making it adaptable to different observation geometries. First, a novel geometry-independent deep learning imaging model is introduced to adapt to TomoSAR imaging tasks with unknown observation geometries by consolidating the data features of multiple geometries. Second, a geometry-independent TomoSAR imaging network (GITomo-Net) is proposed to fit the new geometry-independent deep learning imaging model by introducing a transformation-feature normalization (TFN) module and a fully connected-based feature extraction (FCFE) layer, enabling the network to handle multi-geometry tasks. The proposed method has been validated using real spaceborne SAR data experiments. The average gradient (AG) and image entropy (IE) metrics for the Regent Beijing Hotel region are 7.11 and 2.85, respectively, while those for the COFCO Plaza region are 3.90 and 1.73, respectively. Compared to the advanced deep learning-based TomoSAR imaging method MAda-Net, the proposed method achieves higher imaging accuracy when network training is conducted without prior knowledge of the observation configuration. Additionally, compared to the advanced CS-based TomoSAR imaging method, the proposed method delivers comparable accuracy while improving efficiency by 51.6 times. The code and the data of our paper are available at https://github.com/Sunshine-lch/Paper_Geometry-Idenpendent-TomoSAR-imaging.git. [ABSTRACT FROM AUTHOR]
- Published
- 2025
9. Multiscale adaptive PolSAR image superpixel generation based on local iterative clustering and polarimetric scattering features.
- Author
- Li, Nengcai, Xiang, Deliang, Sun, Xiaokun, Hu, Canbin, and Su, Yi
- Abstract
Superpixel generation is an essential preprocessing step for intelligent interpretation of object-level Polarimetric Synthetic Aperture Radar (PolSAR) images. The Simple Linear Iterative Clustering (SLIC) algorithm has become one of the primary methods for superpixel generation in PolSAR images due to its advantages of minimal human intervention and ease of implementation. However, existing SLIC-based superpixel generation methods for PolSAR images often use distance measures based on the complex Wishart distribution as the similarity metric. These methods are not ideal for segmenting heterogeneous regions, and a single superpixel generation result cannot simultaneously extract coarse and fine levels of detail in the image. To address this, this paper proposes a multiscale adaptive superpixel generation method for PolSAR images based on SLIC. To tackle the inaccuracy of the complex Wishart distribution in modeling urban heterogeneous regions, this paper employs the polarimetric target decomposition method. It extracts the polarimetric scattering features of the land cover and then constructs a similarity measure for these features using a Riemannian metric. To achieve multiscale superpixel segmentation in a single segmentation process, this paper introduces a new method for initializing cluster centers based on a polarimetric homogeneity measure. This initialization method assigns denser cluster centers in heterogeneous areas and automatically adjusts the size of the search regions according to the polarimetric homogeneity measure. Finally, a novel clustering distance metric is defined, integrating multiple types of information, including polarimetric scattering feature similarity, power feature similarity, and spatial similarity. This metric uses the polarimetric homogeneity measure to adaptively balance the relative weights between the various similarities. Comparative experiments were conducted on three real PolSAR datasets against state-of-the-art SLIC-based methods (Qin-RW and Yin-HLT). The results demonstrate that the proposed method provides richer multiscale detail information and significantly improves segmentation outcomes. For example, with the AIRSAR dataset and a step size of 42, the proposed method achieves improvements of 16.56% in BR and 12.01% in ASA compared to the Qin-RW method. Source code of the proposed method is made available at https://github.com/linengcai/PolSAR_MS_ASLIC.git. • Proposed a polarimetric scattering feature similarity measure to describe the differences between land covers. • Proposed a multiscale initialization of clustering centers to achieve multiscale information mining. • Proposed a multi-feature adaptive clustering distance metric to improve the effectiveness of superpixel segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2025
10. HDRSA-Net: Hybrid dynamic residual self-attention network for SAR-assisted optical image cloud and shadow removal.
- Author
- Pan, Jun, Xu, Jiangong, Yu, Xiaoyu, Ye, Guo, Wang, Mi, Chen, Yumin, and Ma, Jianshen
- Subjects
- SYNTHETIC aperture radar, SPECKLE interference, SURFACE of the earth, OPTICAL images, MULTISENSOR data fusion
- Abstract
Clouds and shadows often contaminate optical remote sensing images, resulting in missing information. Consequently, continuous spatiotemporal monitoring of the Earth's surface requires the efficient removal of clouds and shadows. Unlike optical satellites, synthetic aperture radar (SAR) has active imaging capabilities in all weather conditions, supplying valuable supplementary information for reconstructing missing regions. Nevertheless, the reconstruction of high-fidelity cloud-free images based on SAR-optical data fusion remains challenging due to differences in imaging mechanisms and the considerable contamination from speckle noise inherent in SAR imagery. To solve these challenges, this paper presents a novel hybrid dynamic residual self-attention network (HDRSA-Net), aiming to fully exploit the potential of SAR images in reconstructing missing regions. The proposed HDRSA-Net comprises multiple dynamic interaction residual (DIR) groups organized into an end-to-end trainable deep hierarchical stacked architecture. Specifically, the omni-dimensional dynamic local exploration (ODDLE) module and the sparse global context aggregation (SGCA) module are used to form a local–global feature adaptive extraction and implicit enhancement. A multi-task cooperative optimization loss function is designed to ensure that the results exhibit high spectral fidelity and coherent spatial structures. Additionally, this paper releases a large dataset that allows comprehensive evaluation of reconstruction quality under different cloud coverages and various types of ground cover, providing a solid foundation for restoring satisfactory visual effects and reliable semantic application value. Comparisons with current representative algorithms show that the presented approach reconstructs missing regions effectively and stably. The project is accessible at: https://github.com/RSIIPAC/LuojiaSET-OSFCR. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. Mesh refinement method for multi-view stereo with unary operations.
- Author
- Liu, Jianchen, Han, Shuang, and Li, Jin
- Subjects
- DEGREES of freedom, GAUSSIAN curvature, ENERGY function, ALGORITHMS, NOISE
- Abstract
3D reconstruction is an important part of the digital city, and high-accuracy 3D modeling methods have been widely studied as an important pathway to visualizing 3D city scenes. However, problems of image resolution, noise, and occlusion result in low quality and overly smoothed features in the mesh model. The model therefore needs to be refined to improve the mesh quality and enhance the visual effect. This paper proposes a mesh refinement algorithm that fine-tunes the vertices of the mesh and constrains their evolution direction to the normal vector, reducing their degrees of freedom to one. The evolution of a vertex involves only one motion distance parameter along the normal vector, simplifying the derivation of the energy function. Meanwhile, Gaussian curvature is used as a regularization term, which is anisotropic and preserves edge features during the reconstruction process. The mesh refinement algorithm with unary operations fully utilizes the original image information and effectively enriches the local detail features of the mesh model. This paper uses five public datasets to conduct comparative experiments, and the experimental results show that the proposed algorithm restores the detailed features of the model better and has a better refinement effect than the OpenMVS library's refinement algorithm for the same number of iterations. At the same time, with fewer iterations, the proposed algorithm achieves more desirable results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Cross-modal change detection using historical land use maps and current remote sensing images.
- Author
- Deng, Kai, Hu, Xiangyun, Zhang, Zhili, Su, Bo, Feng, Cunjun, Zhan, Yuanzeng, Wang, Xingkun, and Duan, Yansong
- Subjects
- LAND use mapping, TRANSFORMER models, REMOTE sensing, URBAN growth, LAND resource
- Abstract
Using bi-temporal remote sensing imagery to detect land-use change in urban expansion has become common practice. However, in the process of updating land resource surveys, directly detecting changes between historical land use maps (referred to as "maps" in this paper) and current remote sensing images (referred to as "images" in this paper) is more direct and efficient than relying on bi-temporal image comparisons. The difficulty stems from the substantial modality differences between maps and images, presenting a complex challenge for effective change detection. To address this issue, we propose a novel deep learning model named the cross-modal patch alignment network (CMPANet), which bridges the gap between different modalities for cross-modal change detection (CMCD) between maps and images. Our proposed model uses a vision transformer (ViT-B/16) fine-tuned on 1.8 million remote sensing images as the encoder for images and trainable ViTs as the encoder for maps. To bridge the distribution differences between these encoders, we introduce a feature domain adaptation image-map alignment module (IMAM) to transfer and share pretrained model knowledge rapidly. Additionally, we incorporate the cross-modal and cross-channel attention (CCMAT) module and the transformer block attention module to facilitate the interaction and fusion of features across modalities. These fused features are then processed through a UperNet-based feature pyramid to generate pixel-level change maps. On the newly created EVLab-CMCD dataset and the publicly available HRSCD dataset, CMPANet achieves state-of-the-art results and offers a novel technical approach for CMCD between maps and images. • A novel cross-modal change detection network (CMPANet) between maps and images was proposed. • An effective map and image feature domain adaptation was introduced. • CMPANet achieves the best performance in cross-modal change detection between maps and images. • The first map-to-image change detection dataset was released. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. Clustering, triangulation, and evaluation of 3D lines in multiple images.
- Author
- Wei, Dong, Guo, Haoyu, Wan, Yi, Zhang, Yongjun, Li, Chang, and Wang, Guangshuai
- Subjects
- GEOMETRIC shapes, SOURCE code, TRIANGULATION, EVALUATION methodology, C++
- Abstract
Three-dimensional (3D) lines require further enhancement in both clustering and triangulation. Line clustering assigns multiple image lines to a single 3D line to eliminate redundant 3D lines. Currently, it depends on a fixed, empirical parameter: a loose parameter can lead to over-clustering, while a strict one may leave redundant 3D lines. Due to the absence of ground truth, the assessment of line clustering remains unexplored. Additionally, 3D line triangulation, which determines the 3D line segment in object space, is prone to failure due to its sensitivity to positional and camera errors. This paper aims to improve the clustering and triangulation of 3D lines and to offer a reliable evaluation method. (1) To achieve accurate clustering, we introduce a probability model, which uses the prior error of structure from motion to determine adaptive thresholds, thus controlling the false clustering caused by a fixed hyperparameter. (2) For robust triangulation, we employ a universal framework that refines the 3D line with various forms of geometric consistency. (3) For a reliable evaluation, we investigate consistent patterns in urban environments to evaluate the clustering and triangulation, eliminating the need to manually draw the ground truth. To evaluate our method, we utilized datasets of Internet images, totaling over ten thousand images, alongside aerial images with dimensions exceeding ten thousand pixels. We compared our approach to state-of-the-art methods, including Line3D++, Limap, and ELSR. On these datasets, our method demonstrated improvements in clustering and triangulation accuracy of at least 20% and 3%, respectively. Additionally, our method ranked second in execution speed, surpassed only by ELSR, the current fastest algorithm. The C++ source code for the proposed algorithm, along with the dataset used in this paper, is available at https://github.com/weidong-whu/3DLineResconstruction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. Mangrove mapping in China using Gaussian mixture model with a novel mangrove index (SSMI) derived from optical and SAR imagery.
- Author
- Chen, Zhaojun, Zhang, Huaiqing, Zhang, Meng, Wu, Yehong, and Liu, Yang
- Subjects
- GAUSSIAN mixture models, MANGROVE forests, RESTORATION ecology, MANGROVE plants, FOREST mapping, BACKSCATTERING, LAND cover
- Abstract
As important shoreline vegetation and a highly productive ecosystem, mangroves play an essential role in the protection of coastlines and ecological diversity. Accurate mapping of the spatial distribution of mangroves is crucial for the protection and restoration of mangrove ecosystems. Supervised classification methods rely on large sample sets and complex classifiers, and traditional thresholding methods require empirical thresholds; these problems limit the feasibility and stability of existing mangrove identification and mapping methods on large scales. Thus, this paper develops a novel mangrove index (the spectral and SAR mangrove index, SSMI) and a Gaussian mixture model (GMM) mangrove mapping method, which does not require training samples and can automatically and accurately map mangrove boundaries using only single-scene Sentinel-1 and single-scene Sentinel-2 images from the same time period. The SSMI capitalizes on the fact that mangroves are differentiated from other land cover types in terms of optical characteristics (greenness and moisture) and the backscattering coefficients of SAR images, and it ultimately highlights mangrove forest information through the product of three expressions (f(S) = red edge/SWIR1, f(B) = 1/(1 + e^(-VH)), f(W) = (NIR-SWIR1)/(NIR+SWIR1)). The proposed SSMI was tested in six typical mangrove distribution areas in China, where climatic conditions and mangrove species vary widely. The results indicated that the SSMI was more capable of mapping mangrove forests than the other mangrove indices (CMRI, NDMI, MVI, and MI), with overall accuracies (OA) higher than 0.90 and F1 scores as high as 0.93 for the five areas other than the Maowei Gulf (S5). Moreover, the mangrove maps generated by the SSMI were highly consistent with the reference maps (HGMF_2020, LASAC_2018, and IMMA). In addition, the SSMI achieves stable performance, as shown by the mapping results of two other classification methods (K-means and Otsu's algorithm). Mangrove mapping in six typical mangrove distribution areas in China for five consecutive years (2019–2023) and experiments in three Southeast Asian countries with major mangrove distributions (Thailand, Vietnam, and Indonesia) demonstrated that the SSMI constructed in this paper is highly stable across time and space. The SSMI proposed in this paper does not require reference samples or predefined parameters; thus, it has great flexibility and applicability in mapping mangroves on a large scale, especially in cloudy areas. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. Mineral detection based on hyperspectral remote sensing imagery on Mars: From detection methods to fine mapping.
- Author
- Ke, Tian, Zhong, Yanfei, Song, Mi, Wang, Xinyu, and Zhang, Liangpei
- Subjects
- MARTIAN surface, REMOTE sensing, SOURCE code, MARS (Planet), MINERALS
- Abstract
Hyperspectral remote sensing is a commonly used technical means for mineral detection on the Martian surface, which has important implications for the study of Martian geological evolution and the search for potential biological signatures. The increasing volume of Martian remote sensing data and complex issues such as the intimate mixture of Martian minerals make research on Martian mineral detection challenging. This paper summarizes the existing achievements by analyzing the papers published in recent years and looks forward to future research directions. Specifically, this paper introduces the currently used hyperspectral remote sensing data of Mars and systematically analyzes the characteristics and distribution of Martian minerals. The existing methods are then divided into two groups according to their core idea, i.e., methods based on pixels and methods based on subpixels. In addition, some applications of Martian mineral detection at global and local scales are analyzed. Furthermore, the various typical methods are compared using synthetic and real data to assess their performance. The conclusion is drawn that the approach based on spectral unmixing is more applicable to areas with limited and unknown mineral categories than pixel-based methods. Among them, the fully autonomous hyperspectral unmixing method can improve the overall accuracy in real CRISM images and has great potential for Martian mineral detection. The development trends are analyzed from three aspects. Firstly, in terms of data, a more complete spectral library, covering more spectral information of the Martian surface minerals, should be constructed to assist with mineral detection. Secondly, in terms of methods, spectral unmixing methods based on a nonlinear mixing model and a new generation of data-driven detection paradigms guided by Mars mineral knowledge should be developed. Finally, in terms of application, the global mapping of Martian minerals should move toward a more intelligent, global-scale, and refined direction. The data and source code used in the experiments are available at http://rsidea.whu.edu.cn/Martian_mineral_detection.htm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. The U. V. Helava Award – Best Paper Volumes 123-134 (2017).
- Subjects
- PHOTOGRAMMETRY, REMOTE sensing, IMAGE quality analysis
- Published
- 2018
17. CodeUNet: Autonomous underwater vehicle real visual enhancement via underwater codebook priors.
- Author
- Wang, Linling, Xu, Xiaoyan, An, Shunmin, Han, Bing, and Guo, Yi
- Subjects
- AUTONOMOUS underwater vehicles, IMAGE intensifiers, PRIOR learning, EVALUATION methodology, GENERALIZATION
- Abstract
The vision enhancement of autonomous underwater vehicles (AUVs) has received increasing attention and developed rapidly in recent years. However, existing methods based on prior knowledge struggle to adapt to all scenarios, while learning-based approaches lack paired datasets from real-world scenes, limiting their enhancement capabilities. Consequently, this severely hampers their generalization and application in AUVs. Moreover, existing deep learning-based methods largely overlook the advantages of prior knowledge-based approaches. To address these issues, a novel architecture called CodeUNet is proposed in this paper. Instead of relying on physical scattering models, a real-world scene vision enhancement network based on a codebook prior is considered. First, a VQGAN is pretrained on underwater datasets to obtain a discrete codebook, encapsulating the underwater priors (UPs). The decoder is equipped with a novel feature alignment module that effectively leverages underwater features to generate clean results. Then, the distance between the features and the matches is recalibrated by controllable matching operations, enabling better matching. Extensive experiments demonstrate that CodeUNet outperforms state-of-the-art methods in terms of visual quality and quantitative metrics. The testing results of geometric rotation, SIFT salient point detection, and edge detection applications are shown in this paper, providing strong evidence for the feasibility of CodeUNet in the field of autonomous underwater vehicles. Specifically, on the full-reference dataset, the proposed method outperforms most of the 14 state-of-the-art methods in four evaluation metrics, with an improvement of up to 3.7722 compared to MLLE. On the no-reference dataset, the proposed method achieves excellent results, with an improvement of up to 0.0362 compared to MLLE. Links to the dataset and code for this project can be found at: https://github.com/An-Shunmin/CodeUNet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. The U.V. Helava Award — Best Paper Volume 62 (2007)
- Author
- Vosselman, George
- Published
- 2008
19. The U.V. Helava Award—Best Paper 2002
- Published
- 2004
20. The U. V. Helava Award – Best Paper Volumes 87–98 (2014).
- Subjects
- PHOTOGRAMMETRY, REMOTE sensing, IMAGE reconstruction
- Published
- 2015
21. Photogrammetric Computer Vision 2014 – Best Papers of the ISPRS Technical Commission III Symposium.
- Author
- Schindler, Konrad
- Subjects
- PHOTOGRAMMETRY, COMPUTER vision, CONFERENCES & conventions
- Published
- 2015
22. Quick calibration of massive urban outdoor surveillance cameras.
- Author
- Shi, Lin, Lan, Xiaoji, Lan, Xin, and Zhang, Tianliang
- Subjects
- VIDEO surveillance, COMPUTER vision, URBAN transportation, SMART cities, CALIBRATION, SPACE vehicles
- Abstract
The wide application of urban outdoor surveillance systems has greatly improved the efficiency of urban management and public security. However, most existing urban outdoor surveillance cameras lack records of important parameters such as geospatial coordinates, field of view angle, and lens distortion, which hinders the unified management and layout optimization of the cameras, geospatial analysis of video data, and computer vision applications such as the trajectory tracking of moving targets. To address this problem, this paper designs a marker with a chessboard pattern and a positioning device. The marker is moved through outdoor space by vehicles and other mobile carriers, and the marker images captured by the surveillance cameras, together with the spatial position information obtained by the positioning device, are used to calibrate the outdoor surveillance cameras in batches and to calculate their geospatial coordinates and field of view angles. This enables the rapid acquisition of important surveillance camera parameters and provides a new method for the rapid calibration of urban outdoor surveillance cameras, contributing to the information-based management of urban surveillance resources and the spatial analysis and computation of surveillance video data, so that they can play a greater role in smart transportation and smart city applications. Taking the outdoor surveillance cameras within a 2.5 km² area of a city as an example, calibration tests were performed on 295 surveillance cameras in the test area, and the geospatial coordinates, field of view angles, and lens parameters of 269 surveillance cameras were obtained. The average spatial position error was 0.527 m (maximum 1.573 m), and the average field of view angle error was 1.63° (maximum 3.4°), which verifies the effectiveness and accuracy of the method in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2024
23. Call for papers for Theme Issue: High-Resolution Earth Imaging for Geospatial Information.
- Published
- 2013
24. Call for Papers-Theme Issue “Global Land Cover Mapping and Monitoring: Progress, Challenges, and opportunities”
- Published
- 2013
25. The U.V. Helava Award – Best Paper Volume 65 (2010)
- Published
- 2011
26. Call for papers
- Published
- 2011
27. The U.V. Helava Award — Best Paper Volume 64 (2009)
- Author
- Vosselman, George
- Subjects
- AWARDS, PHOTOGRAMMETRY, REMOTE sensing, OCEAN surface topography
- Published
- 2011
28. Call for papers
- Published
- 2010
29. The U.V. Helava Award — Best Paper Volume 63 (2008)
- Author
- Vosselman, George
- Published
- 2010
30. Call for Papers
- Published
- 2009
31. The U.V. Helava Award — Best Paper Volume 60 (2005)
- Published
- 2007
32. The U.V. Helava Award — Best paper volume 59 (2004)
- Published
- 2007
33. Call for Papers
- Published
- 2007
34. The U.V. Helava Award — Best Paper 2003
- Published
- 2004
35. The U.V. Helava Award—Best Paper 2001
- Published
- 2004
36. Call for Papers.
- Published
- 2002
37. Call for Papers.
- Published
- 2002
38. Semantic change detection using a hierarchical semantic graph interaction network from high-resolution remote sensing images.
- Author
- Long, Jiang, Li, Mengmeng, Wang, Xiaoqin, and Stein, Alfred
- Subjects
- REMOTE-sensing images, DESIGN
- Abstract
Current semantic change detection (SCD) methods face challenges in modeling temporal correlations (TCs) between bitemporal semantic features and difference features. These methods lead to inaccurate detection results, particularly for complex SCD scenarios. This paper presents a hierarchical semantic graph interaction network (HGINet) for SCD from high-resolution remote sensing images. This multitask neural network combines semantic segmentation and change detection tasks. For semantic segmentation, we construct a multilevel perceptual aggregation network with a pyramidal architecture. It extracts semantic features that discriminate between different categories at multiple levels. We model the correlations between bitemporal semantic features using a TC module that enhances the identification of unchanged areas. For change detection, we design a semantic difference interaction module based on a graph convolutional network. It measures the interactions among bitemporal semantic features, their corresponding difference features, and the combination of both. Extensive experiments on four datasets, namely SECOND, HRSCD, Fuzhou, and Xiamen, show that HGINet performs better in identifying changed areas and categories across various scenarios and regions than nine existing methods. Compared with the existing methods applied to the four datasets, it achieves the highest F1scd values of 59.48%, 64.12%, 64.45%, and 84.93%, and SeK values of 19.34%, 14.55%, 18.28%, and 51.12%, respectively. Moreover, HGINet mitigates the influence of fake changes caused by seasonal effects, producing results with well-delineated boundaries and shapes. Furthermore, HGINet trained on the Fuzhou dataset is successfully transferred to the Xiamen dataset, demonstrating its effectiveness and robustness in identifying changed areas and categories from high-resolution remote sensing images. The code of our paper is accessible at https://github.com/long123524/HGINet-torch. [ABSTRACT FROM AUTHOR]
- Published
- 2024
39. Recognition for SAR deformation military target from a new MiniSAR dataset using multi-view joint transformer approach.
- Author
- Lv, Jiming, Zhu, Daiyin, Geng, Zhe, Han, Shengliang, Wang, Yu, Ye, Zheng, Zhou, Tao, Chen, Hongren, and Huang, Jiawei
- Subjects
- TRANSFORMER models, SYNTHETIC aperture radar, IMAGE denoising, RECOGNITION (Psychology), TARGET acquisition
- Abstract
Accurately detecting ground armored weapons is crucial for achieving initiative advantages in military operations. Generally, satellite or airborne synthetic aperture radar (SAR) systems face limitations due to their revisit cycles and fixed flight trajectories, resulting in single-view imaging of targets and thereby hampering the recognition of small SAR ground targets. In contrast, MiniSAR can capture multiple views of a target by acquiring images from different azimuth angles. In this research, our team uses a self-developed MiniSAR system to generate multi-view SAR images of real ground armored targets and to recognize these targets. However, the recognition of small targets in SAR images encounters two significant difficulties. First, small targets in SAR images are prone to interference from background noise. Second, SAR target deformation arises from variations in depression angles and imaging processes. To tackle these difficulties, this paper proposes a novel SAR ground deformation target recognition approach based on a joint multi-view transformer model. The method first preprocesses SAR images with a low-frequency prior SAR image denoising method. Next, it obtains multi-view joint information through a self-attention mechanism and inputs the joint features to the transformer structure. The outputs are jointly updated by a multi-way averaging adaptive loss function to improve the recognition accuracy of deformed targets. The experimental results demonstrate the superiority of the proposed method in SAR ground deformation target recognition, outperforming other representative approaches such as information fusion of target and shadow (IFTS) and Vision Transformer (ViT). The proposed method achieves high recognition accuracies of 98.37% and 93.86% on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset and our SAR image dataset, respectively. We have included links to the code and data in the abstract of this paper for ease of access. The source code and sample dataset are available at https://github.com/Lvjiming/MJT. [ABSTRACT FROM AUTHOR]
- Published
- 2024
40. WHU-Urban3D: An urban scene LiDAR point cloud dataset for semantic instance segmentation.
- Author
- Han, Xu, Liu, Chong, Zhou, Yuzhou, Tan, Kai, Dong, Zhen, and Yang, Bisheng
- Subjects
- POINT cloud, MACHINE learning, LIDAR, AIRBORNE lasers, CITIES & towns
- Abstract
With the rapid advancement of 3D sensors, there is an increasing demand for 3D scene understanding, and an increasing number of 3D deep learning algorithms have been proposed. However, a large-scale and richly annotated 3D point cloud dataset is critical to understanding complicated road and urban scenes. Motivated by the need to bridge the gap between the rising demand for 3D urban scene understanding and the limited LiDAR point cloud datasets, this paper proposes the richly annotated WHU-Urban3D dataset and an effective method for semantic instance segmentation. WHU-Urban3D stands out from existing datasets due to its distinctive features: (1) extensive coverage of both Airborne Laser Scanning and Mobile Laser Scanning point clouds, along with panoramic images; (2) large-scale road and urban scenes in different cities (over 3.2 × 10⁶ m² in area), with rich point-wise semantic instance labels (over 200 million points); (3) inclusion of particular attributes (e.g., reflected intensity, number of returns) in addition to 3D coordinates. This paper also provides the performance of several representative baseline methods and outlines potential future work and challenges for fully exploiting this dataset. The WHU-Urban3D dataset is publicly accessible at https://whu3d.com/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
41. A novel Building Section Skeleton for compact 3D reconstruction from point clouds: A study of high-density urban scenes.
- Author
- Wu, Yijie, Xue, Fan, Li, Maosu, and Chen, Sou-Han
- Subjects
- POINT cloud, BUILDING repair, SKELETON, ARCHITECTURAL designs, URBAN growth, SPACE
- Abstract
Compact building models are demanded by global smart city applications, while high-definition urban 3D data is increasingly accessible by dint of advanced reality capture technologies. Yet, existing building reconstruction methods encounter crucial bottlenecks with high-definition data of large scale and high complexity, particularly in high-density urban scenes. This paper proposes a Building Section Skeleton (BSS) to reflect architectural design principles of parallelism and symmetry. A BSS atom describes a pair of intrinsic parallel or symmetric points; a BSS segment clusters dense BSS atoms of a pair of symmetric surfaces; the polyhedra of all BSS segments further echo the architectural forms and reconstructability. To prove the concept of BSS for automatic compact reconstruction, this paper presents a BSS method for building reconstruction that consists of one stage of BSS segment hypothesizing and another stage of BSS segment merging. Experiments and comparisons with four state-of-the-art methods were conducted on 15 diverse scenes encompassing more than 60 buildings. The results confirm that the BSS method simultaneously achieves state-of-the-art compactness, robustness, geometric accuracy, and efficiency, especially for high-density urban scenes. On average, the BSS method reconstructed each scene into 623 triangles with a root-mean-square deviation (RMSD) of 0.82 m, completing the process in 110 s. First, the proposed BSS is an expressive 3D feature reflecting architectural designs in high-density cities and can open new avenues for city modeling and other urban remote sensing and photogrammetry studies. Second, for practitioners in smart city development, the BSS method offers an accurate and efficient approach to compact building and city modeling. The source code and tested scenes are available at https://github.com/eiiijiiiy/sobss. • Building Section Skeleton (BSS) is proposed with novel definitions of BSS atoms and segments. • BSS revamps traditional shape skeletons to reflect architectural design principles of parallelism and symmetry. • A two-stage BSS method is developed for compact building reconstruction from urban point clouds. • The BSS method of reconstruction was confirmed to be compact, robust, geometrically accurate, and efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
42. GN-GCN: Grid neighborhood-based graph convolutional network for spatio-temporal knowledge graph reasoning.
- Author
- Han, Bing, Qu, Tengteng, and Jiang, Jie
- Abstract
Owing to the difficulty of utilizing hidden spatio-temporal information, spatio-temporal knowledge graph (KG) reasoning tasks in real geographic environments have issues of low accuracy and poor interpretability. This paper proposes a grid neighborhood-based graph convolutional network (GN-GCN) for spatio-temporal KG reasoning. Based on the discretized process of encoding spatio-temporal data through the GeoSOT global grid model, the GN-GCN consists of three parts: a static graph neural network, a neighborhood grid calculation, and a time evolution unit, which can learn semantic knowledge, spatial knowledge, and temporal knowledge, respectively. The GN-GCN can also improve the training accuracy and efficiency of the model through the multiscale aggregation characteristic of GeoSOT and can visualize different probabilities in a spatio-temporal intentional probabilistic grid map. Compared with other existing models (RE-GCN, CyGNet, RE-NET, etc.), the mean reciprocal rank (MRR) of GN-GCN reaches 48.33 and 54.06 in spatio-temporal entity and relation prediction tasks, increased by 6.32/18.16% and 6.64/15.67% respectively, which achieves state-of-the-art (SOTA) results in spatio-temporal reasoning. The source code of the project is available at https://doi.org/10.18170/DVN/UIS4VC. [ABSTRACT FROM AUTHOR]
- Published
- 2025
43. Classification of urban road functional structure by integrating physical and behavioral features.
- Author
- Huang, Qiwen, Cui, Haifu, and Xiang, Longwei
- Abstract
Multisource data can be used to extract diverse urban functional features, facilitating a deeper understanding of the functional structure of road networks. Street view images and taxi trajectories, as forms of urban geographic big data, capture features of the urban physical environment and travel behavior, serving as effective data sources for identifying the functional structure of urban spaces. However, street view and taxi trajectory data often suffer from sparse and uneven distributions, and the differences between features are relatively small in the process of multi-feature fusion, which poses significant challenges to the accurate classification of road functions. To address these issues, this study proposes the use of the Louvain algorithm and triplet loss methods to enhance features at the community level, resolving the sparse data distribution problem. Simultaneously, the attention mechanism of the graph attention network is applied to dynamically adjust the feature weights within the road network, capturing subtle differences between features. The experimental results demonstrate that feature enhancement and difference capture improve the accuracy of classifying complex urban road functional structures. Additionally, this study analyzes the degree of mixing and distribution of road functions and explores the relationship between road functional structure and traffic. The work in this paper assesses urban functional structure at the street level and provides decision-making support for urban planning at a fine scale. [ABSTRACT FROM AUTHOR]
- Published
- 2025
44. Accurate semantic segmentation of very high-resolution remote sensing images considering feature state sequences: From benchmark datasets to urban applications.
- Author
- Wang, Zijie, Yi, Jizheng, Chen, Aibin, Chen, Lijiang, Lin, Hui, and Xu, Kai
- Abstract
Very High-Resolution (VHR) urban remote sensing image segmentation is widely used in ecological environmental protection, urban dynamic monitoring, fine urban management, and other related fields. However, the large-scale variation and discrete distribution of objects in VHR images present a significant challenge to accurate segmentation. Existing studies have primarily concentrated on the internal correlations within a single feature, while overlooking the inherent sequential relationships across different feature states. In this paper, a novel Urban Spatial Segmentation Framework (UrbanSSF) is proposed, which fully considers the connections between feature states at different phases. Specifically, the Feature State Interaction (FSI) Mamba with powerful sequence modeling capabilities is designed based on state space modules. It effectively facilitates interactions between the information across different features. Given the disparate semantic information and spatial details of features at different scales, a Global Semantic Enhancer (GSE) module and a Spatial Interactive Attention (SIA) mechanism are designed. The GSE module operates on the high-level features, while the SIA mechanism processes the middle- and low-level features. To address the computational challenges of large-scale dense feature fusion, a Channel Space Reconstruction (CSR) algorithm is proposed. This algorithm effectively reduces the computational burden while ensuring efficient processing and maintaining accuracy. In addition, the lightweight UrbanSSF-T, the efficient UrbanSSF-S, and the accurate UrbanSSF-L are designed to meet different application requirements in urban scenarios. Comprehensive experiments on the UAVid, ISPRS Vaihingen, and Potsdam datasets validate the superior performance of the UrbanSSF series. In particular, UrbanSSF-L achieves a mean intersection over union of 71.0% on the UAVid dataset. Code is available at https://github.com/KotlinWang/UrbanSSF. [ABSTRACT FROM AUTHOR]
- Published
- 2025
45. PolSAR2PolSAR: A semi-supervised despeckling algorithm for polarimetric SAR images.
- Author
- Mendes, Cristiano Ulondu, Dalsasso, Emanuele, Zhang, Yi, Denis, Loïc, and Tupin, Florence
- Abstract
Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a valuable tool for Earth observation. This imaging technique finds wide application in various fields, including agriculture, forestry, geology, and disaster monitoring. However, due to the inherent presence of speckle noise, filtering is often necessary to improve the interpretability and reliability of PolSAR data. The effectiveness of a speckle filter is measured by its ability to attenuate fluctuations without introducing artifacts or degrading spatial and polarimetric information. Recent advancements in this domain leverage the power of deep learning. These approaches adopt a supervised learning strategy, which requires a large amount of speckle-free images that are costly to produce. In contrast, this paper presents PolSAR2PolSAR, a semi-supervised learning strategy that only requires, from the sensor under consideration, pairs of noisy images of the same location, acquired in the same configuration (same incidence angle and mode as during the revisit of the satellite on its orbit). Our approach applies to a wide range of sensors. Experiments on RADARSAT-2 and RADARSAT Constellation Mission (RCM) data demonstrate the capacity of the proposed method to effectively reduce speckle noise and retrieve fine details. The code of the trained models is made freely available at https://gitlab.telecom-paris.fr/ring/polsar2polsar. The repository additionally contains a model fine-tuned on SLC PolSAR images from NASA's UAVSAR sensor. [ABSTRACT FROM AUTHOR]
- Published
- 2025
46. Cross-view geolocalization and disaster mapping with street-view and VHR satellite imagery: A case study of Hurricane IAN.
- Author
- Li, Hao, Deuser, Fabian, Yin, Wenping, Luo, Xuanshu, Walther, Paul, Mai, Gengchen, Huang, Wei, and Werner, Martin
- Abstract
Natural disasters play a key role in shaping human-urban infrastructure interactions. An effective and efficient response to natural disasters is essential for building a resilient and sustainable urban environment. Two types of information are usually the most necessary and difficult to gather in disaster response. The first is disaster damage perception: how badly people think urban infrastructure has been damaged. The second is geolocation awareness: how people's whereabouts are made available. In this paper, we propose a novel disaster mapping framework, namely CVDisaster, aimed at simultaneously addressing geolocalization and damage perception estimation using cross-view Street-View Imagery (SVI) and Very High-Resolution satellite imagery. CVDisaster consists of two cross-view models: CVDisaster-Geoloc, a cross-view geolocalization model based on a contrastive learning objective with a Siamese ConvNeXt image encoder, and CVDisaster-Est, a cross-view classification model based on a Coupled Global Context Vision Transformer (CGCViT). Taking Hurricane IAN as a case study, we evaluate the CVDisaster framework by creating a novel cross-view dataset (CVIAN) and conducting extensive experiments. As a result, we show that CVDisaster can achieve highly competitive performance (over 80% for geolocalization and 75% for damage perception estimation) with even limited fine-tuning efforts, which largely motivates future cross-view models and applications within the broader GeoAI research community. The data and code are publicly available at: https://github.com/tum-bgd/CVDisaster. [ABSTRACT FROM AUTHOR]
- Published
- 2025
47. Large-scale rice mapping under spatiotemporal heterogeneity using multi-temporal SAR images and explainable deep learning.
- Author
- Ge, Ji, Zhang, Hong, Zuo, Lijun, Xu, Lu, Jiang, Jingling, Song, Mingyang, Ding, Yinhaibin, Xie, Yazhe, Wu, Fan, Wang, Chao, and Huang, Wenjiang
- Abstract
Timely and accurate mapping of rice cultivation distribution is crucial for ensuring global food security and achieving SDG2. From a global perspective, rice areas display high heterogeneity in spatial pattern and SAR time-series characteristics, posing substantial challenges to the performance, efficiency, and transferability of deep learning (DL) models. Moreover, due to their "black box" nature, DL models often lack interpretability and credibility. To address these challenges, this paper constructs the first SAR rice dataset with spatiotemporal heterogeneity and proposes an explainable, lightweight model for rice area extraction, the eXplainable Mamba UNet (XM-UNet). The dataset is based on 2023 multi-temporal Sentinel-1 data, covering diverse rice samples from the United States, Kenya, and Vietnam. A Temporal Feature Importance Explainer (TFI-Explainer) based on the Selective State Space Model is designed to enhance adaptability to the temporal heterogeneity of rice and the model's interpretability. This explainer, coupled with the DL model, provides interpretations of the importance of SAR temporal features and facilitates crucial time phase screening. To overcome the spatial heterogeneity of rice, an Attention Sandglass Layer (ASL) combining CNN and self-attention mechanisms is designed to enhance local spatial feature extraction. Additionally, the Parallel Visual State Space Layer (PVSSL) uses 2D-Selective-Scan (SS2D) cross-scanning to capture the global spatial features of rice multi-directionally, significantly reducing computational complexity through parallelization. Experimental results demonstrate that XM-UNet adapts well to the spatiotemporal heterogeneity of rice globally, with an OA and F1-score of 94.26% and 90.73%, respectively. The model is extremely lightweight, with only 0.190 M parameters and 0.279 GFLOPs. Mamba's selective scanning facilitates feature screening, and its integration with CNN effectively balances the local and global spatial characteristics of rice. The interpretability experiments prove that the explanations of the importance of temporal features provided by the model are crucial for guiding rice distribution mapping, filling a gap in the related field. The code is available at https://github.com/SAR-RICE/XM-UNet. [ABSTRACT FROM AUTHOR]
- Published
- 2025
48. Joint compression and despeckling by SAR representation learning.
- Author
- Amao-Oliva, Joel, Foix-Colonier, Nils, and Sica, Francescopaolo
- Abstract
Synthetic Aperture Radar (SAR) imagery is a powerful and widely used tool in a variety of remote sensing applications. The increasing number of SAR sensors makes it challenging to process and store such a large amount of data. In addition, as the flexibility and processing power of on-board electronics increase, the challenge of effectively transmitting large images to the ground becomes more tangible and pressing. In this paper, we present a method that uses self-supervised despeckling to learn a SAR image representation that is then used to perform image compression. The intuition that despeckling will additionally improve the compression task is based on the fact that the image representation used for despeckling forms an image prior that preserves the main image features while suppressing the spatially correlated noise component. The same learned image representation, which can already be seen as the output of a data reduction task, is further compressed in a lossless manner. While the two tasks can be solved separately, we propose to simultaneously train our model for despeckling and compression in a self-supervised and multi-objective fashion. The proposed network architecture avoids the use of skip connections by ensuring that the encoder and decoder share only the features generated at the lowest network level, namely the bridge, which is then further transformed into a bitstream. This differs from the usual network architectures used for despeckling, such as the commonly used Deep Residual U-Net. In this way, our network design allows compression and reconstruction to be performed at two different times and locations. The proposed method is trained and tested on real data from the TerraSAR-X sensor (downloaded from https://earth.esa.int/eogateway/catalog/terrasar-x-esa-archive). The experiments show that joint optimization can achieve performance beyond the state of the art for both despeckling and compression, represented here by the MERLIN and JPEG2000 algorithms, respectively. Furthermore, our method has been successfully tested against the cascade of these despeckling and compression algorithms, showing better spatial and radiometric resolution while achieving a better compression rate, e.g., a Peak Signal-to-Noise Ratio (PSNR) always higher than that of the comparison methods at any achieved bits-per-pixel (BPP), and specifically a PSNR gain of more than 2 dB at a compression rate of 0.7 BPP. • Learn a SAR image representation via self-supervised despeckling for compression. • Design a network with separate encoder and decoder for independent processing. • Design a variational autoencoder for the compression of the latent space. • Jointly optimize despeckling and compression for rate-distortion balance. • Improve the self-supervised approach to handle areas with underdeveloped speckle. • Design a testbed to evaluate the method and algorithms for resolution preservation. [ABSTRACT FROM AUTHOR]
- Published
- 2025
49. MLC-net: A sparse reconstruction network for TomoSAR imaging based on multi-label classification neural network.
- Author
- Ouyang, Depeng, Zhang, Yueting, Guo, Jiayi, and Zhou, Guangyao
- Abstract
Synthetic Aperture Radar tomography (TomoSAR) has garnered significant interest for its capability to achieve three-dimensional resolution along the elevation angle by collecting a stack of SAR images from different cross-track angles. Compressed Sensing (CS) algorithms have been widely introduced into SAR tomography. However, traditional CS-based TomoSAR methods suffer from weak noise resistance, high computational complexity, and insufficient super-resolution capabilities. To address the efficient TomoSAR imaging problem, this paper proposes an end-to-end neural network-based TomoSAR inversion method, named the Multi-Label Classification-based Sparse Imaging Network (MLC-net). MLC-net focuses on the l0-norm optimization problem, completely departing from the iterative framework of traditional compressed sensing methods and overcoming the limitations imposed by the l1-norm optimization problem on signal coherence. Simultaneously, the concept of multi-label classification is introduced for the first time in TomoSAR inversion, enabling MLC-net to accurately invert scenarios with multiple scatterers within the same range-azimuth cell. Additionally, a novel evaluation system for TomoSAR inversion results is introduced, transforming inversion results into a 3D point cloud and utilizing mature evaluation methods for 3D point clouds. Under the new evaluation system, the proposed method outperforms existing methods by more than 30%. Finally, by training solely on simulated data, we conducted extensive experimental testing on both simulated and real data, achieving excellent results that validate the effectiveness, efficiency, and robustness of the proposed method. Specifically, the VQA_PC score improved from 91.085 to 92.713. The code of our network is available at https://github.com/OscarYoungDepend/MLC-net. [ABSTRACT FROM AUTHOR]
- Published
- 2025
50. PSO-based fine polarimetric decomposition for ship scattering characterization.
- Author
- Wang, Junpeng, Quan, Sinong, Xing, Shiqi, Li, Yongzhen, Wu, Hao, and Meng, Weize
- Abstract
Due to the inappropriate estimation and inadequate awareness of scattering from complex substructures within ships, a reasonable, reliable, and complete interpretation tool to characterize ship scattering for polarimetric synthetic aperture radar (PolSAR) is still lacking. In this paper, a fine polarimetric decomposition with explicit physical meaning is proposed to reveal and characterize the local-structure-related scattering behaviors of ships. To this end, a nine-component decomposition scheme is first established by incorporating the rotated dihedral and planar resonator scattering models, which makes full use of polarimetric information and comprehensively considers the complex structure scattering of ships. To reasonably estimate the scattering components, three practical scattering dominance principles as well as an explicit objective function are proposed, and a particle swarm optimization (PSO)-based model inversion strategy is subsequently presented. This not only overcomes the underdetermined problem but also mitigates the scattering mechanism ambiguity by circumventing the constrained estimation order. Finally, a ship indicator formed by linearly combining the output scattering contributions is further derived, which constitutes a complete ship scattering interpretation approach along with the proposed decomposition. Experiments carried out with real PolSAR datasets demonstrate that the proposed method adequately and objectively describes the scatterers on ships, providing an effective way to characterize ship scattering. Moreover, the experiments also verify the feasibility of the fine polarimetric decomposition in a further application with the quantitative analysis of scattering components. • Establishing a fine nine-component decomposition to reveal ship scattering behavior. • Formulating a PSO-based model inversion strategy to estimate ship scattering power. • Designing a fine decomposition-based feature to indicate ship scattering significance. [ABSTRACT FROM AUTHOR]
- Published
- 2025