393 results
Search Results
2. IEEE Transactions on Geoscience and Remote Sensing information for authors.
- Subjects
REMOTE sensing, GEOLOGY, EARTH sciences, PERIODICAL publishing
- Abstract
Presents instructions and guidelines for authors preparing papers for this publication. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
3. IEEE Transactions on Geoscience and Remote Sensing information for authors.
- Subjects
REMOTE sensing, GEOLOGY, EARTH sciences
- Abstract
The article offers information on paper submission, manuscript preparation, and copyright for authors of the periodical.
- Published
- 2021
- Full Text
- View/download PDF
4. IEEE Transactions on Geoscience and Remote Sensing information for authors.
- Subjects
REMOTE sensing, GEOLOGY, EARTH sciences, ACQUISITION of manuscripts
- Abstract
Provides instructions and guidelines to prospective authors who wish to submit manuscripts. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. A Computational Electromagnetics and Sparsity-Based Feature Extraction Approach to Ground-Penetrating Radar Imaging.
- Author
- Idriss, Zacharie, Raj, Raghu G., and Narayanan, Ram M.
- Subjects
GROUND penetrating radar, FEATURE extraction, RADAR targets, COMPUTATIONAL electromagnetics, GREEN'S functions, MULTIPLE scattering (Physics)
- Abstract
In this paper, a feature extraction technique based on the electromagnetic (EM) representation of radar signals is presented. In particular, we focus on ground-penetrating radar (GPR) imaging, where we model the backscatter from varying 2-D geometric shapes with arbitrary local coordinate rotations. Due to the electrically small nature of buried targets and the bending of the radar signal at the air–soil interface, we focus on exact methods to model the surface current density induced on scattering surfaces. Overcomplete basis sets are derived from the EM descriptions to represent the scene sparsely. From this proposed modeling framework, we devise a novel methodology to exploit the prediction of scattering behavior to extract features for classification from radar scenes when multiple buried scattering surfaces are present. We see that our method can identify and reconstruct buried scattering geometries in the presence of false targets that are brought about by the nonlinear nature of the exact EM modeling methods. A noniterative algorithm based on the conjugate of Green’s function is developed to solve for the surface current in an unknown domain using multifrequency, multiaperture data. Our modeling and feature extraction algorithms are numerically validated for different target shapes buried in lossy soil profiles. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
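The abstract above relies on representing a radar scene sparsely over an overcomplete basis. A standard way to recover such a sparse coefficient vector is orthogonal matching pursuit (OMP); the following minimal numpy sketch uses a random Gaussian dictionary and a synthetic 3-sparse signal as stand-ins for the paper's EM-derived basis sets, which are not reproduced here.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to explain y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Least-squares fit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.5, -2.0, 0.7]    # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, k=3)
```

With a noise-free measurement and a well-conditioned random dictionary, the greedy support selection recovers the true coefficients exactly.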
6. A New Scheme for Gravity Data Interpretation by a Faulted 2-D Horizontal Thin Block: Theory, Numerical Examples, and Real Data Investigation.
- Subjects
GRAVITY, NUMERICAL analysis, ABSOLUTE value, MATHEMATICAL optimization, GRABENS (Geology)
- Abstract
A nonlinear optimization algorithm has been described for the inversion of a gravity data profile by a faulted 2-D horizontal thin block. The algorithm simultaneously optimizes for the depth to the center ($z$) of the faulted block, the amount and direction of dip ($\theta$) of the fault plane, and the amplitude coefficient ($A$), which is dependent on the amount of throw $t$ and the density contrast $\Delta\rho$ of the block. The objective functional of this algorithm coupled with both the space of logarithmed absolute values of the observed and predicted gravity data and the space of logarithmed [$\log(z)$, $\log(|A|)$, and $\log(\theta)$] model parameters is the basis of the new inversion scheme introduced here. It has been found essential that the objective functional of this scheme be formulated in this particular combination so that the iterative solver/minimizer (the Gauss–Newton (GN) method) can converge. The developed scheme has been successfully verified on numerical models without noise and achieved superior convergence. It is found stable and can determine the inverse parameters of the faulted block with acceptable accuracy when applied to data contaminated with insignificant noise levels and/or geologic interference. In order to investigate the usefulness of the developed scheme, a published gravity profile has been inverted and analyzed, suggesting new results that are of some geologic significance. The computational efficiency, thorough analysis of the investigated numerical examples, and comparisons of the real data inverted in this paper have demonstrated that the scheme developed here is advantageous to the existing gravity data inversion schemes that solve for the characteristic inverse parameters of a faulted thin block. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
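The core numerical idea of this abstract, a Gauss–Newton iteration run in the space of log-transformed parameters against log-magnitude data, can be sketched as follows. The forward model below is a toy anomaly expression invented purely for illustration, not the paper's faulted thin-block formula, and all starting values are arbitrary.

```python
import numpy as np

def forward(x, z, A, theta):
    # Toy 2-D anomaly model standing in for the paper's faulted
    # thin-block expression (hypothetical, illustration only).
    return A * (x * np.sin(theta) + z * np.cos(theta)) / (x**2 + z**2)

def gauss_newton_log(x, g_obs, q0, n_iter=60):
    """Gauss-Newton on log-parameters q = [log z, log A, log theta],
    fitting log|g_pred| to log|g_obs| as the described objective does."""
    t_obs = np.log(np.abs(g_obs))

    def resid(q):
        z, A, theta = np.exp(q)
        return t_obs - np.log(np.abs(forward(x, z, A, theta)))

    q = np.asarray(q0, float)
    for _ in range(n_iter):
        r = resid(q)
        J = np.empty((x.size, 3))            # numerical Jacobian dr/dq
        for j in range(3):
            dq = np.zeros(3)
            dq[j] = 1e-6
            J[:, j] = (resid(q + dq) - r) / 1e-6
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        t = 1.0                              # simple backtracking line search
        while np.sum(resid(q - t * step)**2) > np.sum(r**2) and t > 1e-6:
            t *= 0.5
        q = q - t * step
    return np.exp(q)

x = np.linspace(0.5, 10.0, 40)
g_obs = forward(x, 2.0, 5.0, 0.6)            # synthetic noise-free profile
z_hat, A_hat, th_hat = gauss_newton_log(x, g_obs, np.log([1.0, 2.0, 0.3]))
```

Working in log-space keeps $z$, $A$, and $\theta$ positive by construction, which is one plausible reason the abstract reports that this particular formulation is what lets GN converge.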
7. A Feature Fusion-Net Using Deep Spatial Context Encoder and Nonstationary Joint Statistical Model for High-Resolution SAR Image Classification.
- Author
- Liang, Wenkai, Wu, Yan, Li, Ming, and Cao, Yice
- Subjects
STATISTICAL models, SYNTHETIC aperture radar, DISTRIBUTION (Probability theory), STATISTICS, GABOR transforms, GAUSSIAN distribution, CHANNEL coding, INTRACLASS correlation
- Abstract
The nonstationary and non-Gaussian distribution of the high-resolution (HR) synthetic aperture radar (SAR) image provides much valuable information. However, the current methods, especially deep learning models, directly learn spatial features from HR SAR data while ignoring global statistical information. Combining the local spatial features and global statistical properties of HR SAR images is urgently needed to capture complete HR SAR characteristics. In this paper, a feature fusion network (Fusion-Net) using both a deep spatial context encoder and a nonstationary joint statistical model (NS-JSM) is proposed for the first time. Fusion-Net realizes the fusion description of local spatial and global statistical features in an end-to-end supervised classification framework. First, a deep spatial context encoder network (DSCEN) is designed based on a multiscale group convolution (MSGC) module and a channel attention (CA) module. The DSCEN expands the scope of context information extraction with few parameters and increases the interaction between high-level feature channels. Then, the NS-JSM is adopted to capture the unique SAR statistical information. Specifically, the SAR image is transformed into the Gabor wavelet domain. The produced sub-band magnitudes and phases are modeled by the log-normal and uniform distributions, respectively. The covariance matrix (CM) is calculated for mapped sub-band data to capture the interscale and intrascale nonstationary correlation. Finally, the group compression and smooth normalization units are introduced into Fusion-Net to fuse the statistical features and spatial features, which not only exploits the complementary information between different features but also optimizes the fusion feature representation. Experiments on four real HR SAR images validate the superiority of the proposed method over other related algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
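The statistical half of the abstract's pipeline (log-normal sub-band magnitudes plus a covariance matrix over the mapped data) has a compact numerical core. The sketch below uses three synthetic "sub-bands" in place of actual Gabor-filter outputs; taking logs turns the assumed log-normal magnitudes into Gaussians, after which the interscale covariance is an ordinary sample covariance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Stand-in "sub-band magnitudes": three synthetic sub-bands whose
# magnitudes are log-normally distributed and share a common factor,
# mimicking interscale correlation (illustrative, not real Gabor output).
base = rng.standard_normal(n)
subbands = np.exp(np.stack([0.5 * base + 0.1 * rng.standard_normal(n),
                            0.8 * base + 0.1 * rng.standard_normal(n),
                            0.3 * base + 0.1 * rng.standard_normal(n)]))

logm = np.log(subbands)        # log-normal -> Gaussian in the log domain
mu = logm.mean(axis=1)         # per-sub-band log-normal location fit
sigma = logm.std(axis=1)       # per-sub-band log-normal scale fit
C = np.cov(logm)               # interscale covariance of the mapped data
```

The shared `base` factor shows up as positive off-diagonal entries of `C`, which is exactly the kind of interscale dependence the NS-JSM is built to capture.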
8. Unsupervised Domain Adaptation for Cloud Detection Based on Grouped Features Alignment and Entropy Minimization.
- Author
- Guo, Jianhua, Yang, Jingyu, Yue, Huanjing, and Li, Kun
- Subjects
ENTROPY, CONVOLUTIONAL neural networks, SUPERVISED learning, REMOTE sensing, REMOTE-sensing images
- Abstract
Most convolutional neural network (CNN)-based cloud detection methods are built upon the supervised learning framework that requires a large number of pixel-level labels. However, it is expensive and time-consuming to manually annotate pixelwise labels for massive remote sensing images. To reduce the labeling cost, we propose an unsupervised domain adaptation (UDA) approach to generalize the model trained on labeled images of source satellite to unlabeled images of the target satellite. To effectively address the domain shift problem on cross-satellite images, we develop a novel UDA method based on grouped features alignment (GFA) and entropy minimization (EM) to extract domain-invariant representations to improve the cloud detection accuracy of cross-satellite images. The proposed UDA method is evaluated on “Landsat-8 $\rightarrow$ ZY-3” and “GF-1 $\rightarrow$ ZY-3” domain adaptation tasks. Experimental results demonstrate the effectiveness of our method against existing state-of-the-art UDA approaches. The code of this paper has been made available online (https://github.com/nkszjx/grouped-features-alignment). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
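The entropy-minimization (EM) half of this UDA recipe reduces to a simple loss: the Shannon entropy of the model's softmax output on unlabeled target pixels, which is low when predictions are confident. A minimal numpy sketch (the logits here are hand-made, not from any actual cloud-detection network):

```python
import numpy as np

def entropy_loss(logits):
    """Mean per-pixel Shannon entropy of softmax predictions.
    Minimizing it pushes the model toward confident (low-entropy)
    predictions on unlabeled target-domain pixels."""
    z = logits - logits.max(axis=-1, keepdims=True)   # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

confident = np.array([[8.0, 0.0], [0.0, 8.0]])   # near one-hot -> low entropy
uncertain = np.array([[0.0, 0.0], [0.1, 0.0]])   # near uniform -> high entropy
```

For a two-class pixel the entropy is bounded by $\log 2 \approx 0.693$, attained at the uniform prediction; gradient descent on this quantity sharpens the target-domain decision boundaries without needing labels.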
9. DDU-Net: Dual-Decoder-U-Net for Road Extraction Using High-Resolution Remote Sensing Images.
- Author
- Wang, Ying, Peng, Yuexing, Li, Wei, Alexandropoulos, George C., Yu, Junchuan, Ge, Daqing, and Xiang, Wei
- Subjects
ARTIFICIAL neural networks, REMOTE sensing
- Abstract
Extracting roads from high-resolution remote sensing images (HRSIs) is vital in a wide variety of applications, such as autonomous driving, path planning, and road navigation. Due to their long, thin shape as well as the shadows cast by vegetation and buildings, small-sized roads are more difficult to discern. In order to improve the reliability and accuracy of small-sized road extraction when roads of multiple sizes coexist in an HRSI, an enhanced deep neural network model termed dual-decoder-U-net (DDU-Net) is proposed in this article. Motivated by the U-Net model, a small decoder is added to form a dual-decoder structure for more detailed features. In addition, we introduce the dilated convolution attention module (DCAM) between the encoder and decoders to increase the receptive field as well as to distill multiscale features through cascading dilated convolution and global average pooling. The convolutional block attention module (CBAM) is also embedded in the parallel dilated convolution and pooling branches to capture more attention-aware features. Extensive experiments are conducted on the Massachusetts Roads dataset with experimental results showing that the proposed model outperforms the state-of-the-art DenseUNet, DeepLabv3+, and D-LinkNet by 6.5%, 3.3%, and 2.1% in the mean intersection over union (mIoU), and by 4%, 4.8%, and 3.1% in the F1 score, respectively. Both ablation and heatmap analysis are presented to validate the effectiveness of the proposed model. Moreover, the designed small decoder and introduced DCAM can be used as a portable module to be embedded in other U-Net-like models with encoder–decoder structure to enhance the road detection performance, especially for small-sized roads. The high portability of the designed module is validated by embedding it in the LinkNet, which greatly improves the road segmentation performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
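The DCAM's key ingredient, dilated convolution, enlarges the receptive field without adding weights: the same taps are simply spaced further apart. A self-contained 1-D sketch (illustrative only, unrelated to the paper's actual layers) makes the effect visible on a unit impulse:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Same'-padded 1-D dilated convolution: the k taps of w are spaced
    `dilation` samples apart, so the receptive field grows to
    dilation*(k-1)+1 with no extra weights."""
    k = len(w)
    span = dilation * (k - 1)                  # receptive field minus one
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))
    return np.array([sum(w[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

x = np.zeros(11)
x[5] = 1.0                                     # unit impulse
w = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, w, dilation=1)          # 3 taps touch 3 samples
y3 = dilated_conv1d(x, w, dilation=3)          # same 3 taps span 7 samples
```

Cascading several such layers with growing dilation, as the DCAM description suggests, compounds the receptive-field growth geometrically while the parameter count stays linear.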
10. BSNet: Dynamic Hybrid Gradient Convolution Based Boundary-Sensitive Network for Remote Sensing Image Segmentation.
- Author
- Hou, Jianlong, Guo, Zhi, Wu, Youming, Diao, Wenhui, and Xu, Tao
- Subjects
REMOTE sensing, DATA mining, INFORMATION modeling, FEATURE extraction, IMAGE enhancement (Imaging systems)
- Abstract
Boundary information is essential for the semantic segmentation of remote sensing images. However, most existing methods were designed to establish strong contextual information while losing detailed information, making it challenging to extract and recover boundaries accurately. In this article, a boundary-sensitive network (BSNet) is proposed to address this problem via dynamic hybrid gradient convolution (DHGC) and coordinate sensitive attention (CSA). Specifically, in the feature extraction stage, we propose DHGC to replace vanilla convolution (VC), which adaptively aggregates one VC kernel and two gradient convolution kernels (GCKs) into a new operator to enhance boundary information extraction. The GCKs are proposed to explicitly encode boundary information, which is inspired by traditional Sobel operators. In the feature recovery stage, the CSA is introduced. This module is used to reconstruct the sharp and detailed segmentation results by adaptively modeling the boundary information and long-range dependencies in the low-level features with the assistance of high-level features. Note that DHGC and CSA are plug-and-play modules. We evaluate the proposed BSNet on three public datasets: the ISPRS 2-D semantic labeling Vaihingen, the Potsdam benchmark, and the iSAID dataset. The experimental results indicate that BSNet is a highly effective architecture that produces sharper predictions around object boundaries and significantly improves the segmentation accuracy. Our method demonstrates superior performance on the Vaihingen, the Potsdam benchmark, and the iSAID dataset in terms of the mean $F_{1}$, with improvements of 4.6%, 2.3%, and 2.4% over strong baselines, respectively. The code and models will be made publicly available. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
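The DHGC idea, aggregating one learned kernel with two fixed Sobel-inspired gradient kernels into a single operator, can be sketched directly. In the sketch below the mixing coefficients `alpha` are a plain vector; in the paper they would be produced dynamically by the network, and the "vanilla" kernel would be learned rather than zero.

```python
import numpy as np

# Fixed gradient-encoding kernels (classic Sobel operators).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def hybrid_kernel(vanilla, alpha):
    """Aggregate a 'vanilla' 3x3 kernel with the two gradient kernels
    into one operator; alpha = (a_vc, a_gx, a_gy)."""
    a_vc, a_gx, a_gy = alpha
    return a_vc * vanilla + a_gx * SOBEL_X + a_gy * SOBEL_Y

def conv2d_valid(img, k):
    """Plain 'valid' 2-D correlation with a 3x3 kernel."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

img = np.zeros((6, 6))
img[:, 3:] = 1.0                                       # vertical step edge
k = hybrid_kernel(np.zeros((3, 3)), (0.0, 1.0, 0.0))   # pure Sobel-x branch
resp = conv2d_valid(img, k)
```

Because the three kernels are summed before being applied, the hybrid operator costs exactly one convolution at inference time, which is what makes such a module plug-and-play.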
11. Error Analysis for Digital Beamforming Synthetic Aperture Radars: A Comparison of Phased Array and Array-Fed Reflector Systems.
- Author
- Huber, Sigurd, Younis, Marwan, Krieger, Gerhard, and Moreira, Alberto
- Subjects
PHASED array radar, PHASED array antennas, REFLECTOR antennas, PLANAR antennas, SYNTHETIC apertures, BEAMFORMING, SYNTHETIC aperture radar
- Abstract
Modern synthetic aperture radar (SAR) systems for Earth observation from space employ innovative hardware concepts. The key idea is to digitize the output of a multielement antenna almost immediately after the receiver and to dynamically process these data either onboard the radar satellite in real time or on the ground. This article addresses the performance of such digital beamforming (DBF) systems in the presence of phase and magnitude errors in the digital channels. For this, analytic expressions for the sensitivity and range ambiguity performance are derived. These equations are kept general so that they are valid for both planar array antennas and array-fed reflector antennas. It is an important objective of this article to compare these two antenna types to each other. A major conclusion from this analysis is that direct-radiating phased arrays are inherently more susceptible to random phase and magnitude errors compared with array-fed reflector antenna-based systems. This manifests itself in a more rapid degradation of the imaging performance with phased array antennas. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
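The error mechanism this abstract analyzes can be demonstrated numerically: perturb the per-channel weights of a digitally beamformed array with random gain and phase errors and compare the resulting pattern with the ideal one. The array size, error levels, and geometry below are illustrative choices, not the paper's system parameters.

```python
import numpy as np

def array_pattern(weights, d, angles_deg, wavelength=1.0):
    """Far-field pattern magnitude of an N-element uniform linear array
    with element spacing d (in wavelengths when wavelength=1)."""
    n = np.arange(len(weights))
    theta = np.deg2rad(angles_deg)
    steer = 2j * np.pi * d / wavelength * np.outer(np.sin(theta), n)
    return np.abs(np.exp(steer) @ weights)

rng = np.random.default_rng(2)
N, d = 32, 0.5
angles = np.linspace(-90, 90, 721)
w_ideal = np.ones(N) / N                       # uniform broadside weighting

# Per-channel errors: 0.5 dB rms magnitude, 5 deg rms phase (illustrative).
gain = 10 ** (0.5 / 20 * rng.standard_normal(N))
phi_err = np.deg2rad(5) * rng.standard_normal(N)
w_err = w_ideal * gain * np.exp(1j * phi_err)

p_ideal = array_pattern(w_ideal, d, angles)
p_err = array_pattern(w_err, d, angles)
```

For a direct-radiating array every channel error maps straight into the pattern, raising the sidelobe floor; an array-fed reflector spatially filters the feed excitation, which is consistent with the abstract's conclusion that reflector systems degrade more slowly.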
12. Corrections to “Iterative Atmospheric Phase Screen Compensation for Near-Real-Time Ground-Based InSAR Measurements Over a Mountainous Slope”.
- Author
- Izumi, Yuta, Zou, Lilong, Kikuta, Kazutaka, and Sato, Motoyuki
- Subjects
INFORMATION display systems, MEASUREMENT
- Abstract
In the above paper, there are errors in the following two: [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Resolution Enhancement for Large-Scale Real Beam Mapping Based on Adaptive Low-Rank Approximation.
- Author
- Zhang, Yongchao, Luo, Jiawei, Zhang, Yongwei, Huang, Yulin, Cai, Xiaochun, Yang, Jianyu, Mao, Deqing, Li, Jie, Tuo, Xingyu, and Zhang, Yin
- Subjects
MICROWAVE remote sensing, SINGULAR value decomposition, MATRIX inversion, VECTOR spaces, RANDOM matrices
- Abstract
Recently, a variety of super-resolution (SR) methods have been devoted to enhancing the angular resolution of real beam mapping (RBM) imagery in modern microwave remote sensing applications. When addressing large-scale datasets, however, they suffer from notably high computational complexity due to high-dimensional matrix inversion, multiplication, or singular value decomposition (SVD). To overcome this limitation, this article presents a low-complexity SR strategy based on adaptive low-rank approximation (LRA). Our underlying idea is first to construct a random matrix sketching to sample the raw echo measurements and restore the surface map of reflectivity in a low-dimensional linear space. The resulting low-complexity strategy enables substantial computational complexity reduction for a group of SR methods, at the cost of introducing a manually adjusted LRA parameter. Using the Fourier transform-based antenna analysis method, we further reveal that the LRA parameter that ensures support resolution improvement can be determined by a closed-form function of the aperture length, the wavelength, and the field of view, allowing for adaptively and efficiently selecting the optimal LRA parameter that well balances the tradeoff between LRA error and computational efficiency. We use both simulated and real datasets to demonstrate that the proposed LRA-based SR strategy can provide significant speedup without performance loss. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. On the Relationship Between Stickiness in DMRT Theory and Physical Parameters of Snowpack: Theoretical Formulation and Experimental Validation With SNOWPACK Snow Model and X-Band SAR Data.
- Author
- Pilia, Simone, Baroni, Fabrizio, Lapini, Alessandro, Paloscia, Simonetta, Pettinato, Simone, Santi, Emanuele, Pampaloni, Paolo, Valt, Mauro, and Monti, Fabiano
- Subjects
SYNTHETIC aperture radar, MIE scattering, APPROXIMATION theory, SPHERE packings, RADIATIVE transfer, FRACTIONS
- Abstract
This study aims at relating the stickiness parameter ($\tau$) of the dense media radiative transfer theory in quasi-crystalline approximation of Mie scattering of densely packed sticky spheres (DMRT-QMS) to the physical parameters of the layered snowpack. A relationship has been derived to express $\tau$, which modulates the attractive contact force between ice spheres, as a function of ice volume fraction ($\phi$) and coordination number ($n_{c}$). Since $\tau$ is not a measurable parameter, this is a step forward with respect to what is commonly done in the literature, where $\tau$ is assumed as an arbitrary parameter, generally ranging between 0.1 and 0.3, to fit simulated backscattering data with those measured. As a first validation, DMRT-QMS was integrated with the SNOWPACK model to simulate backscattering at X-band (9.6 GHz) driven by nivo-meteorological data acquired on a test area located in Monti Alti di Ornella, Italy. The simulations were compared with Synthetic Aperture Radar COSMO-SkyMed (CSK) satellite observations. The results show a significant agreement ($R^{2}=0.68$), although for a limited dataset of eight points in a single winter season. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. A 3-D-Swin Transformer-Based Hierarchical Contrastive Learning Method for Hyperspectral Image Classification.
- Author
- Huang, Xin, Dong, Mengjie, Li, Jiayi, and Guo, Xian
- Subjects
IMAGE representation, CONVOLUTIONAL neural networks, SUPERVISED learning, IMAGE analysis, COMPLEX variables
- Abstract
Deep convolutional neural networks have been dominating the field of hyperspectral image (HSI) classification. However, a single convolutional kernel limits the receptive field and fails to capture the sequential properties of data. The self-attention-based Transformer can build global sequence information, among which the Swin Transformer (SwinT) integrates sequence modeling capability and prior information of the visual signals (e.g., locality and translation invariance). Based on SwinT, we propose a 3-D SwinT (3DSwinT) to accommodate the 3-D properties of HSI and capture the rich spatial–spectral information of HSI. Currently, supervised learning is still the most commonly used method for remote sensing image interpretation. However, pixel-by-pixel HSI classification demands a large number of high-quality labeled samples that are time-consuming and costly to collect. As a form of unsupervised learning, self-supervised learning (SSL), especially contrastive learning, can learn semantic representations from unlabeled data and, hence, is becoming a potential alternative to supervised learning. On the other hand, current contrastive learning methods are all single-level or single-scale, which do not consider the complex and variable multiscale features of objects. Therefore, this article proposes a novel 3DSwinT-based hierarchical contrastive learning (3DSwinT-HCL) method, which can fully exploit multiscale semantic representations of images. Besides, we propose a multiscale local contrastive learning (MS-LCL) module to mine pixel-level representations in order to adapt to downstream dense prediction tasks. A series of experiments verify the great potential and superiority of 3DSwinT-HCL. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
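Contrastive learning of the kind this abstract builds on typically optimizes an InfoNCE-style objective: matched views of the same sample should be more similar than mismatched pairs. A minimal numpy sketch of the loss (the feature matrices are synthetic; the paper's hierarchical, multiscale variant is not reproduced here):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style contrastive loss: row i of z1 and row i of z2 are two
    views of the same sample (positives); all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                   # temperature-scaled cosine sims
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(logp)))      # positives sit on the diagonal

rng = np.random.default_rng(4)
anchor = rng.standard_normal((8, 64))
aligned = anchor + 0.01 * rng.standard_normal((8, 64))   # views agree
shuffled = rng.standard_normal((8, 64))                  # views unrelated
```

A hierarchical scheme like the one described would apply such a loss at several feature scales (image-level and pixel-level) and sum the terms.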
16. A Feature Decomposition-Based Method for Automatic Ship Detection Crossing Different Satellite SAR Images.
- Author
- Zhao, Siyuan, Luo, Ying, Zhang, Tao, Guo, Weiwei, and Zhang, Zenghui
- Subjects
SYNTHETIC aperture radar, REMOTE-sensing images, FEATURE extraction, SUPERVISED learning, DECOMPOSITION method
- Abstract
In the face of synthetic aperture radar (SAR) image object detection with different distributions of training and test data, traditional supervised learning methods cannot achieve good detection performance. Domain adaptation (DA) methods have been shown to be able to solve this problem, but existing DA object detection algorithms all use adversarial DA theory for the detection task, which is ineffective for object regression localization. In this article, to better solve the above problem, an automatic SAR image ship detection method based on feature decomposition crossing different satellites is proposed. The feature extraction layer of the backbone network is divided into low level and high level, where domain-invariant feature (DIF) extractors are designed for the local features extracted from the low level and the global features extracted from the high level, respectively. We argue that the local and global features extracted from the source domain and target domain contain domain-specific features (DSFs) for adversarial DA and DIFs that contribute to object regression localization. Then, we decompose the local features and global features into DSF and DIF via a vector decomposition method. For the DSF counterpart, we introduce adversarial DA attention for feature alignment. DIFs from the local features are fused into the backbone network for high-level global feature extraction. Finally, using a region proposal network and an adversarial domain classifier, we can get the accurate bounding box and object class of SAR image objects. Extensive experiments prove that the proposed method outperforms the state-of-the-art methods in terms of detection performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
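One simple way to realize a "vector decomposition" into domain-specific and domain-invariant parts is a linear projection: the component of each feature along an estimated domain-shift direction is domain-specific, and the orthogonal remainder is domain-invariant. This is only a linear stand-in for the paper's decomposition, with synthetic source/target features:

```python
import numpy as np

def decompose(features, domain_axis):
    """Split each feature vector into a domain-specific component (its
    projection onto the domain direction) and a domain-invariant
    remainder (the orthogonal part)."""
    u = domain_axis / np.linalg.norm(domain_axis)
    dsf = np.outer(features @ u, u)      # component along the domain direction
    dif = features - dsf                 # orthogonal: domain-invariant part
    return dsf, dif

rng = np.random.default_rng(5)
# Synthetic source/target features separated by a shift in one coordinate.
src = rng.standard_normal((100, 8)) + np.array([3.0] + [0.0] * 7)
tgt = rng.standard_normal((100, 8)) - np.array([3.0] + [0.0] * 7)
axis = src.mean(axis=0) - tgt.mean(axis=0)   # estimated domain-shift direction
dsf_s, dif_s = decompose(src, axis)
dsf_t, dif_t = decompose(tgt, axis)
```

After the split, the mean gap between source and target vanishes in the domain-invariant part, which is the component one would feed to the localization head, while the domain-specific part goes to the adversarial branch.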
17. All Grains, One Scheme (AGOS): Learning Multigrain Instance Representation for Aerial Scene Classification.
- Author
- Bi, Qi, Zhou, Beichen, Qin, Kun, Ye, Qinghao, and Xia, Gui-Song
- Subjects
CONVOLUTIONAL neural networks, COMPUTER vision
- Abstract
Aerial scene classification remains challenging as: 1) the size of key objects in determining the scene scheme varies greatly and 2) many objects irrelevant to the scene scheme are often flooded in the image. Hence, how to effectively perceive the regions of interest (RoIs) from a variety of sizes and build more discriminative representation from such complicated object distribution is vital to understand an aerial scene. In this article, we propose a novel all grains, one scheme (AGOS) framework to tackle these challenges. To the best of our knowledge, it is the first work to extend the classic multiple instance learning (MIL) into a multigrain formulation. Specifically, it consists of a multigrain perception (MGP) module, a multibranch multi-instance representation (MBMIR) module, and a self-aligned semantic fusion (SSF) module. First, our MGP module preserves the differential dilated convolutional features from the backbone, which magnifies the discriminative information from multigrains. Then, our MBMIR module highlights the key instances in the multigrain representation under the MIL formulation. Finally, our SSF module allows our framework to learn the same scene scheme from multigrain instance representations and fuses them, so that the entire framework is optimized as a whole. Notably, our AGOS is flexible and can be easily adapted to existing convolutional neural networks (CNNs) in a plug-and-play manner. Extensive experiments on UCM, aerial image dataset (AID), and Northwestern Polytechnical University (NWPU) benchmarks demonstrate that our AGOS achieves a comparable performance against the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
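The MIL step described above, highlighting key instances within a bag of candidate regions, is commonly implemented with attention-based pooling: score each instance, softmax the scores, and average the instances with those weights. A minimal sketch with random weights standing in for learned parameters (this is generic attention MIL, not the paper's multibranch module):

```python
import numpy as np

def attention_mil_pool(instances, w, V):
    """Attention-based MIL pooling: score each instance with a small
    two-layer scorer, softmax the scores, and return the attention-weighted
    bag embedding together with the attention weights."""
    scores = np.tanh(instances @ V) @ w          # one score per instance
    a = np.exp(scores - scores.max())
    a /= a.sum()                                 # softmax over the bag
    return a @ instances, a

rng = np.random.default_rng(7)
d, h = 16, 8
V = rng.standard_normal((d, h))                  # stand-in learned projection
w = rng.standard_normal(h)                       # stand-in learned scorer
bag = rng.standard_normal((20, d))               # 20 candidate instances
emb, att = attention_mil_pool(bag, w, V)
```

Because the weights form a distribution over instances, the pooled embedding is dominated by the few instances the scorer deems relevant, which is the mechanism for suppressing scene-irrelevant objects.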
18. COCO-Net: A Dual-Supervised Network With Unified ROI-Loss for Low-Resolution Ship Detection From Optical Satellite Image Sequences.
- Author
- Xu, Qizhi, Li, Yuan, Zhang, Mingjin, and Li, Wei
- Subjects
OPTICAL remote sensing, REMOTE-sensing images, OPTICAL images, REMOTE sensing, RADARSAT satellites, SHIPS, LANDSAT satellites, CLOUDINESS
- Abstract
Low-resolution ship detection from optical satellite image sequences is critical in high-orbit remote sensing satellite applications. However, it is still a difficult problem due to the following challenges: 1) the size of the ship is tiny in the low-resolution image; 2) the ship target is dim and its contrast with the background is low; and 3) the interference of cloud and fog cover is complex and changeable. For these reasons, the targets are easily lost during detection. In fact, the Clearer the Objects are against the background, the more Confidently the Observers can detect them. In light of these considerations, we propose COCO-Net to detect small dynamic objects in low-resolution images in this article. First, the multiframe images are associated by introducing motion information as an effective compensation for small object features. Second, an integrated dual-supervised network that processes single-level tasks hierarchically is presented to adaptively enhance the input data quality of object detection without being limited by diverse scene disturbances. Third, a unified region of interest (ROI)-loss scheme that modulates the loss function of the first component by introducing ROI-masks from the second component is utilized to make the first component also work for object detection. In addition, we construct a new dataset for small dynamic object detection based on GaoFen-4 satellite imagery. Comprehensive experiments on this self-assembled dataset from the GaoFen-4 satellite show the superior performance of the proposed method compared to state-of-the-art object detectors. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Flash Floods Prediction Using Precipitable Water Vapor Derived From GPS Tropospheric Path Delays Over the Eastern Mediterranean.
- Author
- Ziv, Shlomi Ziskin and Reuveni, Yuval
- Subjects
PRECIPITABLE water, GLOBAL Positioning System, FLOOD warning systems, ARID regions, RAINFALL, WATER use, TROPOSPHERIC chemistry
- Abstract
A flash flood is a rapid and intense response of a drainage area to heavy rainfall events. In the arid and semiarid parts of the Eastern Mediterranean (EM) region, the spatiotemporal distribution of rainfall is the most important factor for flash flood generation. A possible precursor to heavy rainfall events is the rise in tropospheric water vapor amount, which can be remotely sensed using ground-based global navigation satellite system (GNSS) stations. Here, we use the precipitable water vapor (PWV) derived from nine GNSS ground-based stations in the arid part of the EM region in order to predict flash floods. Our approach includes using three types of machine learning (ML) models in a binary classification task, which predicts whether a flash flood will occur given 24 h of PWV data. We train our models with 107 unique flash flood events and rigorously test them using a nested cross-validation technique. The results indicate a good agreement between all three types of models and across various score metrics. In addition, the models are further improved by adding more features such as surface pressure measurements. Finally, a feature importance analysis shows that the most important features are the PWV values from 2 to 6 h prior to a flash flood. These promising results indicate that it is possible to augment the current flash flood warning systems with a near real-time GNSS ground-based data-driven approach as demonstrated in this work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
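Nested cross-validation, as used in this abstract, separates hyperparameter selection (inner loop) from performance estimation (outer loop) so the reported score never sees the data that chose the hyperparameters. The sketch below is a deliberately simplified stand-in: a toy threshold rule on the mean of a synthetic 24-h "PWV" window replaces the paper's ML models and real data.

```python
import numpy as np

def predict(X, thr):
    """Toy classifier standing in for the paper's ML models: flag a flash
    flood when the mean of the 24-h PWV window exceeds a threshold."""
    return (X.mean(axis=1) > thr).astype(int)

def nested_cv(X, y, thresholds, k_outer=5, k_inner=4, seed=0):
    """Nested CV: the inner loop picks the threshold, the outer loop
    scores that choice on data the selection never saw."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    outer_folds = np.array_split(idx, k_outer)
    scores = []
    for i in range(k_outer):
        test = outer_folds[i]
        train = np.concatenate([f for j, f in enumerate(outer_folds) if j != i])
        inner_folds = np.array_split(train, k_inner)

        def inner_score(thr):
            # Mean validation accuracy over the inner folds of `train`.
            return np.mean([(predict(X[val], thr) == y[val]).mean()
                            for val in inner_folds])

        best_thr = max(thresholds, key=inner_score)
        scores.append(float((predict(X[test], best_thr) == y[test]).mean()))
    return np.mean(scores)

rng = np.random.default_rng(6)
n = 200
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 24)) + 2.0 * y[:, None]   # PWV rises before floods
acc = nested_cv(X, y, thresholds=[0.0, 0.5, 1.0, 1.5, 2.0])
```

Swapping the toy `predict` for a real model with a fit step (and the threshold grid for its hyperparameter grid) gives the standard nested-CV protocol the paper describes.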
20. Local Semantic Feature Aggregation-Based Transformer for Hyperspectral Image Classification.
- Author
- Tu, Bing, Liao, Xiaolong, Li, Qianming, Peng, Yishu, and Plaza, Antonio
- Subjects
CONVOLUTIONAL neural networks, CLASSIFICATION
- Abstract
Hyperspectral images (HSIs) contain abundant information in the spatial and spectral domains, allowing for a precise characterization of categories of materials. Convolutional neural networks (CNNs) have achieved great success in HSI classification, owing to their excellent ability in local contextual modeling. However, CNNs suffer from fixed filter weights and deep convolutional layers, which lead to a limited receptive field and high computational burden. The recent vision transformer (ViT) models long-range dependencies with a self-attention mechanism and has become an alternative to the CNN backbones traditionally used in HSI classification. However, such transformer-based architectures designate all the input pixels of the receptive field as feature tokens in terms of feature embedding and self-attention, which inevitably limits the ability to learn multiscale features and increases the computational cost. To overcome this issue, we propose a local semantic feature aggregation-based transformer (LSFAT) architecture which allows transformers to represent long-range dependencies of multiscale features more efficiently. We introduce the concept of the homogeneous region into the transformer by considering a pixel aggregation strategy and further propose neighborhood-aggregation-based embedding (NAE) and attention (NAA) modules, which are able to adaptively form multiscale features and capture locally spatial semantics among them in a hierarchical transformer architecture. A reusable classification token is included together with the feature tokens in the attention calculation. In the last stage, a fully connected layer is used to perform classification on the reusable token after transformer encoding. We verify the effectiveness of the NAE and NAA modules compared with the traditional ViT through extensive experiments. Our results demonstrate the excellent classification performance of the proposed method in comparison to other state-of-the-art approaches on several public HSIs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Learning Orientation Information From Frequency-Domain for Oriented Object Detection in Remote Sensing Images.
- Author
- Zheng, Shangdong, Wu, Zebin, Xu, Yang, Wei, Zhihui, and Plaza, Antonio
- Subjects
OBJECT recognition (Computer vision), REMOTE sensing, FEATURE extraction, FEATURE selection, SIGNAL processing
- Abstract
Object detection in remote sensing images (RSIs) poses great difficulties due to arbitrary orientations, various scales, and dense location of the targets over the ground. Recent evidence suggests that encoding the orientation information is of great use for training an accurate object detector for oriented object detection (OOD). In this article, we propose a new frequency-domain orientation learning (FDOL) module with two main components: the frequency-domain feature extraction (FFE) network and an orientation enhanced self-attention layer (OES-Layer). The FFE network models the interactions among spatial locations in the frequency domain to determine the frequency of spatial features. Then, these features are fed into our OES-Layer to learn the orientation information. Moreover, the orientation weights are adopted to guide the feature selection in a self-attention (SA) architecture, using them as a control gate to emphasize the spatial responses of target instances. Considering that the original similarity weights (calculated by the SA algorithm) do not distinctly model the orientation variation, the considered orientation weights provide an efficient asset to emphasize the orientation of objects. Extensive experiments on the DOTA and HRSC2016 datasets demonstrate that our method achieves state-of-the-art performance among single-scale methods while achieving competitive performance over multiscale methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
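The abstract above does not give the FFE network's implementation; as a toy illustration of how frequency-domain coefficients encode orientation, the sketch below (hypothetical, not the authors' code) computes a naive 2-D DFT of a small patch and shows that a vertically striped pattern concentrates its energy along a single frequency axis.

```python
import cmath

def dft2(patch):
    """Naive 2-D DFT of a small n-by-n patch (O(n^4); for illustration only)."""
    n = len(patch)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                patch[x][y] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
                for x in range(n) for y in range(n)
            )
    return out

# Vertical stripes (period 2 along the column index): an oriented pattern.
patch = [[1.0 if y % 2 == 0 else 0.0 for y in range(8)] for x in range(8)]
spectrum = dft2(patch)
```

Energy appears at (u, v) = (0, 4) but not at (4, 0), so the location of the dominant frequency component reveals the stripe orientation.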
22. A Range-Doppler Method for Focusing Radar Sounder Data Generated by Coherent Electromagnetic Simulators.
- Author
- Sbalchiero, Elisa, Thakur, Sanchari, and Bruzzone, Lorenzo
- Subjects
- RADAR, SYNTHETIC aperture radar, SPACE-based radar
- Abstract
Radar sounders (RSs) are gaining importance in planetary missions thanks to their unique capability of providing direct measurements of subsurface (SS) structures. To support their design and data interpretation, several electromagnetic (e.m.) simulation techniques have been developed with enhanced capabilities for emulating the RS acquisition process. However, the raw simulated radargrams obtained from e.m. simulators are difficult to interpret and analyze without a focusing operation, which results in an underestimation of the RS detection performance. While frequency methods for range and azimuth compression of real RS data are well-established, their use on simulated data is not addressed in the literature and requires major modifications. This article presents a novel method that implements azimuth compression using unfocused and focused processing on simulated raw data. The proposed method is based on an adaptation of the range-Doppler algorithm to the case of raw data generated by a coherent RS simulator. The method is demonstrated in three case studies to show the similarity between simulated and real data processing: 1) simple geometries; 2) a simulated SHAllow RADar (SHARAD) radargram compared with the real data product; and 3) a real application scenario for supporting the design of a new RS instrument. The results indicate higher fidelity of the focused simulated data with the real data product and the target structure, confirming the usefulness of the proposed approach in obtaining realistic processing of simulated radargrams. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
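Range compression, the first stage of a range-Doppler chain, is matched filtering of the received signal with the transmitted chirp. A minimal time-domain sketch (hypothetical chirp parameters, not the paper's SHARAD processing) is:

```python
import cmath

def chirp(n, rate):
    """Baseband linear FM chirp: phase = pi * rate * t^2."""
    return [cmath.exp(1j * cmath.pi * rate * t * t) for t in range(n)]

def range_compress(rx, ref):
    """Matched filter: correlate rx against the conjugated reference chirp."""
    m = len(ref)
    return [abs(sum(rx[d + t] * ref[t].conjugate() for t in range(m)))
            for d in range(len(rx) - m + 1)]

ref = chirp(32, 1 / 32)
rx = [0j] * 64
for t, s in enumerate(ref):        # echo from a target at delay bin 10
    rx[10 + t] = s
profile = range_compress(rx, ref)
```

The compressed profile peaks at the target's delay bin; production processors implement the same correlation in the frequency domain with FFTs for speed.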
23. Improving the Gross Primary Productivity Estimate by Simulating the Maximum Carboxylation Rate of the Crop Using Machine Learning Algorithms.
- Author
- Yuan, Dekun, Zhang, Sha, Li, Haojie, Zhang, Jiahua, Yang, Shanshan, and Bai, Yun
- Subjects
- MACHINE learning, PRIMARY productivity (Biology), CARBOXYLATION, STANDARD deviations, CONVOLUTIONAL neural networks, KALMAN filtering
- Abstract
The current regional-scale process-based photosynthesis models use biome-specified values of maximum carboxylation rate at 25 °C ($V_{m25}$) in simulating ecosystem gross primary productivity (GPP). These models ignore the variations in $V_{m25}$ over time and space, resulting in substantial errors in regional estimates of cropland GPP. Thus, to resolve this problem, we used the ensemble Kalman filter (EnKF) to assimilate tower-based GPP from five maize flux sites into a process-based model to obtain the “apparent” value of $V_{m25}$ and then modeled this parameter using machine learning (ML) algorithms. The results showed that $V_{m25}$ increased during the early growing season and then decreased after reaching a peak value in the middle of the growing season. The coefficient of determination ($R^{2}$) [root mean square error (RMSE)] for the satellite-driven coupled photosynthesis and evapotranspiration simulator (SCOPES)-Crop with EnKF-derived varied $V_{m25}$ in simulating daily GPP across all site-days increased (decreased) by 0.17 (5.63 $\mu \text{mol}\,\text{m}^{-2}\,\text{s}^{-1}$) on average compared to that for the model with fixed $V_{m25}$. We used four ML algorithms, namely artificial neural network, random forest, extreme gradient boosting, and convolutional neural network (CNN), to model the $V_{m25}$ of maize. The CNN algorithm yielded the best results. The average of the $R^{2}$ (RMSE) values of simulated GPP using CNN-based $V_{m25}$ over the three flux sites is 0.93 (1.95 $\mu \text{mol}\,\text{m}^{-2}\,\text{s}^{-1}$), higher (smaller) than that using fixed $V_{m25}$. This study implies that representing the seasonal variations in $V_{m25}$ can facilitate improved estimates of GPP and that ML methods are useful tools for modeling the variation in $V_{m25}$. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
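The EnKF analysis step the authors use to recover an "apparent" $V_{m25}$ from flux-tower GPP can be sketched for a scalar state. The toy observation operator, seed, and numbers below are hypothetical, not the paper's SCOPE-based setup; this is the standard stochastic (perturbed-observation) EnKF update.

```python
import random

def enkf_update(ensemble, obs, obs_var, h, seed=0):
    """Stochastic EnKF analysis step for a scalar state with observation operator h."""
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm = sum(ensemble) / n
    hm = sum(hx) / n
    pxh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    phh = sum((y - hm) ** 2 for y in hx) / (n - 1)
    gain = pxh / (phh + obs_var)                 # Kalman gain
    rng = random.Random(seed)                    # perturbed observations
    return [x + gain * (obs + rng.gauss(0, obs_var ** 0.5) - y)
            for x, y in zip(ensemble, hx)]

# Toy setup: true Vm25 = 70; a linear stand-in "GPP model" h(v) = 0.3 * v.
rng = random.Random(1)
prior = [rng.gauss(50.0, 10.0) for _ in range(200)]
posterior = enkf_update(prior, obs=0.3 * 70.0, obs_var=0.25, h=lambda v: 0.3 * v)
```

A single accurate GPP observation pulls the biased prior ensemble mean most of the way toward the true parameter value.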
24. A Geometry-Discrete Minimum Reflectance Aerosol Retrieval Algorithm (GeoMRA) for Geostationary Meteorological Satellite Over Heterogeneous Surfaces.
- Author
- Zhang, Tianhao, Wang, Lunche, Zhao, Bin, Gu, Yu, Wong, Man Sing, She, Lu, Xia, Xinghui, Dong, Jiadan, Ji, Yuxi, Gong, Wei, and Zhu, Zhongmin
- Subjects
- GEOSTATIONARY satellites, MODIS (Spectroradiometer), METEOROLOGICAL satellites, AEROSOLS, REFLECTANCE
- Abstract
High-frequency aerosol observation from a new-generation geostationary meteorological satellite is capable of capturing and monitoring the spatiotemporal dynamics of aerosols, which is of vital significance to environmental research and climate studies. Due to the diversity and complexity of land cover, it is a challenge to retrieve aerosol properties with high accuracy over land, especially over heterogeneous land surfaces. In this study, a geometry-discrete minimum reflectance aerosol retrieval algorithm (GeoMRA) has been proposed to retrieve 10-min high temporal resolution aerosol optical depth (AOD) datasets for the geostationary Himawari-8 Advanced Himawari Imager (AHI) sensor, aiming at providing universal bidirectional reflectance distribution function (BRDF) descriptions for land surfaces with differing degrees of heterogeneity. The AOD retrievals from GeoMRA demonstrate good consistency against the ground-based AERONET measurements in East Asia from 2015 to 2020, with a correlation coefficient (${R}$) of 0.883 and approximately 65.6% of matchups falling within the expected error envelope of ±(0.05 + 15%). Intercomparison between the GeoMRA retrieved AOD and other operational AOD products shows that the GeoMRA AOD retrievals, which generally possess similar spatial distribution and accuracy as Moderate Resolution Imaging Spectroradiometer (MODIS) AOD products, perform better than the Japan Aerospace Exploration Agency (JAXA) AOD products by providing more accurate AOD retrievals with higher spatial coverage. Moreover, the AOD bias analyses further demonstrate the robustness of the GeoMRA algorithm, and an extreme haze event shows that the continuous GeoMRA AOD images illustrate smoother temporal variations than JAXA AOD products, demonstrating its efficacy and reliability in capturing the process of haze transport and monitoring the continuous spatiotemporal variation of aerosol. The above results suggest the considerable accuracy of the GeoMRA algorithm for scientific application requirements and demonstrate the robustness of the proposed BRDF scheme in describing heterogeneous surfaces with diverse reflectance distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
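The ±(0.05 + 15%) expected-error envelope used to validate AOD retrievals against AERONET is easy to state in code. The sketch below (with hypothetical matchup values) counts the fraction of retrievals falling inside it:

```python
def within_envelope(retrieved, reference, abs_err=0.05, rel_err=0.15):
    """Fraction of retrievals within the expected error ±(abs_err + rel_err * reference)."""
    hits = sum(1 for r, t in zip(retrieved, reference)
               if abs(r - t) <= abs_err + rel_err * t)
    return hits / len(retrieved)

# Hypothetical AERONET (reference) vs. satellite (retrieved) AOD matchups.
fraction = within_envelope([0.12, 0.7, 1.1], [0.1, 0.5, 1.0])
```

Here the second matchup (0.7 vs. 0.5) falls outside its ±0.125 envelope, so two of three matchups pass.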
25. SIL-LAND: Segmentation Incremental Learning in Aerial Imagery via LAbel Number Distribution Consistency.
- Author
- Li, Junxi, Diao, Wenhui, Lu, Xiaonan, Wang, Peijin, Zhang, Yidan, Yang, Zhujun, Xu, Guangluan, and Sun, Xian
- Subjects
- MACHINE learning, BRAIN-computer interfaces, PROTOTYPES
- Abstract
Segmentation incremental learning (SIL) has received a lot of attention in recent years due to its ability to overcome the problem of catastrophic forgetting. Our study found that differences in the label number distribution (LAND) affect the performance of SIL. Because the labels for pixels of old categories are marked as background when the model is trained on new tasks, the LAND is inconsistent with that of static learning, which is considered the upper bound of incremental learning; this inconsistency hinders the mitigation of the catastrophic forgetting problem. In response to the above problems, we propose an incremental learning method named SIL-LAND, which improves accuracy by making the LAND of our method close to that of static learning. From the perspective of high-level semantic labels, we propose a prototype update mechanism to address the problem that nonadaptive representative prototypes ignore the sample diversity of semantic categories in remote sensing images. By compensating for the difference in LAND at the feature level, the distance between the prototype and the actual class center is reduced; to address the lack of semantic consistency between feature vectors and prototypes, we propose a similarity measure module to increase the intraclass similarity between the prototype and the corresponding feature vectors. From the perspective of one-hot labels, we propose label reconstruction, including foreground screening and background padding, to make the number distribution of one-hot labels as close as possible to that of static learning. A series of experimental results demonstrates the effectiveness of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. An Underground Pipeline Mapping Method Based on Fusion of Multisource Data.
- Author
- Zhou, Xiren, Chen, Qiuju, Jiang, Bingbing, and Chen, Huanhuan
- Subjects
- UNDERGROUND pipelines, PIPELINE failures, MULTISENSOR data fusion, REMOTE sensing, GLOBAL Positioning System, PIPELINES
- Abstract
There is a need to map underground pipelines because existing pipeline maps are often unavailable, owing to poor management of statutory records and insufficient updating of documentation whenever pipeline construction or rerouting occurs. By fusing multisource data, a novel method to map underground pipelines is proposed in this article. Statutory records of the underground pipelines are converted into the initial pipeline map. Pipeline information obtained from manhole covers and remote sensing technologies is normalized into a pipeline dataset composed of detected points. The probabilistic pipeline mapping model (PPMM) is then proposed to map the buried pipelines from the constructed pipeline dataset, with or without statutory pipeline records. In this model, each detected point is classified into the specific pipeline that most likely generates the data of this point, and detected points generated from the same pipeline are fit to revise the pipelines’ locations and directions. The above classification and fitting operations are performed iteratively, and PPMM outputs the pipeline map with the highest probability. Experimental studies on real-world datasets are conducted and analyzed, and the obtained results demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
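The PPMM's core loop, classifying each detected point to the pipeline most likely to have generated it and then refitting each pipeline to its assigned points, resembles a "k-lines" variant of k-means. A simplified 2-D sketch (straight pipelines, plain least-squares fits, hypothetical data, not the authors' probabilistic model):

```python
def fit_line(pts):
    """Least-squares fit y = a + b*x (requires at least two distinct x values)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return ((sy - b * sx) / n, b)

def map_pipelines(points, lines, iters=10):
    """Iteratively assign detected points to the nearest pipeline, then refit each."""
    for _ in range(iters):
        groups = [[] for _ in lines]
        for x, y in points:
            d = [abs(y - (a + b * x)) for a, b in lines]
            groups[d.index(min(d))].append((x, y))
        lines = [fit_line(g) if len(g) >= 2 else l for g, l in zip(groups, lines)]
    return lines

# Two noisy "pipelines" near y = 0 and y = 5, with rough initial guesses.
pts = ([(x, 0.1 * (-1) ** x) for x in range(10)]
       + [(x, 5 + 0.1 * (-1) ** x) for x in range(10)])
lines = map_pipelines(pts, [(1.0, 0.0), (4.0, 0.0)])
```

Each iteration tightens the fits, mirroring PPMM's alternating classification and fitting steps.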
27. Remote Sensing Change Detection via Temporal Feature Interaction and Guided Refinement.
- Author
- Li, Zhenglai, Tang, Chang, Wang, Lizhe, and Zomaya, Albert Y.
- Subjects
- SPATIAL resolution, TIME-varying networks, FEATURE extraction, PIXELS
- Abstract
Remote sensing change detection (RSCD), which identifies the changed and unchanged pixels from a registered pair of remote sensing images, has enjoyed remarkable success recently. However, locating changed objects with fine structural details is still a challenging problem in RSCD. In this article, we propose a novel RSCD network via temporal feature interaction and guided refinement (TFI-GR) to solve this issue. Specifically, unlike previous methods, which just employ a single concatenation or subtraction operation for bi-temporal feature fusion, we design a temporal feature interaction module (TFIM) to enhance interaction between bi-temporal features and capture temporal difference information at diverse feature levels. Afterward, guided refinement modules (GRMs), which aggregate both low- and high-level temporal difference representations to polish the location information of high-level features and filter the background clutter of low-level features, are repeatedly applied. Finally, the multilevel temporal difference features are progressively fused to generate change maps for change detection. To demonstrate the effectiveness of the proposed TFI-GR, comprehensive experiments are performed on three high spatial resolution RSCD datasets. Experimental results indicate that the proposed method is superior to other state-of-the-art change detection methods. The demo code of this work is publicly available at https://github.com/guanyuezhen/TFI-GR. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Cohesion Intensive Hash Code Book Coconstruction for Efficiently Localizing Sketch Depicted Scenes.
- Author
- Fang, Yuxin, Li, Peng, Zhang, Jie, and Ren, Peng
- Subjects
- HAMMING distance, COHESION, IMAGE retrieval, REMOTE sensing, LINEAR operators
- Abstract
We investigate the problem of efficiently localizing sketch depicted scenes in a remote sensing image dataset. We pose the problem as that of remote sensing image retrieval with sketch queries and explore the use of hashing techniques to achieve efficient retrieval. Given two training datasets of sketches and remote sensing images that have a common set of class labels, we develop a hashing strategy that coconstructs two hash code books for the sketches and the remote sensing images separately. The hash code book coconstruction strategy encourages hash codes for the sketches and remote sensing images from different classes to be far away from one another and those from the same class to be close. This property is maintained by two cohesion intensive cues: 1) an interclass pairwise disperse cue (InterPDC) and 2) an intraclass pairwise balance cue (IntraPBC). We use the two coconstructed hash code books for training two linear mapping models that generate hash codes for sketches and remote sensing images separately. Sorting the Hamming distance between the sketch hash codes and the remote sensing image hash codes renders efficient remote sensing image retrieval with sketch queries. This enables localizing the sketch depicted scenes in the remote sensing image dataset. In addition, our method can also be used for fast localizing sketch depicted scenes in a remote sensing image of large size. Extensive experiments on public datasets validate the effectiveness and efficiency of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
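Once sketches and images have been mapped to binary codes, the retrieval step described above is just sorting by Hamming distance. A minimal sketch with hypothetical 4-bit codes:

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings stored as ints."""
    return bin(a ^ b).count("1")

def retrieve(query_code, image_codes):
    """Return image indices sorted by Hamming distance to the sketch query code."""
    return sorted(range(len(image_codes)),
                  key=lambda i: hamming(query_code, image_codes[i]))

ranking = retrieve(0b1010, [0b1010, 0b0101, 0b1000])
```

Because XOR-and-popcount is a handful of machine instructions per comparison, this scales to large image databases, which is the efficiency argument behind hashing-based retrieval.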
29. Superpixel Spectral–Spatial Feature Fusion Graph Convolution Network for Hyperspectral Image Classification.
- Author
- Gong, Zhi, Tong, Lei, Zhou, Jun, Qian, Bin, Duan, Lijuan, and Xiao, Chuangbai
- Subjects
- CONVOLUTIONAL neural networks, REMOTE sensing
- Abstract
Recently, convolutional neural networks (CNNs) have demonstrated impressive capabilities in the representation and classification of hyperspectral remote sensing images. Traditional CNNs require massive amounts of data to sufficiently train the network. To tackle this problem, the graph convolutional network (GCN) has been introduced for hyperspectral image classification. GCN methods usually construct the graph from either the spectral or spatial domain, which does not adequately explore the information in the joint spectral–spatial domain. In this article, we propose a superpixel spectral–spatial feature fusion graph convolution network for hyperspectral image classification (S3FGCN). S3FGCN can comprehensively use information in the spectral, spatial, and spectral–spatial domains with limited data. Moreover, to enhance the performance, we explore a shared-weight GCN in the spectral–spatial domain. To further improve the efficiency, superpixels are used to construct the adjacency matrix. Finally, dynamic sampling is adopted to make the model focus more on difficult samples. In the experiments on four datasets, S3FGCN demonstrates better accuracy compared with the state-of-the-art hyperspectral image classification methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Novel Corner-Reflector Array Application in Essential Infrastructure Monitoring.
- Author
- Kelevitz, Krisztina, Wright, Tim J, Hooper, Andrew J, and Selvakumaran, Sivasakthy
- Subjects
- RAILROAD tunnels, SYNTHETIC aperture radar, COMMUNICATION infrastructure, CITIES & towns, SIGNAL-to-noise ratio
- Abstract
High-precision monitoring of infrastructure using artificial reflectors is possible with freely available Sentinel-1 data, but large reflectors are needed. We find that a triangular trihedral corner reflector should typically have at least 1-m inner leg length. As such large reflectors are often not feasible for use in urban areas for essential infrastructure monitoring, we designed a multiple corner-reflector array to replace a single corner reflector with an inner leg length of 1 m. In this case, we use four reflectors where each of them is a truncated triangular trihedral with an inner leg length of 0.33 m. We measured interferometric synthetic aperture radar (InSAR) amplitude, phase, and coherence of this reflector array with various configurations of alignments of the array. We find that as long as great care is taken in the relative positioning of the four corner reflectors, so that they constructively interfere, each horizontal or vertical configuration provides the expected amplitude, coherence, and phase stability. Applications of multiple small corner reflectors in urban areas range from essential infrastructure monitoring (e.g., bridges, overpasses, and tunnel constructions), through assessment of structural health of buildings, to monitoring highway and railway embankments. We show that the multiple corner array works when placed in a single InSAR resolution cell, but depending on the application, the number and projection of corner reflectors can be varied, as long as sufficient signal-to-clutter ratio is achieved in the area of interest. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
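The reflector sizing discussed above can be sanity-checked with the textbook peak-RCS formula for an ideal (untruncated) triangular trihedral, σ = 4πl⁴/(3λ²), together with the N² gain of N reflectors interfering constructively. The truncated reflectors actually deployed have a different pattern, and the wavelength constant is approximate, so the numbers below are only indicative:

```python
import math

SENTINEL1_WAVELENGTH = 0.0555  # C-band wavelength in metres (approximate)

def trihedral_peak_rcs(leg, wavelength):
    """Peak RCS (m^2) of an ideal triangular trihedral corner reflector."""
    return 4 * math.pi * leg ** 4 / (3 * wavelength ** 2)

def coherent_array_rcs(single_rcs, n):
    """n identical reflectors adding constructively: amplitudes add, RCS scales as n^2."""
    return n ** 2 * single_rcs

rcs_1m = trihedral_peak_rcs(1.0, SENTINEL1_WAVELENGTH)      # ~1360 m^2, i.e. ~31 dBsm
rcs_array = coherent_array_rcs(trihedral_peak_rcs(0.33, SENTINEL1_WAVELENGTH), 4)
```

With the ideal-trihedral formula, four 0.33-m reflectors fall short of a single 1-m reflector (RCS scales as l⁴), which is consistent with the article's emphasis on careful relative positioning to secure sufficient signal-to-clutter ratio.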
31. Settings for Spaceborne 3-D Scattering Tomography of Liquid-Phase Clouds by the CloudCT Mission.
- Author
- Tzabari, Masada, Holodovsky, Vadim, Shubi, Omer, Eytan, Eshkol, Koren, Ilan, and Schechner, Yoav Y.
- Subjects
- TOMOGRAPHY, COST functions, COMPUTED tomography, MICROPHYSICS, ICE clouds, SEISMIC tomography
- Abstract
We introduce a comprehensive method for space-borne 3-D volumetric scattering-tomography of cloud microphysics, developed for the CloudCT mission. The retrieved microphysical properties are the liquid-water-content (LWC) and effective droplet radius within a cloud. We include a model for a perspective polarization imager and an assumption of 3-D variation of the effective radius. Elements of our work include computed tomography initialization by a parametric horizontally uniform microphysical model. This results in smaller errors than the prior art. The mean absolute errors of the retrieved LWC and effective radius are reduced from 62% and 28% to 40% and 9%, respectively. The parameters of this initialization are determined by a grid search of a cost function. Furthermore, we add viewpoints in the cloudbow region, to better sample the polarized scattering phase function. The suggested advances are evaluated by retrieval of a set of clouds generated by large-eddy simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Clutter Reduction by Estimation of Echoes Direction of Arrival in Distributed Radar Sounders in Formation Flying.
- Author
- Carrer, Leonardo, Thakur, Sanchari, Sericati, Luca, and Bruzzone, Lorenzo
- Subjects
- CLUTTER (Radar), ECHO, DIRECTION of arrival estimation, FORMATION flying, SHORTWAVE radio, RADAR, SPACE-based radar, RADAR antennas, DIRECTIONAL antennas
- Abstract
Spaceborne radar sounders are high frequency (HF)/very high frequency (VHF) nadir-looking sensors devoted to subsurface investigations. Their data interpretation can be severely hindered by off-nadir surface clutter. Recent literature showed that the clutter suppression capabilities of this class of systems can be greatly enhanced by deploying an array of orbiting sensors in formation flight synthesizing a narrow radar antenna beam. In this article, we assess the capability of distributed radar sounding to discriminate clutter from subsurface returns by exploiting direction of arrival (DOA) estimation techniques. This is achieved by first outlining an approach for designing and evaluating the distributed radar sounder DOA estimation performance as a function of the radar system parameters (e.g., intersensor distance) and external noise factors such as ionospheric scintillations. Then, the theory is complemented by radar simulations of several acquisitions over Greenland assuming a variety of subsurface geometries. The simulations confirm that clutter discrimination through DOA estimation is a viable approach to further improve the array capability in disambiguating subsurface echoes from surface ones. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
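The abstract does not specify which DOA estimators are assessed, so as a hypothetical stand-in, a conventional delay-and-sum beamformer over a uniform linear array snapshot already illustrates how an echo's arrival angle is estimated from the formation's combined measurements:

```python
import cmath, math

def steering(n, d_over_lambda, theta):
    """Steering vector of an n-element uniform linear array for angle theta (rad)."""
    phase = 2 * math.pi * d_over_lambda * math.sin(theta)
    return [cmath.exp(1j * k * phase) for k in range(n)]

def beamform_doa(snapshot, d_over_lambda, grid):
    """Conventional beamformer: return the grid angle maximising array output power."""
    n = len(snapshot)
    best, best_p = grid[0], -1.0
    for theta in grid:
        a = steering(n, d_over_lambda, theta)
        p = abs(sum(s * w.conjugate() for s, w in zip(snapshot, a))) ** 2
        if p > best_p:
            best, best_p = theta, p
    return best

# Noise-free echo arriving from 0.3 rad off boresight on an 8-element array.
grid = [i / 100 for i in range(-50, 51)]
estimate = beamform_doa(steering(8, 0.5, 0.3), 0.5, grid)
```

An off-nadir clutter return and a nadir subsurface return would produce peaks at different angles, which is the discrimination mechanism the article exploits.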
33. Evidence of Decreased Heterodyne-Detection Efficiency Caused by Fast Beam Scanning in Wind Sensing Coherent Doppler Lidar, and Demonstration on Recovery of the Efficiency With Lag-Angle Compensation.
- Author
- Ito, Yusuke, Imaki, Masaharu, Sakimura, Takeshi, Yanagisawa, Takayuki, and Kameyama, Shumpei
- Subjects
- DOPPLER lidar, LASER beam measurement, OPTICAL transmitters
- Abstract
The experimental evidence of the decreased heterodyne-detection efficiency caused by the lag angle is shown using a long-range wind sensing coherent Doppler lidar (CDL) with fast beam scanning. The recovery of the efficiency with the lag-angle compensation is also demonstrated. The receiving beam alignment method synchronized with the beam scanner is used. The measurable range of 12 km with the fast beam scanning of 20°/s in the case of 8-Hz line-of-sight (LOS) update rate was demonstrated with the compensation. This demonstration shows the potential for the wind sensing CDL to satisfy all requirements of long-range, real-time, and fast beam scanning measurement. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
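The lag angle has a simple geometric origin: during the light round-trip time to the measurement range, the scanner keeps rotating, so the receive path points slightly behind the transmit direction. A back-of-envelope sketch using the article's reported numbers (20°/s scan rate, 12-km range):

```python
import math

C = 2.998e8  # speed of light (m/s)

def lag_angle_rad(scan_rate_deg_s, range_m):
    """Scanner rotation during the round-trip time 2R/c, in radians."""
    return math.radians(scan_rate_deg_s) * 2 * range_m / C

lag = lag_angle_rad(20.0, 12_000.0)   # roughly 28 microradians
```

A few tens of microradians is roughly comparable to the diffraction-limited divergence of a long-range CDL beam, which is plausibly why heterodyne efficiency drops without lag-angle compensation.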
34. Weakly Supervised Region of Interest Extraction Based on Uncertainty-Aware Self-Refinement Learning for Remote Sensing Images.
- Author
- Liu, Yanan and Zhang, Libao
- Subjects
- REMOTE sensing, DISTANCE education, GENERATIVE adversarial networks, NETWORK performance, MARKOV random fields
- Abstract
Region of interest (ROI) extraction plays a significant role in the field of remote sensing image (RSI) processing. Recently, weakly supervised ROI extraction methods have attracted considerable attention due to their low labeling cost. Most of them follow the pipeline of first generating pseudo labels and then using the pseudo labels to train a segmentation model. However, there remain problems to be solved: 1) the unbalanced distribution of foreground and background samples in the RSI dataset influences the network performance; 2) the pseudo labels mainly cover the most discriminative parts of object regions and are therefore incomplete; and 3) training with pseudo labels inevitably causes noise issues that degrade the model performance. To solve these issues, we propose a weakly supervised uncertainty-aware self-refinement learning (UASRL) method, where the initial unbalanced image-level labels are progressively refined to high-quality pixel-level annotations. In the proposed UASRL, we first present a deep generative model combined with self-attention modules to improve the unbalanced distribution in the weakly labeled dataset. Then, we design a confidence-weighted complementary erasing-based weakly supervised method to generate pseudo labels with high integrity. Finally, for training with noisy pseudo labels, we develop an uncertainty-aware joint optimization (UAJO) training strategy to reduce the negative effect caused by noisy labels and further refine pixelwise labels in a coarse-to-accurate manner, which in turn jointly promotes the model’s performance. Extensive experiments on three types of RSI datasets reveal that our proposed method is superior to other competing methods and shows a preferable tradeoff between annotation cost and detection performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Parametric Model-Based 2-D Autofocus Approach for General BiSAR Filtered Backprojection Imagery.
- Author
- Shi, Tianyue, Mao, Xinhua, Jakobsson, Andreas, and Liu, Yanqi
- Subjects
- SYNTHETIC aperture radar, LANDSAT satellites, AMBIGUITY
- Abstract
The filtered backprojection (FBP) algorithm is viewed as a preferred candidate for general bistatic synthetic aperture radar (BiSAR) imaging since it does not pose any restrictions on SAR configurations or flight paths. However, highly efficient autofocus methods such as phase gradient autofocus (PGA) or map-drift (MD) cannot be effectively integrated with the FBP algorithm due to the unknown properties of the BiSAR FBP imagery spectrum. In this article, a novel Fourier-based interpretation of the BiSAR FBP algorithm is presented. Based on this new viewpoint, spectral characteristics of the BiSAR FBP imagery in the wavenumber domain, including range spectral ambiguity, space-variant spectral support, and the structural 2-D phase error, are derived in detail. Using these characteristics, a computationally efficient 2-D autofocus approach is proposed. First, preprocessing is performed to eliminate the range spectral ambiguity and to align the skewed spectrum support, which facilitates the following phase error estimation and correction. Then, an estimation of the 1-D azimuth phase error (APE) is applied by combining multiple estimation results from different subband data. Finally, the 2-D phase error is computed directly from the estimated APE by exploiting the derived analytical structure of the 2-D phase error, which is then applied to restore the BiSAR FBP image. Simulation results are presented to show the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. PS-Net: Point Shift Network for 3-D Point Cloud Completion.
- Author
- Zhang, Yirui, Xu, Jiabo, Zou, Yanni, Liu, Peter X., and Liu, Jie
- Subjects
- POINT cloud, REMOTE sensing, VIDEO coding, AUTONOMOUS vehicles, IMAGE registration
- Abstract
Point cloud completion aims to infer complete point clouds from incomplete ones and is used in remote sensing applications such as reconstruction and autonomous driving. However, most existing methods cannot recover accurate structural details of the object. In this article, we propose the point shift network (PS-Net). Our main contributions are threefold. First, we propose a multiresolution encoder, which extracts and fuses multiresolution point cloud features hierarchically, thus avoiding the information loss caused by a single global feature. Second, we design a multiresolution point cloud generation structure, which can be combined with the multiresolution encoder to generate gradually denser point clouds, avoiding the nonuniform density produced by a single-layer decoder. Third, we design the shift network (SN), which generates shift vectors that adjust the coordinates of each point, further fine-tuning the point positions and achieving more accurate prediction. We conduct comprehensive experiments on the ShapeNet, KITTI, ScanObjectNN, and ModelNet40 datasets, which demonstrate that the proposed PS-Net achieves better performance than existing methods and verify the robustness of the proposed method. This article contributes a new method to point cloud completion, realizes fine point cloud shape completion, and brings new possibilities to research on autonomous driving, registration, and reconstruction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Self-Supervised Locality Preserving Low-Pass Graph Convolutional Embedding for Large-Scale Hyperspectral Image Clustering.
- Author
- Ding, Yao, Zhang, Zhili, Zhao, Xiaofeng, Cai, Yaoming, Li, Siye, Deng, Biao, and Cai, Weiwei
- Subjects
- SYMMETRIC matrices, DEEP learning, PRIOR learning, CHARTS, diagrams, etc., FEATURE extraction
- Abstract
Due to deficient prior knowledge, large spectral variability, and the high dimensionality of hyperspectral images (HSIs), HSI clustering is a fundamental but extremely challenging task. Deep clustering methods have achieved remarkable success and have attracted increasing attention in unsupervised HSI classification (HSIC). However, poor robustness, adaptability, and feature representation limit their practical application to complex large-scale HSI datasets. Thus, this article introduces a novel self-supervised locality preserving low-pass graph convolutional embedding method (L2GCC) for large-scale hyperspectral image clustering. Specifically, a spectral–spatial transformation HSI preprocessing mechanism is introduced to learn superpixel-level spectral–spatial features from the HSI and reduce the number of graph nodes for subsequent network processing. In addition, a locality-preserving low-pass graph convolutional embedding autoencoder is proposed, in which the low-pass graph convolution and layerwise graph attention are designed to extract smoother features and preserve layerwise locality features, respectively. Finally, we develop a self-training strategy, in which a self-training clustering objective employs soft labels to supervise the clustering process and obtain appropriate hidden representations for node clustering. L2GCC is an end-to-end trainable network, jointly optimized by a graph reconstruction loss and a self-training clustering loss. On the Indian Pines, Salinas, and University of Houston 2013 datasets, the clustering overall accuracies (OAs) of the proposed L2GCC are 73.51%, 83.15%, and 64.12%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
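A low-pass graph convolution is, at its core, repeated neighbourhood averaging with a renormalized adjacency matrix, which smooths node features across the graph. This tiny sketch (a hypothetical three-node path graph, not the L2GCC architecture) shows the smoothing effect:

```python
def low_pass_propagate(adj, feats, hops=2):
    """Average each node's feature with its neighbours' via D^{-1}(A + I), `hops` times."""
    n = len(adj)
    # Renormalized adjacency with self-loops, row-normalized.
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    for _ in range(hops):
        feats = [sum(a[i][j] * feats[j] for j in range(n)) / deg[i] for i in range(n)]
    return feats

# Path graph 0-1-2 with scalar features; smoothing pulls values together.
smoothed = low_pass_propagate([[0, 1, 0], [1, 0, 1], [0, 1, 0]], [0.0, 3.0, 6.0])
```

Each hop shrinks the spread of the features, suppressing high-frequency (rapidly varying) graph signals while keeping the mean, which is the low-pass behaviour the method relies on.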
38. Analysis of Low-Frequency Drone-Borne GPR for Root-Zone Soil Electrical Conductivity Characterization.
- Author
- Wu, Kaijun and Lambot, Sebastien
- Subjects
- ELECTRIC conductivity, GROUND penetrating radar, SOILS, DIPOLE antennas, DIGITAL soil mapping
- Abstract
In this study, we analyzed low-frequency drone-borne ground-penetrating radar (GPR) and full-wave inversion for soil electrical conductivity mapping. Indeed, in the lowest GPR frequency ranges, the soil surface reflection coefficient depends more on the soil electrical conductivity than on its permittivity. Numerical experiments were conducted within the frequency range 15–45 MHz to analyze parameter sensitivities, the well-posedness of the inverse problem, and the depth of sensitivity. The results show that the soil surface reflection is significantly more sensitive to the soil electrical conductivity than to the soil permittivity. Therefore, the conductivity can be retrieved using full-wave inversion within this frequency range, with a characterization depth varying from 0.5 to 1 m, depending on the soil properties. Yet, the permittivity also affects the results and should be accounted for in the inversion strategy. Field measurements were performed using a low-frequency drone-borne radar with a 5-m half-wave dipole antenna, and electromagnetic induction (EMI) measurements with different depth sensitivities were conducted for comparison. Kriging interpolation was used to produce maps from the measurement points. The soil conductivity maps obtained by the proposed GPR and EMI are consistent in terms of absolute values and spatial patterns. This study demonstrated the capacity of low-frequency drone-borne GPR for fast, field-scale soil electrical conductivity mapping. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
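The sensitivity argument above rests on the low-frequency behaviour of the soil's relative complex permittivity, ε_c = ε_r − jσ/(ωε₀): as ω drops, the conductivity term grows relative to ε_r. A normal-incidence sketch (much simplified relative to the paper's full-wave antenna model):

```python
import cmath, math

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def reflection_coeff(freq_hz, eps_r, sigma):
    """Normal-incidence reflection coefficient of air over a lossy, non-magnetic soil."""
    omega = 2 * math.pi * freq_hz
    eps_c = eps_r - 1j * sigma / (omega * EPS0)   # relative complex permittivity
    n = cmath.sqrt(eps_c)                          # soil refractive index
    return (1 - n) / (1 + n)

r_low = abs(reflection_coeff(30e6, 10.0, 0.01))
r_high = abs(reflection_coeff(30e6, 10.0, 0.02))
```

At 30 MHz with σ = 0.01 S/m the conductive term σ/(ωε₀) ≈ 6 is already comparable to ε_r = 10, so the reflection strength responds strongly to conductivity changes in this band.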
39. Anomaly Detection in Aerial Videos With Transformers.
- Author
- Jin, Pu, Mou, Lichao, Xia, Gui-Song, and Zhu, Xiao Xiang
- Subjects
- INTRUSION detection systems (Computer security), VIDEO codecs, VIDEO surveillance, STREAMING video & television, DRONE aircraft, INDUSTRIAL capacity, VIDEOS
- Abstract
Unmanned aerial vehicles (UAVs) are widely applied for purposes of inspection, search, and rescue operations by virtue of their low-cost, large-coverage, real-time, and high-resolution data acquisition capacities. Massive volumes of aerial videos are produced in these processes, in which normal events often account for an overwhelming proportion. It is extremely difficult to localize and extract abnormal events containing potentially valuable information from long video streams manually. Therefore, we are dedicated to developing anomaly detection methods to solve this issue. In this article, we create a new dataset, named Drone-Anomaly, for anomaly detection in aerial videos. This dataset provides 37 training video sequences and 22 testing video sequences from seven different realistic scenes with various anomalous events. There are 87488 color video frames (51635 for training and 35853 for testing) with a size of 640 $\times$ 640 at 30 frames/s. Based on this dataset, we evaluate existing methods and offer a benchmark for this task. Furthermore, we present a new baseline model, anomaly detection with Transformers (ANDTs), which treats consecutive video frames as a sequence of tubelets, utilizes a Transformer encoder to learn feature representations from the sequence, and leverages a decoder to predict the next frame. Our network models normality in the training phase and identifies an event with unpredictable temporal dynamics as an anomaly in the test phase. Moreover, to comprehensively evaluate the performance of our proposed method, we use not only our Drone-Anomaly dataset but also another dataset. A demo video is available at https://youtu.be/ancczYryOBY. We make our dataset and code publicly available (https://gitlab.lrz.de/ai4eo/reasoning/drone-anomaly, https://github.com/Jin-Pu/Drone-Anomaly). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
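The ANDT baseline in the abstract above scores a frame as anomalous when its temporal dynamics cannot be predicted from the preceding frames. A minimal sketch of that prediction-error scoring rule, with a naive persistence predictor standing in for the paper's Transformer (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def anomaly_scores(frames, predictor=None):
    """Score each frame transition by next-frame prediction error (MSE).

    frames: (T, H, W) float array. `predictor` maps the previous frame to a
    predicted next frame; a copy-last-frame persistence baseline stands in
    for the Transformer encoder-decoder described in the paper.
    """
    if predictor is None:
        predictor = lambda prev: prev  # naive persistence baseline
    scores = []
    for t in range(1, len(frames)):
        pred = predictor(frames[t - 1])
        scores.append(float(np.mean((frames[t] - pred) ** 2)))
    return np.array(scores)

# A sudden change (unpredictable dynamics) produces a spike in the score.
video = np.zeros((5, 4, 4))
video[3:] = 1.0  # abrupt "anomalous" transition at frame 3
s = anomaly_scores(video)
```

Predictable segments score near zero, so thresholding the score series localizes the anomalous transition.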
40. Remote Sensing Novel View Synthesis With Implicit Multiplane Representations.
- Author
-
Wu, Yongchang, Zou, Zhengxia, and Shi, Zhenwei
- Subjects
REMOTE sensing ,ARTIFICIAL neural networks ,HUMAN-computer interaction ,COMPUTER graphics ,DIGITAL photogrammetry - Abstract
Novel view synthesis of remote sensing (RS) scenes is of great significance for scene visualization, human–computer interaction, and various downstream applications. Despite the recent advances in computer graphics and photogrammetry technology, generating novel views is still challenging, particularly for RS images, due to their high complexity, view sparsity, and limited view-perspective variations. In this article, we propose a novel RS view synthesis method by leveraging the recent advances in implicit neural representations. Considering the overhead and far-depth imaging of RS images, we represent the 3-D space by combining an implicit multiplane image (ImMPI) representation and deep neural networks. The 3-D scene is reconstructed under a self-supervised optimization paradigm through a differentiable multiplane renderer with multiview input constraints. Images from any novel view can thus be freely rendered on the basis of the reconstructed model. As a by-product, the depth map corresponding to the given viewpoint can be generated along with the rendering output. We refer to our method as ImMPI. To further improve the view synthesis under sparse-view inputs, we explore the learning-based initialization of RS 3-D scenes and propose a neural-network-based prior extractor to accelerate the optimization process. In addition, we propose a new dataset for RS novel view synthesis with multiview real-world Google Earth images. Extensive experiments demonstrate the superiority of ImMPI over previous state-of-the-art methods in terms of reconstruction accuracy, visual fidelity, and time efficiency. Ablation experiments also suggest the effectiveness of our methodology design. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
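The multiplane representation described above renders a view by compositing a stack of fronto-parallel planes, each carrying color and transparency. A minimal sketch of the standard back-to-front "over" compositing such a renderer relies on (a hypothetical simplification; the paper's differentiable renderer also warps planes into the target view):

```python
import numpy as np

def composite_mpi(colors, alphas):
    """Back-to-front over-compositing of a multiplane image.

    colors: (D, H, W, 3), alphas: (D, H, W), plane 0 nearest the camera.
    Iterate from the farthest plane toward the camera, blending each plane
    onto the accumulated image with the standard 'over' operator.
    """
    out = np.zeros(colors.shape[1:])
    for d in range(colors.shape[0] - 1, -1, -1):  # far -> near
        a = alphas[d][..., None]
        out = colors[d] * a + out * (1.0 - a)
    return out

# An opaque near plane should fully occlude the far plane behind it.
colors = np.stack([np.ones((2, 2, 3)), np.zeros((2, 2, 3))])  # near white, far black
alphas = np.stack([np.ones((2, 2)), np.ones((2, 2))])
img = composite_mpi(colors, alphas)
```

Because every operation is differentiable, gradients from a photometric loss can flow back into the per-plane colors and alphas, which is what makes the self-supervised optimization in the abstract possible.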
41. Environment Monitoring of Shanghai Nanhui Intertidal Zone With Dual-Polarimetric SAR Data Based on Deep Learning.
- Author
-
Liu, Guangyang, Liu, Bin, Zheng, Gang, and Li, Xiaofeng
- Subjects
DEEP learning ,INTERTIDAL zonation ,ARTIFICIAL neural networks ,SYNTHETIC aperture radar ,CONVOLUTIONAL neural networks ,ZONING - Abstract
Satellite-based synthetic aperture radar (SAR) can provide low-cost, frequent environment monitoring for dynamic intertidal zones. The critical problem is to realize pixel-level classification of SAR images of the intertidal zones with excellent and robust performance. Recently, deep learning, in particular deep convolutional neural networks, has provided promising solutions to this problem. Based on $\text{U}^{2}$-Net, a sophisticated deep-learning-based pixel-level classification model, we propose an MB-$\text{U}^{2}$-ACNet model suitable for intertidal zone land cover classification using dual-polarimetric SAR data integrated with environmental information, such as wind speed and tide level. The MB-$\text{U}^{2}$-ACNet model has a multibranch nested U-shaped encoding–decoding structure. We extract and fuse features from multiple data sources, including satellite remote sensing and environmental information, by establishing the multibranch structure. Furthermore, we propose an asymmetric convolution residual U-block for each encoding–decoding stage to improve the model's feature extraction ability. Moreover, attention mechanisms help the model better weigh the importance of features along the channel and spatial dimensions. We construct a dataset with 106 Sentinel-1 SAR images from 2016 to 2020 for environmental monitoring in the intertidal zone of Shanghai Nanhui. On this dataset, the proposed model reaches an overall classification accuracy of 96.40% and a mean intersection over union score of 0.8307. The experiments show the advantages of the proposed model over the benchmarking models due to better feature extraction and multisource information fusion. In addition, the contribution of every added substructure is analyzed systematically. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Semisupervised Hyperspectral Image Classification Using a Probabilistic Pseudo-Label Generation Framework.
- Author
-
Seydgar, Majid, Rahnamayan, Shahryar, Ghamisi, Pedram, and Bidgoli, Azam Asilian
- Subjects
ARTIFICIAL neural networks ,GENERATIVE adversarial networks ,DEEP learning ,GAUSSIAN distribution ,BINARY codes ,BUDGET ,PROBABILISTIC number theory - Abstract
Deep neural networks (DNNs) show impressive performance for hyperspectral image (HSI) classification when abundant labeled samples are available. The problem is that HSI sample annotation is extremely costly and the budget for this task is usually limited. To reduce the reliance on labeled samples, deep semisupervised learning (SSL), which jointly learns from labeled and unlabeled samples, has been introduced in the literature. However, learning robust and discriminative features from unlabeled data is a challenging task due to various noise effects and the ambiguity of unlabeled samples. As a result, recent advances remain constrained mainly to the pretraining or warm-up stage. In this article, we propose a deep probabilistic framework that generates reliable pseudo-labels to explicitly learn discriminative features from unlabeled samples. The generated pseudo-labels of our proposed framework can be fed to various DNNs to improve their generalization capacity. Our proposed framework takes only ten labeled samples per class to represent the label set as an uncertainty-aware distribution in the latent space (we use a Gaussian distribution to model the uncertainty of the label set). Pseudo-labels are then generated for those unlabeled samples whose feature values match the distribution with high probability. By performing extensive experiments on four publicly available datasets, we show that our framework can generate reliable pseudo-labels that significantly improve the generalization capacity of several state-of-the-art DNNs. In addition, we introduce a new DNN for HSI classification that demonstrates outstanding accuracy in comparison with its rivals. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
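The framework above fits an uncertainty-aware Gaussian to a handful of labeled samples per class and pseudo-labels only those unlabeled samples that match the distribution with high probability. A hedged sketch of that selection rule with a diagonal-covariance Gaussian per class in feature space (the `min_loglik` threshold and all names are hypothetical stand-ins for the paper's latent-space formulation):

```python
import numpy as np

def gaussian_pseudo_labels(X_lab, y_lab, X_unl, min_loglik=-5.0):
    """Assign a pseudo-label to each unlabeled feature vector that fits a
    class Gaussian with high probability; return the class index or -1
    (rejected). A diagonal-covariance Gaussian per class stands in for the
    paper's latent-space distribution; `min_loglik` is a hypothetical
    acceptance threshold.
    """
    classes = np.unique(y_lab)
    stats = {}
    for c in classes:
        Xc = X_lab[y_lab == c]
        stats[c] = (Xc.mean(0), Xc.var(0) + 1e-6)  # mean, variance per dim
    labels = []
    for x in X_unl:
        best_c, best_ll = -1, -np.inf
        for c, (mu, var) in stats.items():
            # diagonal-Gaussian log-likelihood
            ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
            if ll > best_ll:
                best_c, best_ll = c, ll
        labels.append(best_c if best_ll >= min_loglik else -1)
    return np.array(labels)

X_lab = np.array([[0.0, 0.0], [0.2, 0.1], [10.0, 10.0], [10.2, 10.1]])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.array([[0.1, 0.05], [5.0, 5.0]])  # one in-class point, one outlier
labels = gaussian_pseudo_labels(X_lab, y_lab, X_unl)
```

Rejected samples (label -1) stay unlabeled, so only confident pseudo-labels reach the downstream DNN.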
43. Analysis of Swarm Satellite Magnetic Field Data for the 2015 Mw 7.8 Nepal Earthquake Based on Nonnegative Tensor Decomposition.
- Author
-
Fan, Mengxuan, Zhu, Kaiguang, De Santis, Angelo, Marchetti, Dedalo, Cianchini, Gianfranco, Piscini, Alessandro, He, Xiaodan, Wen, Jiami, Wang, Ting, Zhang, Yiqun, and Cheng, Yuqi
- Subjects
NEPAL Earthquake, 2015 ,MAGNETIC fields ,EARTHQUAKES ,SOLAR activity - Abstract
A nonnegative tensor decomposition (NTD) approach has been developed to analyze the ionospheric magnetic field data of the Swarm Alpha and Charlie satellites for the 2015 Mw 7.8 Nepal earthquake. All available satellite data were analyzed regardless of geomagnetic activity. We used the amplitude time–frequency spectra of the two-satellite data to build third-order tensors and decomposed them into three components. One of these components appears to be more affected by seismicity. In particular, the cumulative number of anomalous tracks of this component displays accelerated growth that conforms to a sigmoid fit from 60 to 40 days before the mainshock. Subsequently, until ten days before the earthquake, it shows a weak accelerating trend that obeys a power-law behavior and then resumes linear growth after the mainshock. Moreover, the cumulative anomaly was shown not to be caused by geomagnetic activity, solar activity, or other nonseismic factors. An investigation of the foreshocks around the epicenter reveals that the cumulative Benioff strain also exhibited two stages of accelerated growth before the mainshock, which is consistent with the cumulative result of ionospheric anomalies. In the first acceleration stage, seismicity appeared in the region surrounding the epicenter, and most of the ionospheric anomalies were offset away from the epicenter. During the second acceleration stage, some foreshocks occurred closer to or on the mainshock fault, and ionospheric anomalies also appeared near two faults around the epicenter. Furthermore, the correspondence between the ionospheric anomalies and the anomalies in different geolayers can be explained by the lithosphere–atmosphere–ionosphere coupling model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
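The decomposition above factorizes third-order tensors under nonnegativity constraints. As a simplified two-way analog, the classic multiplicative-update rules for nonnegative matrix factorization show the mechanics that NTD generalizes to three factor matrices (a sketch, not the paper's algorithm):

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates: V ~= W @ H with W, H >= 0.

    The matrix case illustrates the nonnegativity-preserving update step;
    NTD applies the same idea to a third-order tensor with three factor
    matrices.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # multiplicative updates keep every entry nonnegative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A nonnegative rank-2 matrix is recovered almost exactly.
V = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 0.0, 1.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```

Because the updates are ratios of nonnegative quantities, nonnegativity never has to be enforced by projection, which is the property that makes such decompositions interpretable as additive components.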
44. MD Loss: Efficient Training of 3-D Seismic Fault Segmentation Network Under Sparse Labels by Weakening Anomaly Annotation.
- Author
-
Dou, Yimin, Li, Kewen, Zhu, Jianbing, Li, Timing, Tan, Shaoquan, and Huang, Zongchao
- Subjects
THREE-dimensional imaging ,IMAGE segmentation ,PETRI nets ,ANNOTATIONS - Abstract
Data-driven fault detection has been regarded as a 3-D image segmentation task. Models trained on synthetic data are difficult to generalize to some surveys. Recently, training 3-D fault segmentation networks on sparse manual 2-D slices has been found to yield promising results, but manual labeling contains many false negative labels (FNLs, i.e., abnormal annotations), which are detrimental to training and consequently to detection performance. Motivated to train 3-D fault segmentation networks under sparse 2-D labels while suppressing FNLs, we analyze the gradients of the training process and propose the mask dice (MD) loss. Moreover, faults are edge features, and the encoder–decoder architectures widely used for fault detection (e.g., U-shaped networks) are not conducive to edge representation. Consequently, Fault-Net is proposed. It is designed for the characteristics of faults, employs high-resolution propagation features, and embeds a multiscale compression fusion block to fuse multiscale information, which allows edge information to be fully preserved during propagation and fusion, thus enabling strong performance with few computational resources. The experiments demonstrate that MD loss supports the inclusion of human experience in training and suppresses the FNLs therein, enabling baseline models to improve performance and generalize to more surveys. Fault-Net provides a more stable and reliable interpretation of faults, requires extremely few computational resources, and its inference is significantly faster than that of other models. Our method achieves the best performance in comparison with several mainstream methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
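The MD loss described above evaluates the Dice overlap only on annotated 2-D slices, so the vast unlabeled volume contributes nothing to the gradient. A minimal sketch of that masking idea (the paper's additional weakening of false negative labels is omitted):

```python
import numpy as np

def mask_dice_loss(pred, label, mask, eps=1e-6):
    """Dice loss restricted to annotated voxels.

    pred, label: (D, H, W) values in [0, 1]; mask is 1 on labeled 2-D
    slices and 0 elsewhere, so unlabeled voxels contribute no penalty.
    The paper's extra term for weakening false negative labels is omitted.
    """
    p, g = pred * mask, label * mask
    inter = 2.0 * np.sum(p * g)
    denom = np.sum(p * p) + np.sum(g * g) + eps
    return 1.0 - inter / denom

pred = np.zeros((4, 8, 8)); label = np.zeros((4, 8, 8)); mask = np.zeros((4, 8, 8))
mask[1] = 1.0                   # only slice 1 is annotated
label[1, 2:6, 3] = 1.0          # a fault trace on that slice
pred[1, 2:6, 3] = 1.0           # perfect prediction on the labeled slice
pred[3, :, :] = 1.0             # disagreement on an unlabeled slice
loss = mask_dice_loss(pred, label, mask)
loss_unmasked = mask_dice_loss(pred, label, np.ones_like(mask))  # penalized without mask
```

With the mask, the unlabeled-slice disagreement is ignored and the loss is near zero; without it, the same prediction is heavily penalized.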
45. A Multi-Task Framework for Infrared Small Target Detection and Segmentation.
- Author
-
Chen, Yuhang, Li, Liyuan, Liu, Xin, and Su, Xiaofeng
- Subjects
COMPUTER vision ,INFRARED imaging ,VISUAL fields ,FEATURE extraction ,MARKOV random fields ,IMAGE segmentation ,TEMPORAL lobe - Abstract
Due to the complicated background and noise of infrared images, infrared small target detection is one of the most difficult problems in the field of computer vision. Most existing studies use semantic segmentation methods to achieve better results, with the centroid of each target calculated from the segmentation map as the detection result. In contrast, we propose a novel end-to-end framework for infrared small target detection and segmentation in this article. First, using UNet as the backbone to maintain resolution and semantic information, our model can achieve a higher detection accuracy than other state-of-the-art methods by attaching a simple anchor-free head. Then, a pyramid pooling module is used to further extract features and improve the precision of target segmentation. Next, we use semantic segmentation tasks that pay more attention to pixel-level features to assist the training process of object detection, which increases the average precision (AP) and allows the model to detect targets that were previously undetectable. Furthermore, we develop a multi-task framework for infrared small target detection and segmentation. Our multi-task learning model reduces complexity by nearly half and nearly doubles inference speed compared to the composite single-task model while maintaining accuracy. The code and models are publicly available at https://github.com/Chenastron/MTUNet. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
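As the abstract notes, segmentation-based detectors report the centroid of each segmented target as the detection result. A minimal sketch of that post-processing step using a pure-Python flood fill over the binary segmentation map (an illustrative implementation, not the paper's code):

```python
import numpy as np
from collections import deque

def target_centroids(seg, thresh=0.5):
    """Centroids of connected components in a binary segmentation map.

    4-connected flood fill; returns a list of (row, col) centroids,
    mirroring the centroid-as-detection step described in the abstract.
    """
    fg = seg > thresh
    seen = np.zeros_like(fg, dtype=bool)
    centroids = []
    H, W = fg.shape
    for r0 in range(H):
        for c0 in range(W):
            if fg[r0, c0] and not seen[r0, c0]:
                q, pix = deque([(r0, c0)]), []
                seen[r0, c0] = True
                while q:
                    r, c = q.popleft()
                    pix.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < H and 0 <= cc < W and fg[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            q.append((rr, cc))
                pix = np.array(pix, dtype=float)
                centroids.append(tuple(pix.mean(axis=0)))
    return centroids

seg = np.zeros((8, 8))
seg[1:3, 1:3] = 1.0   # a small 2x2 target
seg[5, 6] = 1.0       # a single-pixel target
cents = target_centroids(seg)
```

Each centroid is the mean pixel coordinate of one component, so even a single hot pixel yields a valid detection.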
46. Change Captioning: A New Paradigm for Multitemporal Remote Sensing Image Analysis.
- Author
-
Hoxha, Genc, Chouaf, Seloua, Melgani, Farid, and Smara, Youcef
- Subjects
REMOTE sensing ,RECURRENT neural networks ,IMAGE analysis ,CONVOLUTIONAL neural networks ,SUPPORT vector machines ,MULTISPECTRAL imaging - Abstract
Change detection (CD) is among the most important applications in remote sensing (RS) that allows identifying the changes that occurred in a given geographical area across different times. Even though CD systems have seen a lot of progress in RS, their output is either a binary map highlighting the changing area or a semantic change map that indicates the type of change for each pixel. The change maps are often difficult to interpret by end users, and they omit important information such as relationships and attributes of the changed areas. Motivated by the recent advancement of image captioning in the RS community, in this article, we propose to describe the changes over bitemporal images through change sentence descriptions. The aim of this article is to provide a user-friendly interpretation of the occurred changes. To this end, we propose two change captioning (CC) systems that take bitemporal images as input and generate coherent sentence descriptions of the occurred changes. Convolutional neural networks (CNNs) are used to extract discriminative features from the bitemporal images and recurrent neural networks (RNNs) or support vector machines (SVMs) are exploited to generate coherent change descriptions. Furthermore, in the absence of a CC dataset to test our systems, we propose two new datasets. One is based on very high-resolution RGB images, and the other one is based on multispectral RS images. The obtained experimental results show promising capabilities of the proposed systems to generate coherent change descriptions from the bitemporal images. The datasets are available at the following link: https://disi.unitn.it/~melgani/datasets.html. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Nonnegative-Constrained Joint Collaborative Representation With Union Dictionary for Hyperspectral Anomaly Detection.
- Author
-
Chang, Shizhen and Ghamisi, Pedram
- Subjects
ANOMALY detection (Computer security) ,LABOR union recognition ,IMAGE converters ,IMAGE representation ,KERNEL functions - Abstract
Recently, many collaborative representation (CR)-based algorithms have been proposed for hyperspectral anomaly detection (AD). CR-based detectors approximate the image by a linear combination of background dictionaries and a coefficient matrix and derive the detection map from the recovery residuals. However, these CR-based detectors are often established on the premise of precise background features and strong image representation, which are very difficult to obtain. In addition, solving for the coefficient matrix under the usual $l_{2}$-norm minimization is very time-consuming. To address these issues, a nonnegative-constrained joint collaborative representation (NJCR) model is proposed in this article for the hyperspectral AD task. To extract reliable samples, a union dictionary consisting of background and anomaly subdictionaries is designed, where the background subdictionary is obtained at the superpixel level and the anomaly subdictionary is extracted by a predetection process. The coefficient matrix is then jointly optimized by Frobenius norm regularization with a nonnegative constraint and a sum-to-one constraint. After the optimization process, the abnormal information is finally derived by calculating the residuals that exclude the assumed background information. To conduct comparable experiments, the NJCR model and its kernel version (KNJCR) are tested on four hyperspectral image (HSI) datasets and achieve superior results compared with other state-of-the-art detectors. The code of the proposed method is available online (https://github.com/ShizhenChang/NJCR). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
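CR-based detectors like the one above score a pixel by the residual left after approximating its spectrum with dictionary atoms under nonnegativity. A hedged per-pixel sketch using SciPy's nonnegative least squares, which drops the paper's sum-to-one constraint and joint Frobenius-norm optimization:

```python
import numpy as np
from scipy.optimize import nnls

def cr_residual_scores(pixels, dictionary):
    """Anomaly score = nonnegative recovery residual against a dictionary.

    pixels: (N, B) spectra; dictionary: (B, K) background atoms. The
    paper's joint optimization with a sum-to-one constraint and a union
    (background + anomaly) dictionary is simplified here to independent
    per-pixel nonnegative least squares.
    """
    scores = []
    for x in pixels:
        _, rnorm = nnls(dictionary, x)   # residual ||D a - x|| with a >= 0
        scores.append(rnorm)
    return np.array(scores)

# Background spectra lie in the dictionary's nonnegative cone; an anomaly does not.
D = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 bands, 2 atoms
bg = np.array([2.0, 3.0, 5.0])      # = 2*atom0 + 3*atom1 -> residual ~0
anom = np.array([1.0, 1.0, -2.0])   # unreachable with nonnegative coefficients
s = cr_residual_scores(np.stack([bg, anom]), D)
```

Thresholding the residual scores yields the detection map; pixels the background dictionary cannot reconstruct stand out.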
48. Vehicle Trace Detection in Two-Pass SAR Coherent Change Detection Images With Spatial Feature Enhanced Unet and Adaptive Augmentation.
- Author
-
Zhang, Jinsong, Xing, Mengdao, Sun, Guang-Cai, and Shi, Xin
- Subjects
SYNTHETIC aperture radar ,CONVOLUTIONAL neural networks ,DATA augmentation ,REMOTE sensing ,CHARGE coupled devices - Abstract
As a typical application of remote sensing technology, change detection finds ground information changes by acquiring images of the same region at different times. Change detection using synthetic aperture radar (SAR), with its all-day and all-weather capability, is usually applied to monitor significant surface changes, such as flood disasters and earthquake deformation. However, when it comes to detecting subtle changes such as vehicle traces, traditional methods, which ignore the phase coherence between image pairs, cannot bring out these faint changes in the difference image. SAR coherent change detection (CCD), based on repeat-pass repeat-geometry complex images and utilizing both intensity and phase information, can exhibit subtle vehicle traces in the difference image. However, the complicated background and decorrelation factors significantly degrade the quality of difference images, making automatic trace detection difficult. This article proposes a spatial feature enhanced Unet and adaptive data augmentation to realize vehicle trace detection. More specifically, a pseudocolor image is first synthesized based on a two-stage coherence estimation method. Then, considering the long continuity and parallel distribution of vehicle trace samples, the enhanced Unet is constructed by fusing a spatial convolutional neural network with a spatial attention mechanism. After that, an adaptive data augmentation strategy is presented by introducing manual registration errors and multiple estimation windows. Finally, the experimental results on the Sandia CCD data and our measured data demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
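CCD as described above hinges on estimating the coherence between two co-registered complex SAR passes: unchanged ground stays near 1, while subtle disturbances such as vehicle traces decorrelate toward 0. A minimal boxcar coherence estimator (an illustrative sketch, not the paper's two-stage method):

```python
import numpy as np

def local_coherence(s1, s2, win=3):
    """Boxcar estimate of coherence magnitude between two co-registered
    complex SAR passes: |sum(s1 conj(s2))| / sqrt(sum|s1|^2 sum|s2|^2)
    over a sliding win x win window. Near 1 = unchanged, near 0 = changed.
    """
    H, W = s1.shape
    r = win // 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            a = s1[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            b = s2[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            num = np.abs(np.sum(a * np.conj(b)))
            den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
            out[i, j] = num / (den + 1e-12)
    return out

rng = np.random.default_rng(1)
scene = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
pass2 = scene.copy()
pass2[4:7, 4:7] = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # disturbed patch
gamma = local_coherence(scene, pass2)
```

Unchanged pixels keep coherence at 1 exactly, while the disturbed patch drops sharply, which is the contrast the CCD difference image exploits.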
49. Attention-Based Fully Convolutional DenseNet for Earthquake Detection.
- Author
-
Elsayed, Hagar S., Saad, Omar M., Soliman, M. Sami, Chen, Yangkang, and Youness, Hassan A.
- Subjects
SEISMOGRAMS ,DEEP learning ,DETECTION alarms ,FALSE alarms ,EARTHQUAKES - Abstract
We propose a novel deep learning method using an attention-based fully convolutional dense network (FCDNet) for automatic earthquake detection. The FCDNet consists of encoder–decoder parts with skip connections, where each encoder–decoder block contains a block of densely connected layers to enhance the feature learning capability. The spatial attention mechanism is added within the FCDNet to assign greater attention to useful features and hence improve the accuracy of earthquake detection. The time–frequency representations of three-component seismograms produced by the Stockwell transform are used for better extracting the hidden data features. The attention-based FCDNet extracts the time–frequency features needed for distinguishing the seismic signal from the background noise. We evaluate the performance of the proposed method using a Mediterranean dataset. The attention-based FCDNet is trained using 90% of the Mediterranean dataset and tested using the remaining 10%. Accordingly, the training and testing accuracies are 97.71% and 97.02%, respectively. The intersection over union (IoU), precision, recall, and F1 score of the attention-based FCDNet are 93.80%, 99.72%, 99.55%, and 99.64%, respectively. Moreover, to evaluate the generalization ability of the trained model, we utilize 100000 seismic waveforms recorded in different seismic regions from the global STanford EArthquake Dataset (STEAD) dataset for testing, which shows robust performance. We also apply the attention-based FCDNet to the Japanese seismic data and compare the performance to the CRED and SCALODEEP methods. The attention-based FCDNet outperforms the benchmark methods and achieves a higher detection accuracy of 99.46%. The attention-based FCDNet is additionally evaluated using one-day continuous seismic data recording a seismic swarm that occurred in the Helike region. As a result, the attention-based FCDNet recognizes 135 earthquakes and raises 15 false alarms with a detection accuracy of 90.06%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
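The abstract reports IoU, precision, recall, and F1 scores. For reference, these measures are computed from the true positive, false positive, and false negative counts of binary masks, e.g.:

```python
import numpy as np

def seg_metrics(pred, truth):
    """IoU, precision, recall, and F1 for binary detection masks,
    the evaluation measures reported in the abstract.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # correctly flagged samples
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed detections
    iou = tp / (tp + fp + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return iou, prec, rec, f1

truth = np.zeros(10); truth[2:6] = 1   # ground-truth event window
pred = np.zeros(10); pred[3:7] = 1     # prediction overlaps on 3 samples
iou, prec, rec, f1 = seg_metrics(pred, truth)
```

Note that IoU is always the strictest of the four, which is why it is the lowest figure quoted in the abstract.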
50. MSLM-RF: A Spatial Feature Enhanced Random Forest for On-Board Hyperspectral Image Classification.
- Author
-
Yuan, Shuai, Sun, Yanan, He, Weifeng, Gu, Qianrong, Xu, Shi, Mao, Zhigang, and Tu, Shikui
- Subjects
RANDOM forest algorithms ,FEATURE extraction ,CLASSIFICATION algorithms ,ENERGY consumption ,SPATIAL filters ,EXPONENTIATION ,COMPUTATIONAL complexity - Abstract
Hyperspectral imaging (HSI) greatly improves the capacity to identify and monitor ground objects due to its high spectral resolution. As real-time remote sensing monitoring and warning tasks attract more attention, new algorithms for low-power on-board classification are required to reduce the transmission time of the satellite downlink. In this article, we propose the multiscale local maximum random forest (MSLM-RF) to significantly reduce energy consumption while retaining high classification accuracy. The proposed MSLM-RF uses multiscale maximum filters for spatial feature extraction and a random forest for classification after spectral and spatial feature fusion. The spatial features are efficiently extracted with low computational complexity by regarding the maximum light intensity values in different ranges of pixels as anchor points. MSLM-RF consists only of integer comparisons and a few additions, thereby eliminating energy-hungry operations such as multiplication and exponentiation. According to experimental results on HSI benchmark datasets, MSLM-RF delivers a better tradeoff between accuracy and computational complexity than state-of-the-art classification algorithms. Moreover, MSLM-RF achieves higher average classification accuracy and lower energy consumption than previous on-board algorithms. The obtained results show the suitability of the proposed algorithm for practical real-time on-board classification tasks with low energy consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
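The multiscale local maximum extraction above takes, for each pixel, the maximum intensity over neighborhoods at several scales, using comparisons only. A minimal single-band sketch (the scale radii and names are illustrative, not from the paper):

```python
import numpy as np

def multiscale_local_max(band, scales=(1, 2)):
    """Per-pixel local maxima at several window radii.

    For each radius r, the feature is the maximum over the (2r+1)x(2r+1)
    neighborhood, computed with comparisons only (no multiplications),
    in the spirit of the abstract. Output shape: (H, W, len(scales)).
    """
    H, W = band.shape
    feats = np.zeros((H, W, len(scales)), dtype=band.dtype)
    for k, r in enumerate(scales):
        for i in range(H):
            for j in range(W):
                patch = band[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                feats[i, j, k] = patch.max()
    return feats

band = np.zeros((5, 5), dtype=np.int32)
band[2, 2] = 9   # a bright anchor point
f = multiscale_local_max(band)
```

Stacking these maps across scales (and bands) gives the spatial feature vector that is concatenated with the spectrum before the random forest.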