48 results for "Zongxu Pan"
Search Results
2. A Sidelobe-Aware Small Ship Detection Network for Synthetic Aperture Radar Imagery
- Author
-
Yongsheng Zhou, Hanchao Liu, Fei Ma, Zongxu Pan, and Fan Zhang
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2023
- Full Text
- View/download PDF
3. SiamMDM: An Adaptive Fusion Network With Dynamic Template for Real-Time Satellite Video Single Object Tracking
- Author
-
Jianwei Yang, Zongxu Pan, Ziming Wang, Bin Lei, and Yuxin Hu
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2023
- Full Text
- View/download PDF
4. APAFNet: Single-Frame Infrared Small Target Detection by Asymmetric Patch Attention Fusion
- Author
-
Ziming Wang, Jianwei Yang, Zongxu Pan, Yuhan Liu, Bin Lei, and Yuxin Hu
- Subjects
Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology - Published
- 2023
- Full Text
- View/download PDF
5. FSANet: Feature-and-Spatial-Aligned Network for Tiny Object Detection in Remote Sensing Images
- Author
-
Jixiang Wu, Zongxu Pan, Bin Lei, and Yuxin Hu
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
6. SAR Interference Suppression Algorithm Based on Low-Rank and Sparse Matrix Decomposition in Time–Frequency Domain
- Author
-
Lyu Qiyuan, Yuxin Hu, Bing Han, Zongxu Pan, Wei Sun, Wen Hong, and Guangzuo Li
- Subjects
Synthetic aperture radar, Matrix (mathematics), Interference (communication), Noise (signal processing), Computer science, Random projection, Time domain, Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology, Algorithm, Electromagnetic interference, Sparse matrix - Abstract
Radio frequency electromagnetic interference is a relatively common phenomenon, especially for synthetic aperture radar (SAR) systems working in the P- or L-band. Compared with narrowband interference, wideband interference, particularly interference whose signal parameters change frequently, remains difficult to suppress. In this letter, a suppression algorithm for interference with wideband and complicated parameters is proposed, based on low-rank and sparse matrix decomposition (LRSMD) in the time-frequency domain (TFD) of the signal. The proposed algorithm first transforms the SAR signal into the TFD. Then, LRSMD based on bilateral random projection (BRP) is applied to decompose the time-frequency spectrum matrix into three parts: a low-rank matrix representing the interference, a sparse matrix representing the SAR signal, and a noise matrix. Finally, the sparse matrix is transformed back into the time domain to obtain the SAR signal without interference. The proposed algorithm is applied to single look complex (SLC) Sentinel-1 SAR data to validate its effectiveness and efficiency.
- Published
- 2022
- Full Text
- View/download PDF
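The core of the LRSMD algorithm in entry 6 is a low-rank (interference) plus sparse (SAR signal) split of the time-frequency matrix. The sketch below illustrates that idea with a plain alternating truncated-SVD / soft-thresholding iteration in NumPy rather than the bilateral random projection solver used in the paper; the STFT settings, rank and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def suppress_rfi(pulse, fs, rank=2, thresh=0.1, n_iter=30):
    """Split one range line into interference (low-rank in the TF domain)
    and SAR signal (sparse in the TF domain). Parameters are assumptions."""
    _, _, S = stft(pulse, fs=fs, nperseg=128, return_onesided=False)
    sparse = np.zeros_like(S)
    for _ in range(n_iter):
        # low-rank estimate of the interference via truncated SVD
        U, s, Vh = np.linalg.svd(S - sparse, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vh[:rank]
        # sparse estimate of the SAR signal via complex soft-thresholding
        resid = S - low_rank
        mag = np.abs(resid)
        sparse = resid / (mag + 1e-12) * np.maximum(mag - thresh, 0.0)
    # back to the time domain: the sparse component is the cleaned signal
    _, cleaned = istft(sparse, fs=fs, nperseg=128, input_onesided=False)
    return cleaned[: len(pulse)]

# usage on a synthetic chirp contaminated by a strong sinusoidal interferer
fs = 1e6
t = np.arange(2048) / fs
sar = np.exp(1j * 2 * np.pi * (5e4 * t + 2e7 * t ** 2))
rfi = 5 * np.exp(1j * 2 * np.pi * 1e5 * t)
print(suppress_rfi(sar + rfi, fs).shape)
```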
7. Learning Time–Frequency Information With Prior for SAR Radio Frequency Interference Suppression
- Author
-
Jiayuan Shen, Bing Han, Zongxu Pan, Guangzuo Li, Yuxin Hu, and Chibiao Ding
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
8. Multi-Representation Dynamic Adaptation Network for Remote Sensing Scene Classification
- Author
-
Ben Niu, Zongxu Pan, Jixiang Wu, Yuxin Hu, and Bin Lei
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
9. MAGE: Multisource Attention Network With Discriminative Graph and Informative Entities for Classification of Hyperspectral and LiDAR Data
- Author
-
Di Xiu, Zongxu Pan, Yirong Wu, and Yuxin Hu
- Subjects
General Earth and Planetary Sciences, Electrical and Electronic Engineering - Published
- 2022
- Full Text
- View/download PDF
10. Spatiotemporal Data Fusion and CNN Based Ship Tracking Method for Sequential Optical Remote Sensing Images From the Geostationary Satellite
- Author
-
Qiantong Wang, Yuxin Hu, Zongxu Pan, Fangjian Liu, and Bing Han
- Subjects
Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology - Published
- 2022
- Full Text
- View/download PDF
11. An Effective Network Integrating Residual Learning and Channel Attention Mechanism for Thin Cloud Removal
- Author
-
Xue Wen, Zongxu Pan, Yuxin Hu, and Jiayin Liu
- Subjects
Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology - Published
- 2022
- Full Text
- View/download PDF
12. Exploring PolSAR Images Representation via Self-Supervised Learning and Its Application on Few-Shot Classification
- Author
-
Wu Zhang, Zongxu Pan, and Yuxin Hu
- Subjects
Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology - Published
- 2022
- Full Text
- View/download PDF
13. Learning From Reliable Unlabeled Samples for Semi-Supervised SAR ATR
- Author
-
Keyang Chen, Zongxu Pan, Zhongling Huang, Yuxin Hu, and Chibiao Ding
- Subjects
Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology - Published
- 2022
- Full Text
- View/download PDF
14. PistonNet: Object Separating From Background by Attention for Weakly Supervised Ship Detection
- Author
-
Yi Yang, Zongxu Pan, Yuxin Hu, and Chibiao Ding
- Subjects
Atmospheric Science, Computers in Earth Sciences - Published
- 2022
- Full Text
- View/download PDF
15. SemanticAnchors: Sequential Fusion using Lidar Point Cloud and Anchors with Semantic Annotations for 3D Object Detection
- Author
-
Zhentong Gao, Qiantong Wang, Zongxu Pan, Hui Long, Yuxin Hu, and Zheng Li
- Published
- 2022
- Full Text
- View/download PDF
16. 3D Object Detection Based on Feature Fusion of Point Cloud Sequences
- Author
-
Zhenyu Zhai, Qiantong Wang, Zongxu Pan, Wenlong Hu, and Yuxin Hu
- Published
- 2022
- Full Text
- View/download PDF
17. Muti-Frame Point Cloud Feature Fusion Based on Attention Mechanisms for 3D Object Detection
- Author
-
Zhenyu Zhai, Qiantong Wang, Zongxu Pan, Zhentong Gao, and Wenlong Hu
- Subjects
autonomous driving, 3D object detection, point cloud sequences, attention mechanism, feature fusion, Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry - Abstract
Object detection from continuous frames of point clouds is a new research direction. Currently, most studies fuse multi-frame point clouds using concatenation-based methods, which align different frames using GPS, IMU, and similar information. However, this fusion strategy can only align static objects, not moving ones. In this paper, we propose a non-local multi-scale feature fusion method that can handle both moving and static objects without GPS- or IMU-based registration. Considering that non-local methods are resource-consuming, we propose a simplified non-local block that exploits the sparsity of the point cloud: by filtering out empty units, memory consumption is reduced by 99.93%. In addition, triple attention is adopted to enhance key object information and suppress background noise, further benefiting non-local feature fusion. Finally, we verify the method on PointPillars and CenterPoint. Experimental results show that the proposed method improves mAP by 3.9% and 4.1% compared with the concatenation-based fusion modules PointPillars-2 and CenterPoint-2, respectively, and outperforms the powerful 3D-VID by 1.2% mAP.
- Published
- 2022
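The memory saving reported in entry 17 comes from running non-local attention only over the non-empty units of a sparse bird's-eye-view feature map. A minimal PyTorch sketch of that filtering idea is below; the tensor layout, channel count and the single-head embedded-Gaussian attention are assumptions for illustration, not the authors' exact block, and the triple-attention module is omitted.

```python
import torch

def sparse_non_local(feat, eps=1e-6):
    """feat: (C, H, W) BEV feature map where most cells are all-zero.
    Attention is computed only among the non-empty cells, which is what
    keeps memory low. Shapes and eps are illustrative assumptions."""
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W).t()                    # (HW, C)
    mask = flat.abs().sum(dim=1) > eps                   # keep non-empty units
    x = flat[mask]                                       # (N, C), N << HW
    # embedded-Gaussian non-local attention over the kept units only
    attn = torch.softmax(x @ x.t() / C ** 0.5, dim=-1)   # (N, N)
    out = flat.clone()
    out[mask] = x + attn @ x                             # residual connection
    return out.t().reshape(C, H, W)

# usage: a 64-channel 200x200 pseudo-image with ~1% occupied cells
feat = torch.zeros(64, 200, 200)
idx = torch.randint(0, 200, (2, 400))
feat[:, idx[0], idx[1]] = torch.randn(64, 400)
print(sparse_non_local(feat).shape)
```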
18. Radio Frequency Interference Suppression in SAR System Using Prior-Induced Deep Neural Network
- Author
-
Jiayuan Shen, Bing Han, Zongxu Pan, Yuxin Hu, Wen Hong, and Chibiao Ding
- Published
- 2022
- Full Text
- View/download PDF
19. HDEC-TFA: An Unsupervised Learning Approach for Discovering Physical Scattering Properties of Single-Polarized SAR Image
- Author
-
Bin Lei, Xiaolan Qiu, Zhongling Huang, Zongxu Pan, and Mihai Datcu
- Subjects
Synthetic aperture radar, Backscatter, Computer science, Polarimetry, Scattering, Azimuth, Machine learning, Electrical and Electronic Engineering, Cluster analysis, Perspective (graphical), Pattern recognition, Time–frequency analysis, Aerospace engineering, General Earth and Planetary Sciences, Unsupervised learning, Artificial intelligence - Abstract
Understanding the physical properties and scattering mechanisms contributes to synthetic aperture radar (SAR) image interpretation. For single-polarized SAR data, however, it is difficult to extract the physical scattering mechanisms due to lack of polarimetric information. Time–frequency analysis (TFA) on complex-valued SAR image provides extra information in frequency perspective beyond the “image” domain. Based on TFA theory, we propose to generate the subband scattering pattern for every object in complex-valued SAR image as the physical property representation, which reveals backscattering variations along slant-range and azimuth directions. In order to discover the inherent patterns and generate a scattering classification map from single-polarized SAR image, an unsupervised hierarchical deep embedding clustering (HDEC) algorithm based on TFA (HDEC-TFA) is proposed to learn the embedded features and cluster centers simultaneously and hierarchically. The polarimetric analysis result for quad-pol SAR images is applied as reference data of physical scattering mechanisms. In order to compare the scattering classification map obtained from single-polarized SAR data with the physical scattering mechanism result from full-polarized SAR, and to explore the relationship and similarity between them in a quantitative way, an information theory based evaluation method is proposed. We take Gaofen-3 quad-polarized SAR data for experiments, and the results and discussions demonstrate that the proposed method is able to learn valuable scattering properties from single-polarization complex-valued SAR data, and to extract some specific targets as well as polarimetric analysis. At last, we give a promising prospect to future applications.
- Published
- 2021
- Full Text
- View/download PDF
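Entry 19 represents each object by its backscatter variation across range and azimuth sub-bands of the complex image spectrum before clustering. A rough NumPy sketch of such a sub-band (sub-look) decomposition is below; the 3x3 band split and the normalization are assumptions, not the paper's exact time-frequency analysis settings.

```python
import numpy as np

def subband_pattern(slc, n_az=3, n_rg=3):
    """slc: complex-valued SAR image chip (azimuth x range).
    Returns an (n_az * n_rg,) vector of sub-band intensities for the chip.
    Band counts and normalization are illustrative assumptions."""
    spec = np.fft.fftshift(np.fft.fft2(slc))
    A, R = spec.shape
    pattern = []
    for i in range(n_az):
        for j in range(n_rg):
            band = spec[i * A // n_az:(i + 1) * A // n_az,
                        j * R // n_rg:(j + 1) * R // n_rg]
            sub = np.fft.ifft2(np.fft.ifftshift(band))   # sub-look image
            pattern.append(np.mean(np.abs(sub) ** 2))
    pattern = np.asarray(pattern)
    return pattern / (pattern.sum() + 1e-12)             # scattering pattern

# usage on a random complex chip
chip = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
print(subband_pattern(chip))
```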
20. Deep SAR-Net: Learning objects from signals
- Author
-
Bin Lei, Zongxu Pan, Mihai Datcu, and Zhongling Huang
- Subjects
Synthetic aperture radar, Computer science, Texture, Convolutional neural network, Discriminative model, Computers in Earth Sciences, Engineering (miscellaneous), Deep convolutional neural network, Complex-valued SAR images, Transfer learning, Physical properties, Deep learning, Pattern recognition, Atomic and Molecular Physics, and Optics, Computer Science Applications, Time–frequency analysis, Satellite, Artificial intelligence, Transfer of learning - Abstract
This paper introduces a novel SAR-specific deep learning framework for complex-valued Synthetic Aperture Radar (SAR) images. Conventional methods based on deep convolutional neural networks usually take the amplitude information of single-polarization SAR images as the input to learn hierarchical spatial features automatically, and may have difficulty discriminating objects with similar texture but distinctive scattering patterns. Our framework, Deep SAR-Net, takes complex-valued SAR images into consideration to learn both the spatial texture information and the backscattering patterns of objects on the ground. On the one hand, we transfer layers pre-trained on detected SAR images to extract spatial features from intensity images. On the other hand, we dig into the Fourier domain to learn the physical properties of objects through joint time-frequency analysis of the complex-valued SAR images. We evaluate the effectiveness of Deep SAR-Net on three complex-valued SAR datasets from the Sentinel-1 and TerraSAR-X satellites and demonstrate that it works better than conventional deep CNNs, especially on man-made object classes. The proposed datasets and the trained Deep SAR-Net model with all code are provided.
- Published
- 2020
- Full Text
- View/download PDF
21. Learning Capsules for SAR Target Recognition
- Author
-
Ji Wang, Yunrui Guo, Wenjing Yang, Meiming Wang, and Zongxu Pan
- Subjects
Synthetic aperture radar, Atmospheric Science, Network complexity, Computer science, Geophysics. Cosmic physics, Feature extraction, Convolutional neural network, Automatic target recognition, Robustness (computer science), Computers in Earth Sciences, synthetic aperture radar (SAR) target recognition, convolutional neural network (CNN), Artificial neural network, Deep learning, Pattern recognition, Ocean engineering, Artificial intelligence, Capsule network - Abstract
Deep learning has been successfully utilized in synthetic aperture radar (SAR) automatic target recognition tasks and has obtained state-of-the-art results. However, current deep learning algorithms do not perform well when SAR images are occluded, noisy, or subject to a large depression angle variance. This article proposes a novel method, the SAR capsule network, to achieve accurate and robust classification of SAR images without significantly increasing network complexity. Specifically, we develop a convolutional neural network extension based on Hinton's capsule network to capture the spatial relationships between different entities in a SAR image for classification. The SAR capsules are learned by a vector-based full connection operation instead of the traditional routing process, which not only alleviates the computational burden but also improves recognition accuracy. In occlusion, additive noise, and multiplicative noise tests, the SAR capsule network shows superior robustness compared with typical convolutional neural networks. When training data are missing in a certain aspect angle range or there is a large depression angle variance between training and test data, the proposed network achieves better performance than existing works and shows competitive advantages in several test scenarios.
- Published
- 2020
- Full Text
- View/download PDF
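Entry 21 replaces dynamic routing with a vector-based fully connected mapping from primary capsules to class capsules, followed by the usual squash non-linearity, with class scores given by capsule lengths. A compact PyTorch sketch of such a layer is below; the capsule counts and dimensions are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

def squash(v, dim=-1, eps=1e-8):
    # standard capsule squashing: keeps direction, maps length into [0, 1)
    n2 = (v ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * v / torch.sqrt(n2 + eps)

class VectorFCCapsule(nn.Module):
    """Maps N_in primary capsules of dim D_in to N_out class capsules of
    dim D_out with one linear map instead of routing-by-agreement.
    Sizes are illustrative assumptions."""
    def __init__(self, n_in=32 * 6 * 6, d_in=8, n_out=10, d_out=16):
        super().__init__()
        self.n_out, self.d_out = n_out, d_out
        self.fc = nn.Linear(n_in * d_in, n_out * d_out)

    def forward(self, primary):                 # primary: (B, N_in, D_in)
        b = primary.size(0)
        out = self.fc(primary.flatten(1))       # (B, N_out * D_out)
        return squash(out.view(b, self.n_out, self.d_out))

caps = VectorFCCapsule()
x = torch.randn(4, 32 * 6 * 6, 8)
scores = caps(x).norm(dim=-1)                   # capsule length = class score
print(scores.shape)                             # torch.Size([4, 10])
```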
22. Method of Infrared Small Moving Target Detection Based on Coarse-to-Fine Structure in Complex Scenes
- Author
-
Yapeng Ma, Yuhan Liu, Zongxu Pan, and Yuxin Hu
- Subjects
General Earth and Planetary Sciences, infrared small moving target detection, spatiotemporal information, weighting module, local contrast measure - Abstract
In combat systems, infrared target detection is an important issue worthy of study. However, due to the small size of targets in infrared images, the low signal-to-noise ratio of the images, and the uncertainty of target motion, detecting targets accurately and quickly remains difficult. Therefore, this paper proposes a method of infrared small moving target detection based on a coarse-to-fine structure (MCFS). The algorithm mainly consists of three modules. The potential target extraction module first smooths the image through a Laplacian filter and extracts a prior weight for the image by the proposed weighted harmonic method to enhance the target and suppress the background. Then, the local variance feature map and local contrast feature map of the image are computed through a multiscale three-layer window to obtain the potential target region. Next, a new robust region intensity level (RRIL) algorithm is proposed in the spatial-domain weighting module. Finally, the temporal-domain weighting module is established to enhance the target positions by analyzing the kurtosis features of temporal signals. Experiments on real infrared datasets show that the proposed method successfully detects the targets while achieving the strongest background suppression and target enhancement among the compared methods, which verifies the effectiveness of the algorithm.
- Published
- 2023
- Full Text
- View/download PDF
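The potential-target extraction stage of entry 22 scores each pixel with local-variance and local-contrast features computed over a multiscale three-layer window. The NumPy sketch below implements a generic multiscale local-contrast measure of that kind; the window sizes are assumptions, and the paper's weighted harmonic prior, RRIL spatial weighting and temporal kurtosis weighting are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_map(img, scales=(3, 5, 7)):
    """Multiscale local contrast: mean of the center cell vs. the strongest
    of the eight surrounding cells. Window sizes are illustrative; the
    paper additionally pre-smooths the frame and adds spatial/temporal
    weighting modules that are omitted here."""
    img = img.astype(float)
    out = np.zeros_like(img)
    for k in scales:
        center = uniform_filter(img, size=k)
        # approximate the 8 surrounding cell means by shifting the center map
        shifts = [np.roll(center, (dy * k, dx * k), axis=(0, 1))
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
        surround = np.max(np.stack(shifts), axis=0)
        out = np.maximum(out, center ** 2 / (surround + 1e-6))  # LCM-style ratio
    return out

frame = np.random.rand(128, 128)
frame[60:63, 60:63] += 2.0            # synthetic dim small target
print(np.unravel_index(np.argmax(local_contrast_map(frame)), frame.shape))
```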
23. D-MFPN: A Doppler Feature Matrix Fused with a Multilayer Feature Pyramid Network for SAR Ship Detection
- Author
-
Yucheng Zhou, Kun Fu, Bing Han, Junxin Yang, Zongxu Pan, Yuxin Hu, and Di Yin
- Subjects
ship detection, General Earth and Planetary Sciences, synthetic aperture radar (SAR) - Abstract
Ship detection from synthetic aperture radar (SAR) images has become a major research field in recent years, playing an important role in ocean monitoring, marine rescue activities, and marine safety warnings. However, some factors still restrict further improvements in detection performance, e.g., the multi-scale variation of ships and unfocused images caused by motion. To resolve these issues, this paper proposes a Doppler feature matrix fused with a multi-layer feature pyramid network (D-MFPN) for SAR ship detection. The D-MFPN takes single-look complex image data as input and consists of two branches: the image branch uses a multi-layer feature pyramid network, combined with an attention module that refines the feature map's expressiveness, to enhance the localization capability for large ships, and the Doppler branch builds a feature matrix that characterizes the ship's motion state by estimating the Doppler center frequency and the frequency modulation rate offset. To confirm the validity of each branch, individual ablation experiments are conducted. The experimental results on the Gaofen-3 satellite ship dataset illustrate the D-MFPN's optimal performance in defocused ship detection tasks compared with six other competitive convolutional neural network (CNN)-based SAR ship detectors. These satisfactory results demonstrate the application value of a deep-learning model fused with Doppler features in the field of SAR ship detection.
- Published
- 2023
- Full Text
- View/download PDF
24. Open Set Domain Adaptation via Instance Affinity Metric and Fine-grained Alignment for Remote Sensing Scene Classification
- Author
-
Ben Niu, Zongxu Pan, Keyang Chen, Yuxin Hu, and Bin Lei
- Subjects
Electrical and Electronic Engineering, Geotechnical Engineering and Engineering Geology - Published
- 2023
- Full Text
- View/download PDF
25. Cloudformer: A Cloud-Removal Network Combining Self-Attention Mechanism and Convolution
- Author
-
Peiyang Wu, Zongxu Pan, Hairong Tang, and Yuxin Hu
- Subjects
cloud removal, transformer, self-attention, convolution, General Earth and Planetary Sciences - Abstract
Optical remote-sensing images have a wide range of applications, but they are often obscured by clouds, which affects subsequent analysis. Cloud removal therefore becomes a necessary preprocessing step. In this paper, a novel transformer-based network named Cloudformer is proposed. The proposed method combines the advantages of convolution and the self-attention mechanism: it uses convolution layers to extract simple, short-range features in the shallow layers, and exploits the self-attention mechanism to capture long-range correlations in the deep layers. The method also introduces Locally-enhanced Positional Encoding (LePE) to flexibly generate suitable positional encodings for different inputs and to utilize local information to enhance the encoding capability. Extensive experiments on public datasets demonstrate the superior ability of the method to remove both thin and thick clouds, and the effectiveness of the proposed modules is validated by ablation studies.
- Published
- 2022
- Full Text
- View/download PDF
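The Locally-enhanced Positional Encoding used in entry 25 is typically realized as a depthwise convolution over the value tensor whose output is added to the attention result, so the positional signal is derived from the input itself. Below is a single-head PyTorch sketch of that idea; it illustrates the LePE concept rather than the Cloudformer implementation, and the dimensions are assumed.

```python
import torch
import torch.nn as nn

class AttentionWithLePE(nn.Module):
    """Single-head self-attention whose positional encoding is a depthwise
    conv over V (LePE). Dimensions are illustrative assumptions."""
    def __init__(self, dim=64):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.lepe = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.scale = dim ** -0.5

    def forward(self, x, h, w):                       # x: (B, h*w, dim)
        b, n, c = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # positional term: depthwise conv of V in its 2-D layout
        pos = self.lepe(v.transpose(1, 2).reshape(b, c, h, w))
        pos = pos.flatten(2).transpose(1, 2)          # back to (B, n, c)
        return attn @ v + pos

blk = AttentionWithLePE()
print(blk(torch.randn(2, 16 * 16, 64), 16, 16).shape)
```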
26. Progress of deep learning-based target recognition in radar images
- Author
-
Zongxu Pan, Bingchen Zhang, and Quanzhi An
- Subjects
Earth observation, General Computer Science, Artificial neural network, Computer science, Deep learning, Convolutional neural network, Task (project management), Radar imaging, Metric (mathematics), Computer vision, Artificial intelligence, Transfer of learning, Engineering (miscellaneous) - Abstract
Radar is an effective means of Earth observation, and target recognition in radar images is an important research direction within it. Deep learning has been successfully applied to many fields, but training deep neural networks requires large amounts of data, and the lack of samples has become the major factor impeding the application of deep learning approaches to target recognition in radar images. This paper reviews the research progress of deep learning based target recognition in radar images, surveying and summarizing representative methods. First, data augmentation and neural network models designed for radar image target recognition are introduced. The paper then presents in detail the few-sample target recognition methods based on transfer learning, metric learning, and semi-supervised learning proposed by our research group. Finally, existing problems and future development trends are discussed.
- Published
- 2019
- Full Text
- View/download PDF
27. Super-Resolution of Single Remote Sensing Image Based on Residual Dense Backprojection Networks
- Author
-
Zongxu Pan, Jiayi Guo, Wen Ma, and Bin Lei
- Subjects
Computer science, Remote sensing application, Iterative reconstruction, Residual, Superresolution, Data set, General Earth and Planetary Sciences, RGB color model, Electrical and Electronic Engineering, Image resolution, Remote sensing, Block (data storage) - Abstract
High-resolution (HR) images are preferred for many remote sensing applications and can be obtained from their low-resolution (LR) counterparts via a technique referred to as super-resolution (SR). Among SR approaches, single image SR (SISR) methods aim at reconstructing the HR image from only one LR image. In this paper, a residual dense backprojection network (RDBPN)-based SISR method is proposed to enhance the resolution of RGB remote sensing images at medium and large scale factors. The proposed network consists of several residual dense backprojection blocks that contain two kinds of modules, the up-projection module and the down-projection module, which are densely connected within each block. Different from the chain-connected backprojection structure, the proposed method applies a residual backprojection block structure, which can exploit residual learning in both global and local manners. We further simplify the network by replacing the down-projection unit with a downscaling unit to accelerate reconstruction; this implementation is called fast RDBPN (FRDBPN). Several experiments on the UC Merced data set are conducted to validate the effectiveness of the proposed method, and the results indicate that: 1) the proposed residual block structure is superior to the chain-connected structure; 2) FRDBPN achieves a speedup of about 1.3 times with similar or even better reconstruction performance compared with RDBPN; and 3) RDBPN and FRDBPN outperform several state-of-the-art methods in terms of both quantitative evaluation and visual quality.
- Published
- 2019
- Full Text
- View/download PDF
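The building block of entry 27 is the back-projection pair: an up-projection that upsamples, projects back down, and corrects with the low-resolution residual, with the blocks then densely connected. A minimal PyTorch sketch of one up-projection unit is below; the kernel, stride and padding follow the common convention for a 4x scale factor and are assumptions here.

```python
import torch
import torch.nn as nn

class UpProjection(nn.Module):
    """Back-projection upsampling unit: up, back down, compute the
    LR-domain error, and up again. Kernel/stride chosen for 4x (assumed)."""
    def __init__(self, ch=64, kernel=8, stride=4, pad=2):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(ch, ch, kernel, stride, pad)
        self.down = nn.Conv2d(ch, ch, kernel, stride, pad)
        self.up2 = nn.ConvTranspose2d(ch, ch, kernel, stride, pad)
        self.act = nn.PReLU()

    def forward(self, lr_feat):
        h0 = self.act(self.up1(lr_feat))        # first HR feature estimate
        l0 = self.act(self.down(h0))            # project back to LR
        err = l0 - lr_feat                      # LR-domain residual
        return h0 + self.act(self.up2(err))     # corrected HR features

x = torch.randn(1, 64, 32, 32)
print(UpProjection()(x).shape)                  # -> (1, 64, 128, 128)
```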
28. Achieving Super-Resolution Remote Sensing Images via the Wavelet Transform Combined With the Recursive Res-Net
- Author
-
Wen Ma, Jiayi Guo, Zongxu Pan, and Bin Lei
- Subjects
Normalization (statistics), Computer science, Deep learning, Normalization (image processing), Wavelet transform, Pattern recognition, Iterative reconstruction, Residual, Wavelet, Frequency domain, General Earth and Planetary Sciences, Artificial intelligence, Electrical and Electronic Engineering, Image resolution - Abstract
Deep learning (DL) has been successfully applied to single image super-resolution (SISR), which aims at reconstructing a high-resolution (HR) image from its low-resolution (LR) counterpart. Different from most current DL-based methods, which perform reconstruction in the spatial domain, we use a scheme based in the frequency domain to reconstruct the HR image at various frequency bands. Further, we propose a method that incorporates the wavelet transform (WT) and the recursive Res-Net. The WT is applied to the LR image to divide it into various frequency components. Then, an elaborately designed network with recursive residual blocks is used to predict high-frequency components. Finally, the reconstructed image is obtained via the inverse WT. This paper has three main contributions: 1) an SISR scheme based on the frequency domain is proposed under a DL framework to fully exploit the potential to depict images at different frequency bands; 2) recursive block and residual learning in global and local manners are adopted to ease the training of the deep network, and the batch normalization layer is removed to increase the flexibility of the network, save memory, and promote speed; and 3) the low-frequency wavelet component is replaced by an LR image with more details to further improve performance. To validate the effectiveness of the proposed method, extensive experiments are performed using the NWPU-RESISC45 data set, and the results demonstrate that the proposed method outperforms several state-of-the-art methods in terms of both objective evaluation and subjective perspective.
- Published
- 2019
- Full Text
- View/download PDF
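Entry 28 reconstructs the high-resolution image by predicting wavelet sub-bands rather than pixels. The sketch below shows only the wavelet plumbing with a placeholder predictor: decompose, predict the high-frequency components, and invert. The PyWavelets calls are real, but `predict_high_freq` is a hypothetical stand-in for the recursive Res-Net.

```python
import numpy as np
import pywt

def predict_high_freq(lr_img, shape):
    """Hypothetical stand-in for the recursive Res-Net that regresses the
    HR high-frequency sub-bands from the LR input."""
    return tuple(np.zeros(shape) for _ in range(3))   # (cH, cV, cD)

def wavelet_sr(lr_img):
    # one-level DWT of the (bicubic-upsampled) LR image
    cA, _ = pywt.dwt2(lr_img, "haar")
    # the paper keeps a detail-rich low-frequency band and lets the network
    # predict the high-frequency bands; here the prediction is a stub
    cH, cV, cD = predict_high_freq(lr_img, cA.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")     # inverse WT -> HR image

lr = np.random.rand(64, 64)
print(wavelet_sr(lr).shape)                           # (64, 64)
```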
29. Infrared Dim and Small Target Detection from Complex Scenes via Multi-Frame Spatial–Temporal Patch-Tensor Model
- Author
-
Yuxin Hu, Yapeng Ma, Zongxu Pan, and Yuhan Liu
- Subjects
General Earth and Planetary Sciences, infrared image sequences, dim and small target detection, complex background - Abstract
Infrared imaging plays an important role in space-based early warning and anti-missile guidance due to its particular imaging mechanism. However, the signal-to-noise ratio of the infrared image is usually low and the target is moving, which makes most of the existing methods perform inferiorly, especially in very complex scenes. To solve these difficulties, this paper proposes a novel multi-frame spatial–temporal patch-tensor (MFSTPT) model for infrared dim and small target detection from complex scenes. First, the method of simultaneous sampling in spatial and temporal domains is adopted to make full use of the information between multi-frame images, establishing an image-patch tensor model that makes the complex background more in line with the low-rank assumption. Secondly, we propose utilizing the Laplace method to approximate the rank of the tensor, which is more accurate. Third, to suppress strong interference and sparse noise, a prior weighted saliency map is established through a weighted local structure tensor, and different weights are assigned to the target and background. Using an alternating direction method of multipliers (ADMM) to solve the model, we can accurately separate the background and target components and acquire the detection results. Through qualitative and quantitative analysis, experimental results of multiple real sequences verify the rationality and effectiveness of the proposed algorithm.
- Published
- 2022
- Full Text
- View/download PDF
30. A Hybrid and Explainable Deep Learning Framework for SAR Images
- Author
-
Zhongling Huang, Bin Lei, Mihai Datcu, and Zongxu Pan
- Subjects
Synthetic aperture radar, Computer science, Land cover, Complex-valued SAR Data, Convolutional neural network, Image (mathematics), Deep Learning, Physical Scattering Properties, Artificial neural network, Contextual image classification, Topic Modeling, Scattering, Pattern recognition, Patch-wise Classification, Artificial intelligence, Classifier (UML) - Abstract
Deep learning based patch-wise Synthetic Aperture Radar (SAR) image classification usually requires a large number of labeled data for training. Aiming at understanding SAR images with very limited annotation and taking full advantage of complex-valued SAR data, this paper proposes a general and practical framework for quad-, dual-, and single-polarized SAR data. In this framework, two important elements are taken into consideration: image representation and physical scattering properties. Firstly, a convolutional neural network is applied for SAR image representation. Based on time-frequency analysis and polarimetric decomposition, the scattering labels are extracted from complex SAR data with unsupervised deep learning. Then, a bag of scattering topics for a patch is obtained via topic modeling. By assuming that the generated scattering topics can be regarded as the abstract attributes of SAR images, we propose a soft constraint between scattering topics and image representations to refine the network. Finally, a classifier for land cover and land use semantic labels can be learned with only a few annotated samples. The framework is hybrid for the combination of deep neural network and explainable approaches. Experiments are conducted on Gaofen-3 complex SAR data and the results demonstrate the effectiveness of our proposed framework.
- Published
- 2020
- Full Text
- View/download PDF
31. Classification of Large-Scale High-Resolution SAR Images with Deep Transfer Learning
- Author
-
Corneliu Octavian Dumitru, Mihai Datcu, Bin Lei, Zhongling Huang, and Zongxu Pan
- Subjects
Synthetic aperture radar, Computer science, Scale (descriptive set theory), label noise, Land cover, Overfitting, transfer learning, Data modeling, land cover classification, TerraSAR-X (TSX), Radar polarimetry, Training, Electrical and Electronic Engineering, EO Data Science, Learning systems, High-resolution (HR) synthetic aperture radar (SAR) images, Data models, Pattern recognition, Remote sensing, Geotechnical Engineering and Engineering Geology, Data set, Task analysis, Artificial intelligence, Noise (video), Transfer of learning - Abstract
The classification of large-scale high-resolution synthetic aperture radar (SAR) land cover images acquired by satellites is a challenging task, facing several difficulties such as semantic annotation with expertise, changing data characteristics due to varying imaging parameters or regional target area differences, and complex scattering mechanisms being different from optical imaging. Given a large-scale SAR land cover data set collected from TerraSAR-X images with a hierarchical three-level annotation of 150 categories and comprising more than 100 000 patches, three main challenges in automatically interpreting SAR images of highly imbalanced classes, geographic diversity, and label noise are addressed. In this letter, a deep transfer learning method is proposed based on a similarly annotated optical land cover data set (NWPU-RESISC45). Besides, a top-2 smooth loss function with cost-sensitive parameters was introduced to tackle the label noise and imbalanced classes’ problems. The proposed method shows high efficiency in transferring information from a similarly annotated remote sensing data set, a robust performance on highly imbalanced classes, and is alleviating the overfitting problem caused by label noise. What is more, the learned deep model has a good generalization for other SAR-specific tasks, such as MSTAR target recognition with a state-of-the-art classification accuracy of 99.46%.
- Published
- 2020
- Full Text
- View/download PDF
32. Projection Shape Template-Based Ship Target Recognition in TerraSAR-X Images
- Author
-
Xiaolan Qiu, Zongxu Pan, Yueting Zhang, Bin Lei, and Jiwei Zhu
- Subjects
Synthetic aperture radar, Computer science, Geotechnical Engineering and Engineering Geology, Field (computer science), Image (mathematics), Robustness (computer science), Feature (machine learning), Computer vision, Template based, Artificial intelligence, Electrical and Electronic Engineering, Projection (set theory) - Abstract
Ship target recognition has always been a key issue in the field of ocean surveillance. Due to the serious shortage of samples for ship target recognition in synthetic aperture radar (SAR) images, template-based methods remain one of the most effective ways to solve the problem. In this letter, we put forward a novel ship recognition method based on the projection shape template (PST), aiming to increase both the accuracy and the robustness of recognition. The PST of each category is calculated by projecting the 3-D model obtained from two-view images of the target onto the 2-D slant-plane image according to the SAR imaging model. Then, we propose a contour extraction method to detect the profile of ships, which serves as the recognition feature. Finally, the identity of the query ship is obtained through contour matching. Experimental results indicate that the proposed method is effective even when the number of samples is extremely small, providing a promising way for the automatic interpretation of ship targets in SAR images.
- Published
- 2017
- Full Text
- View/download PDF
33. Drbox Family: A Group of Object Detection Techniques for Remote Sensing Images
- Author
-
Zongxu Pan, Yizhao Gao, Guowei Chen, and Lei Liu
- Subjects
Orientation (computer vision), Computer science, Deep learning, Object detection, Feature (computer vision), Minimum bounding box, Pyramid (image processing), Artificial intelligence, Remote sensing - Abstract
Objects in remote sensing images are difficult to detect due to their arbitrary rotation angles and wide variation in scale. As the bounding box plays an important role in object detection, we propose a new bounding box type named the rotated bounding box (rBox). Building on the rBox, we have proposed a series of detection techniques (DrBox, DrBoxLight, DrBoxSemi, DrBoxPro) to effectively handle objects with arbitrary orientation angles; this article is a brief overview of these techniques. The original DrBox detector applies VGG-net as its main network framework, with an image pyramid input to address the multi-scale problem. DrBoxLight is a lightweight version of DrBox, which applies MobileNet and knowledge distillation so that it can be deployed on embedded devices. DrBoxSemi is a semi-supervised version of DrBox, so annotation of all training samples is no longer necessary. DrBoxPro is the most substantial update to DrBox, with a careful design of abundant prior rBoxes on feature pyramid networks. In our experiments, we demonstrate how the rBox helps to improve object detection performance compared with traditional bounding boxes, and we evaluate the performance of the DrBox family on a series of object detection tasks.
- Published
- 2019
- Full Text
- View/download PDF
34. Siamese Network Based Metric Learning for SAR Target Classification
- Author
-
Bowei Wang, Quanzhi An, Bin Lei, Zongxu Pan, Yueting Zhang, and Xianjie Bao
- Subjects
Scheme (programming language), Measure (data warehouse), Training set, Computer science, Sample (statistics), Pattern recognition, Convolutional neural network, Similarity (network science), Metric (mathematics), Artificial intelligence - Abstract
A Siamese network based metric learning method is proposed for SAR target classification with few training samples. The network consists of two identical CNNs that share weights. Unlike classification networks that predict the category of a single sample, the Siamese network implements metric learning to measure the similarity between two samples. Since the input is a sample pair, the amount of training data increases dramatically, which helps to train a better network. When generating the pairs, a hard negative mining scheme is proposed to improve performance. To avoid computing the similarity between the test sample and every training sample at test time, which is time consuming, a two-stage scheme is employed, in which an additional classification network takes the output of a single branch of the Siamese network as input and predicts the category. Experiments on the MSTAR dataset validate the effectiveness of the proposed method.
- Published
- 2019
- Full Text
- View/download PDF
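Entry 34 trains two weight-sharing CNN branches to score whether a pair of SAR chips belongs to the same class, so the effective amount of training data grows with the number of pairs. A minimal PyTorch sketch with a contrastive loss is below; the backbone, margin and random pairing are simplified assumptions, and the paper's hard negative mining and second-stage classifier are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """Tiny shared embedding CNN (stand-in for the paper's backbone)."""
    def __init__(self, emb=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, 2, 2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, 2, 2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z1, z2, same, margin=1.0):
    # small distance for same-class pairs, at least `margin` otherwise
    d = (z1 - z2).norm(dim=1)
    return (same * d ** 2 + (1 - same) * F.relu(margin - d) ** 2).mean()

branch = Branch()                       # one network, applied to both inputs
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(branch(a), branch(b), same)
loss.backward()
print(float(loss))
```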
35. SAR Image Simulation by Generative Adversarial Networks
- Author
-
Xianjie Bao, Lei Liu, Zongxu Pan, and Bin Lei
- Subjects
Structure (mathematical logic), Series (mathematics), Computer science, Process (computing), Image processing, Pattern recognition, Image (mathematics), Clipping (photography), Distortion, Artificial intelligence, Joint (audio engineering) - Abstract
SAR image simulation plays an important role in SAR target interpretation and recognition, especially when the number of SAR images is limited. Due to the restrictions of the acquisition process, the number of SAR target images is always insufficient. Traditional SAR image simulation, which is based on electromagnetic calculations, is easily affected by parameter distortion due to the lack of joint optimization, which strongly degrades the quality of the simulated images. This paper presents a novel, end-to-end approach to simulate the desired images from a SAR image database. A series of generative adversarial networks, including DCGAN, weight-clipping WGAN, and WGAN with gradient penalty, are optimized and applied to generate typical SAR target images. Three kinds of network structures are used: the DCGAN structure, a newly proposed structure with four residual blocks, and ResNet. Experimental results show that the proposed method is not only efficient for SAR image simulation but can also generate high-quality SAR images. Furthermore, we analyze the results and the characteristics of the different networks, which paves the way for SAR image simulation based on artificial intelligence methods.
- Published
- 2019
- Full Text
- View/download PDF
36. What, Where and How to Transfer in SAR Target Recognition Based on Deep CNNs
- Author
-
Zongxu Pan, Bin Lei, and Zhongling Huang
- Subjects
Computer science, Remote sensing application, Pattern recognition, Convolutional neural network, Image (mathematics), Task (project management), General Earth and Planetary Sciences, Artificial intelligence, Electrical and Electronic Engineering, Transfer of learning - Abstract
Deep convolutional neural networks (DCNNs) have attracted much attention in remote sensing recently. Compared with the large-scale annotated dataset in natural images, the lack of labeled data in remote sensing becomes an obstacle to train a deep network very well, especially in SAR image interpretation. Transfer learning provides an effective way to solve this problem by borrowing the knowledge from the source task to the target task. In optical remote sensing application, a prevalent mechanism is to fine-tune on an existing model pre-trained with a large-scale natural image dataset, such as ImageNet. However, this scheme does not achieve satisfactory performance for SAR application because of the prominent discrepancy between SAR and optical images. In this paper, we attempt to discuss three issues that are seldom studied before in detail: (1) what network and source tasks are better to transfer to SAR targets, (2) in which layer are transferred features more generic to SAR targets and (3) how to transfer effectively to SAR targets recognition. Based on the analysis, a transitive transfer method via multi-source data with domain adaptation is proposed in this paper to decrease the discrepancy between the source data and SAR targets. Several experiments are conducted on OpenSARShip. The results indicate that the universal conclusions about transfer learning in natural images cannot be completely applied to SAR targets, and the analysis of what and where to transfer in SAR target recognition is helpful to decide how to transfer more effectively.
- Published
- 2019
- Full Text
- View/download PDF
37. CPS-Det: An Anchor-Free Based Rotation Detector for Ship Detection
- Author
-
Yi Yang, Yuxin Hu, Chibiao Ding, and Zongxu Pan
- Subjects
Computer science, ship detection, Science, Deep learning, Detector, Frame (networking), Minimum bounding box, Feature (computer vision), Weight distribution, General Earth and Planetary Sciences, anchor-free method, Point (geometry), Artificial intelligence, remote sensing images, rotation object detection, Rotation (mathematics), Algorithm - Abstract
Ship detection is a significant and challenging task in remote sensing. At present, thanks to their speed and accuracy, deep learning methods have been widely applied to ship detection. Ship targets usually have arbitrary orientations and large aspect ratios. To take full advantage of these characteristics and improve speed and accuracy on the basis of deep learning methods, this article proposes an anchor-free ship detection method, referred to as CPS-Det, that uses rotatable bounding boxes. The main improvements of CPS-Det and the contributions of this article are as follows. First, an anchor-free deep learning network is used to improve speed with fewer parameters. Second, an oblique rectangular frame annotation method is proposed, which avoids the loss anomalies that arise when periodic angles and bounded coordinates enter the regression calculation. For this annotation scheme, an angle loss is proposed that makes the loss function more accurate near the angle boundary and greatly improves the accuracy of angle prediction. Third, the centerness calculation of feature points is optimized so that the center weight distribution of each point suits rotation detection. Finally, a scheme combining centerness and positive sample screening is proposed, and its effectiveness in ship detection is demonstrated. Experiments on the public remote sensing dataset HRSC2016 show the effectiveness of our approach.
- Published
- 2021
- Full Text
- View/download PDF
38. Generative Adversarial Learning in YUV Color Space for Thin Cloud Removal on Satellite Imagery
- Author
-
Zongxu Pan, Jiayin Liu, Xue Wen, and Yuxin Hu
- Subjects
Discriminator, Computer science, Science, Cloud computing, transfer learning, Color space, Residual, Luminance, Computer vision, residual encoding-decoding network, Network architecture, Pixel, generative adversarial network, RGB color space, General Earth and Planetary Sciences, Artificial intelligence, thin cloud removal - Abstract
Clouds are one of the most serious disturbances when using satellite imagery for ground observations. The semi-translucent nature of thin clouds provides the possibility of 2D ground scene reconstruction based on a single satellite image. In this paper, we propose an effective framework for thin cloud removal involving two aspects: a network architecture and a training strategy. For the network architecture, a Wasserstein generative adversarial network (WGAN) in YUV color space called YUV-GAN is proposed. Unlike most existing approaches in RGB color space, our method performs end-to-end thin cloud removal by learning luminance and chroma components independently, which is efficient at reducing the number of unrecoverable bright and dark pixels. To preserve more detailed features, the generator adopts a residual encoding–decoding network without down-sampling and up-sampling layers, which effectively competes with a residual discriminator, encouraging the accuracy of scene identification. For the training strategy, a transfer-learning-based method was applied. Instead of using either simulated or scarce real data to train the deep network, adequate simulated pairs were used to train the YUV-GAN at first. Then, pre-trained convolutional layers were optimized by real pairs to encourage the applicability of the model to real cloudy images. Qualitative and quantitative results on RICE1 and Sentinel-2A datasets confirmed that our YUV-GAN achieved state-of-the-art performance compared with other approaches. Additionally, our method combining the YUV-GAN with a transfer-learning-based training strategy led to better performance in the case of scarce training data.
- Published
- 2021
- Full Text
- View/download PDF
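The YUV-GAN of entry 38 processes luminance and chroma independently, so the first step is a reversible RGB-to-YUV transform. The NumPy sketch below uses the standard BT.601 coefficients, which are an assumption since the paper does not state its exact constants, and marks where the two generator branches would operate.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (assumed; the paper only states YUV color space)
RGB2YUV = np.array([[0.299, 0.587, 0.114],
                    [-0.14713, -0.28886, 0.436],
                    [0.615, -0.51499, -0.10001]])

def rgb_to_yuv(img):
    """img: (H, W, 3) float RGB in [0, 1] -> Y (H, W) and UV (H, W, 2)."""
    yuv = img @ RGB2YUV.T
    return yuv[..., 0], yuv[..., 1:]

def yuv_to_rgb(y, uv):
    yuv = np.concatenate([y[..., None], uv], axis=-1)
    return np.clip(yuv @ np.linalg.inv(RGB2YUV).T, 0.0, 1.0)

cloudy = np.random.rand(128, 128, 3)
y, uv = rgb_to_yuv(cloudy)
# the luminance and chroma generator branches would be applied to y and uv here
restored = yuv_to_rgb(y, uv)
print(np.abs(restored - cloudy).max() < 1e-6)   # lossless round trip
```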
39. Relaxation Labelling Based Land Masking in SAR Images
- Author
-
Zongxu Pan, Bin Lei, and Lei Liu
- Subjects
Synthetic aperture radar, Masking (art), Pixel, Computer science, Iterative method, Pattern recognition, Image segmentation, Relaxation labelling, Probability distribution, Segmentation, Artificial intelligence - Abstract
In this paper, a relaxation labelling based land masking method is proposed for separating sea and land in SAR images. Land masking, also known as sea-land segmentation, is a part of ship detection system for SAR images to avoid detecting false alarms in the land. Relaxation labeling is an iterative method, which can separate foreground pixels from background ones using the neighborhood information of pixels in the image. When relaxation labelling converges, the segmented result is often unsatisfactory, since it tends to label more foreground pixels. To overcome this issue, a loss composed of the background probability distribution diversity and the gradient magnitude of the result is introduced to indicate when to stop the iteration. Experimental results on several Gaofen-3 SAR images demonstrate the effectiveness of the proposed method.
- Published
- 2018
- Full Text
- View/download PDF
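Entry 39 iterates pixel label probabilities using neighbourhood support and stops the iteration with a loss built from the background probability distribution and the gradient magnitude of the result. The NumPy sketch below shows a basic two-label relaxation-labelling update of that kind; the initialization, compatibility rule and fixed iteration count are simplified assumptions and the paper's stopping criterion is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def relaxation_land_mask(img, n_iter=20):
    """Two-label (sea/land) relaxation labelling on a normalized SAR
    amplitude image. Initialization and update rule are illustrative."""
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    p_land = img.copy()                     # bright pixels -> land prior
    for _ in range(n_iter):
        # neighbourhood support: mean label probability in a 3x3 window
        q_land = uniform_filter(p_land, size=3)
        q_sea = uniform_filter(1.0 - p_land, size=3)
        # multiplicative relaxation update followed by renormalization
        new_land = p_land * (1.0 + q_land)
        new_sea = (1.0 - p_land) * (1.0 + q_sea)
        p_land = new_land / (new_land + new_sea + 1e-12)
    return p_land > 0.5                     # True = land

sar = np.random.rand(100, 100) * 0.3
sar[:, 60:] += 0.6                          # synthetic bright land region
mask = relaxation_land_mask(sar)
print(mask[:, :60].mean(), mask[:, 60:].mean())
```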
40. Identity Regularized Sparse Representation for Automatic Target Recognition in Sar Images
- Author
-
Lei Liu, Zongxu Pan, and Bin Lei
- Subjects
Synthetic aperture radar, Training set, Linear programming, Computer science, Pattern recognition, Sparse approximation, Support vector machine, Automatic target recognition, Artificial intelligence, Neural coding - Abstract
An identity regularized sparse representation (IRSR) based SAR target recognition method is proposed in this paper. The method aims to find a transformation that maps the data to a transformed space in which targets from the same class are close to each other, regardless of their distance in the original space. This identity constraint can be formulated as an ℓ1-norm minimization problem. By decoupling the problem into a sparse coding problem and a dictionary learning problem, the solution can be obtained iteratively, and it is simply a weighted average of the sparse codes of all training data. Experimental results demonstrate that the proposed method is superior to several related methods.
- Published
- 2018
- Full Text
- View/download PDF
41. Salient Seed Extraction Based Target Detection in SAR Images
- Author
-
Zongxu Pan and Bin Lei
- Subjects
Synthetic aperture radar, Pixel, Computer science, Feature extraction, Pattern recognition, Object detection, Constant false alarm rate, Salient, Clutter, Artificial intelligence, Image resolution - Abstract
A salient seed extraction based target detection method is proposed in this paper, aiming to distinguish target points from background points in SAR images. Unlike recent superpixel based methods, which first generate superpixels and then decide for each superpixel whether it belongs to a target, the proposed method employs a salient-point-to-region scheme. First, salient seeds are extracted by a mean-shift and region-feature based approach. Then, pixels are assigned to the most similar seed, and those assigned to the salient seeds are extracted to form the foreground region. Finally, a constant false alarm rate (CFAR) operation is employed to detect the target points within the foreground region. The effectiveness of the proposed method is validated by comparison with five state-of-the-art methods on TerraSAR-X images.
- Published
- 2018
- Full Text
- View/download PDF
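The final stage of entry 41 applies a constant false alarm rate test inside the extracted foreground region. A minimal cell-averaging CFAR in NumPy is sketched below; the guard and background window sizes, the Gaussian-style threshold and the global application (rather than restriction to the foreground region) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=4, background=12, k=3.0):
    """Cell-averaging CFAR: a pixel is a detection if it exceeds the local
    clutter mean by k local standard deviations, with the clutter estimated
    from a ring around a guard window. Window sizes and k are assumptions."""
    big = 2 * (guard + background) + 1
    small = 2 * guard + 1
    # ring statistics: big-window sums minus guard-window sums
    sum_big = uniform_filter(intensity, big) * big ** 2
    sum_small = uniform_filter(intensity, small) * small ** 2
    n = big ** 2 - small ** 2
    mean = (sum_big - sum_small) / n
    sum2_big = uniform_filter(intensity ** 2, big) * big ** 2
    sum2_small = uniform_filter(intensity ** 2, small) * small ** 2
    var = np.maximum((sum2_big - sum2_small) / n - mean ** 2, 0.0)
    return intensity > mean + k * np.sqrt(var)

img = np.random.rayleigh(1.0, (200, 200))
img[100:104, 100:104] += 12.0           # synthetic bright target
print(ca_cfar(img)[100:104, 100:104].all())
```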
42. SAR Target Classification with CycleGAN Transferred Simulated Samples
- Author
-
Xiaolan Qiu, Lei Liu, Zongxu Pan, and Lingxiao Peng
- Subjects
Synthetic aperture radar, Computer science, Deep learning, Pattern recognition, Automatic target recognition, Artificial intelligence, Classifier (UML) - Abstract
Target classification is an important part of automatic target recognition (ATR) systems, and deep learning methods achieve state-of-the-art performance in SAR target classification. Simulation is a useful data augmentation technique when the number of real training samples is insufficient. This article discusses how to release the full potential of simulated samples for improving the performance of a SAR target classifier. The proposed method is based on the cycle-consistent adversarial network (CycleGAN), which transfers simulated samples so that they become more similar to real samples in the image domain. Experiments show that adding simulated samples directly to the training dataset does not improve performance, whereas adding the transferred simulated samples for training yields an increase in accuracy of about 10% in the designed SAR airplane classification experiment, compared with training without data augmentation.
- Published
- 2018
- Full Text
- View/download PDF
43. Inshore Ship Detection in Sar Images Based on Deep Neural Networks
- Author
-
Quanzhi An, Zongxu Pan, Guowei Chen, Lei Liu, and Bin Lei
- Subjects
Synthetic aperture radar, Computer science, Image segmentation, Object detection, Image (mathematics), Minimum bounding box, Computer vision, Satellite, Segmentation, Artificial intelligence, Image resolution - Abstract
Inshore ship detection in SAR images faces difficulties in correctly distinguishing near-shore ships from onshore objects. This article proposes a multi-scale fully convolutional network (MS-FCN) based sea-land segmentation method and applies a rotatable bounding box based object detection method (DR-Box) to solve the inshore ship detection problem. The sea and land regions are first separated by the MS-FCN, and DR-Box is then applied to the sea region. The proposed method combines global and local information of the SAR image to achieve high accuracy. The networks are trained with Chinese Gaofen-3 satellite images. Experiments on the test image show that most inshore ships are successfully located by the proposed method.
- Published
- 2018
- Full Text
- View/download PDF
44. Super-Resolution of Remote Sensing Images Based on Transferred Generative Adversarial Network
- Author
-
Zongxu Pan, Wen Ma, Jiayi Guo, and Bin Lei
- Subjects
Remote sensing application, Computer science, Deep learning, Normalization (image processing), Novelty, Artificial intelligence, Image resolution, Generative adversarial network, Remote sensing - Abstract
Single image super-resolution (SR) has been widely studied in recent years as a crucial technique for remote sensing applications. This paper proposes a SR method for remote sensing images based on a transferred generative adversarial network (TGAN). Different from the previous GAN-based SR approaches, the novelty of our method mainly reflects from two aspects. First, the batch normalization layers are removed to reduce the memory consumption and the computational burden, as well as raising the accuracy. Second, our model is trained in a transfer-learning fashion to cope with the insufficiency of training data, which is the crux of applying deep learning methods to remote sensing applications. The model is firstly trained on an external dataset DIV2K and further fine-tuned with the remote sensing dataset. Our experimental results demonstrate that the proposed method is superior to SRCNN and SRGAN in terms of both the objective evaluation and the subjective perspective.
- Published
- 2018
- Full Text
- View/download PDF
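The sketch below illustrates, under stated assumptions, the two ideas the abstract highlights: a residual block with batch normalization removed, and a pretrain-then-fine-tune schedule. The generator layout, loss, and the loader names are placeholders, not the TGAN implementation.

```python
# Hedged sketch: BN-free residual block and a two-stage (pretrain, fine-tune) schedule.
import torch
import torch.nn as nn

class ResBlockNoBN(nn.Module):
    """Residual block without BatchNorm, as used in BN-free SR generators."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return x + self.conv2(self.act(self.conv1(x)))

generator = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                          *[ResBlockNoBN() for _ in range(4)],
                          nn.Conv2d(64, 3, 3, padding=1))

def train(model, loader, epochs, lr):
    """Generic supervised SR training loop; loader yields (lr_img, hr_img) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for lr_img, hr_img in loader:
            opt.zero_grad()
            loss_fn(model(lr_img), hr_img).backward()
            opt.step()

# Transfer-learning schedule (the loaders are assumed to exist):
# train(generator, div2k_loader, epochs=100, lr=1e-4)          # stage 1: external data
# train(generator, remote_sensing_loader, epochs=20, lr=1e-5)  # stage 2: fine-tune
```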
45. Semi-Supervised Object Detection in Remote Sensing Images Using Generative Adversarial Networks
- Author
-
Guowei Chen, Lei Liu, Wenlong Hu, and Zongxu Pan
- Subjects
Computer science, Supervised learning, Machine learning, Object detection, Adversarial system, Generative adversarial network, Artificial intelligence
Object detection is a challenging task in computer vision. Many detection networks achieve good results when trained on large datasets, but annotating a sufficient amount of training data is time-consuming. To address this problem, a semi-supervised learning based method is proposed in this paper. Semi-supervised learning trains detection networks with few annotated data and a massive amount of unannotated data. In the proposed method, a generative adversarial network (GAN) is applied to extract the data distribution from the unannotated data, and the extracted information is then used to improve the performance of the detection network. Experiments show that the method greatly improves detection performance compared with supervised learning that uses only the few annotated samples, demonstrating that an acceptable detection result can be achieved when only a few target objects are annotated in the training dataset. (A hedged sketch of one GAN-based feature-sharing scheme follows the entry below.)
- Published
- 2018
- Full Text
- View/download PDF
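The following is a sketch of one common way a GAN can inject information from unlabeled imagery into a detector: learn features adversarially with a discriminator, then reuse its convolutional trunk as the detection backbone. This illustrates the general idea only; the module names and layout are assumptions, not the authors' exact network.

```python
# Hedged sketch: reuse a GAN discriminator's trunk as a detection backbone.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.real_or_fake = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                          nn.Flatten(), nn.Linear(64, 1))
    def forward(self, x):
        return self.real_or_fake(self.features(x))

disc = Discriminator()
# ... adversarial training on the large unannotated pool would happen here ...

# Reuse the learned trunk; the small annotated set then trains only the detection head.
detector_backbone = disc.features
detection_head = nn.Conv2d(64, 5, 1)            # e.g. objectness + 4 box offsets per cell
batch = torch.rand(8, 3, 128, 128)              # placeholder image batch
feature_map = detector_backbone(batch)
print(feature_map.shape, detection_head(feature_map).shape)
```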
46. An iterative method for shadow enhancement in high resolution SAR images
- Author
-
Jiayi Guo, Lei Liu, Zongxu Pan, Qi Liu, Chibiao Ding, Fangfang Li, Bin Lei, and Yueting Zhang
- Subjects
Synthetic aperture radar, Pixel, Iterative method, Computer science, High resolution, Automatic target recognition, Shadow, Computer vision, Artificial intelligence, Radar
The edges of shadows are blurred in synthetic aperture radar (SAR) images because the radar moves while the data are collected, and this effect becomes pronounced in high-resolution (HR) SAR images. In this work, an adaptive approach to shadow enhancement is proposed. The quality of the shadow enhancement depends on the precision of the height estimate, and this relationship is exploited here: height-variant phase compensation (HVPC) and the golden-section search are employed, and the adaptive method iterates by counting the pixels that fall into the shadow region. The approach provides an automatic way to enhance shadows and is suitable for objects mainly composed of flat-like structures. Experiments on helicopter-borne Mini-SAR data confirm the validity of the approach. The work provides a means of shadow enhancement for HR SAR images and would be useful for SAR automatic target recognition. (A toy sketch of the golden-section search step follows the entry below.)
- Published
- 2017
- Full Text
- View/download PDF
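Below is a toy sketch of the search step only: a golden-section search over a candidate height, scoring each candidate by the number of pixels assigned to the shadow region. The `compensate` function and the fixed threshold are placeholders standing in for the paper's HVPC and shadow criterion.

```python
# Hedged sketch: golden-section search over height, scored by shadow pixel count.
import numpy as np

def compensate(image: np.ndarray, height: float) -> np.ndarray:
    """Placeholder for height-variant phase compensation followed by refocusing."""
    return image * np.exp(-height * 1e-3)        # toy stand-in only

def shadow_pixel_count(image: np.ndarray, threshold: float = 0.1) -> int:
    return int((image < threshold).sum())

def golden_section_search(score, lo, hi, iters=30):
    """Maximize `score` over [lo, hi] with the golden-section method."""
    phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if score(c) > score(d):
            b = d
        else:
            a = c
    return (a + b) / 2

sar = np.random.rayleigh(1.0, (256, 256))        # placeholder SAR magnitude image
best_height = golden_section_search(
    lambda h: shadow_pixel_count(compensate(sar, h)), lo=0.0, hi=100.0)
print(f"estimated height: {best_height:.2f} m")
```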
47. Super-Resolution Based on Compressive Sensing and Structural Self-Similarity for Remote Sensing Images
- Author
-
Weidong Sun, Zongxu Pan, Shaoxing Hu, Huijuan Huang, Aiwu Zhang, Hongbing Ma, and Jing Yu
- Subjects
Image fusion, Self-similarity, Computer science, Pattern recognition, Iterative reconstruction, Sparse approximation, Matching pursuit, Upsampling, Compressed sensing, Kernel (image processing), General Earth and Planetary Sciences, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Remote sensing, Interpolation
A super-resolution (SR) method based on compressive sensing (CS), structural self-similarity (SSSIM), and dictionary learning is proposed for reconstructing remote sensing images. The method aims to identify a dictionary that represents high-resolution (HR) image patches sparsely. Extra information from the similar structures that often exist in remote sensing images can be introduced into the dictionary, enabling an HR image to be reconstructed from the dictionary within the CS framework. We use the K-singular value decomposition (K-SVD) method to learn the dictionary and the orthogonal matching pursuit (OMP) method to derive the sparse representation coefficients. To evaluate the effectiveness of the proposed method, we also define a new SSSIM index, which reflects the extent of structural self-similarity in an image. The most significant difference between the proposed method and traditional sample-based SR methods is that it uses only a low-resolution image and its own interpolated version instead of other HR images from a database. In our experiments we simulate a degradation of a uniform 2 × 2 blur kernel followed by downsampling by a factor of 2. Comparative results with several image-quality-assessment indexes show that the proposed method performs better in terms of both SR effectiveness and time efficiency, and the SSSIM index is strongly positively correlated with SR quality. (A minimal dictionary-learning sketch follows the entry below.)
- Published
- 2013
- Full Text
- View/download PDF
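The sketch below illustrates the sparse-coding step only: learn a patch dictionary from the low-resolution image's own interpolated version, code all patches with OMP, and reconstruct. scikit-learn's mini-batch dictionary learner stands in for K-SVD, and the resize step only mimics the degradation model; none of this is the authors' implementation.

```python
# Hedged sketch: self-example dictionary learning + OMP coding for SR.
import numpy as np
from scipy import ndimage
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
lr_image = rng.random((64, 64))                  # placeholder low-resolution image
interp = ndimage.zoom(lr_image, 2, order=3)      # upsampled version of the image itself

all_patches = extract_patches_2d(interp, (8, 8))
flat = all_patches.reshape(len(all_patches), -1)
means = flat.mean(axis=1, keepdims=True)
flat_centered = flat - means                     # code the zero-mean patch content

# Learn the dictionary on a random subset (stand-in for K-SVD), code every patch with OMP.
subset = flat_centered[rng.choice(len(flat_centered), 2000, replace=False)]
dico = MiniBatchDictionaryLearning(n_components=128, transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5, random_state=0).fit(subset)
codes = dico.transform(flat_centered)            # sparse coefficients per patch
recon = (codes @ dico.components_ + means).reshape(all_patches.shape)
sr_estimate = reconstruct_from_patches_2d(recon, interp.shape)
print(sr_estimate.shape)
```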
48. Super resolution of remote sensing image based on structure similarity in CS frame
- Author
-
Weidong Sun, Zongxu Pan, and Huijuan Huang
- Subjects
Image fusion, Similarity (geometry), Sample (graphics), Associative array, Geography, Compressed sensing, Computer vision, Artificial intelligence, Neural coding, Remote sensing, Interpolation
In this paper, a novel super-resolution (SR) method for remote sensing images based on compressive sensing (CS), structure similarity, and dictionary learning is proposed. The basic idea is to find a dictionary that can represent the high-resolution (HR) image patches in a sparse way. The extra information coming from the similar structures that often exist in remote sensing images can be learned into the dictionary, so the reconstructed HR image can be obtained through the dictionary in the CS frame, owing to the redundancy in the image, which has a sparse form in the dictionary. We use the K-SVD algorithm to find the dictionary and the OMP method to recover the locations and values of the sparse coding coefficients. The difference between our method and previous sample-based SR methods is that we use only the low-resolution image and the interpolated image derived from it rather than other HR images. Experiments on both optical and laser remote sensing images show that our method outperforms the original CS-based method in terms of both effect and running time. Keywords: image super resolution, structure similarity, compressive sensing, dictionary learning, remote sensing images
- Published
- 2011
- Full Text
- View/download PDF