322 results for "cloud removal"
Search Results
2. AIR-POLSAR-CR1.0: A Benchmark Dataset for Cloud Removal in High-Resolution Optical Remote Sensing Images with Fully Polarized SAR.
- Author
- Wang, Yuxi; Zhang, Wenjuan; Pan, Jie; Jiang, Wen; Yuan, Fangyan; Zhang, Bo; Yue, Xijuan; Zhang, Bing
- Subjects
- OPTICAL remote sensing; MACHINE learning; DEEP learning; REMOTE sensing; IMAGE reconstruction; SYNTHETIC aperture radar; SYNTHETIC apertures
- Abstract
Due to the all-time and all-weather characteristics of synthetic aperture radar (SAR) data, they have become an important input for optical image restoration, and various cloud removal datasets based on SAR-optical have been proposed. Currently, the construction of multi-source cloud removal datasets typically employs single-polarization or dual-polarization backscatter SAR feature images, lacking a comprehensive description of target scattering information and polarization characteristics. This paper constructs a high-resolution remote sensing dataset, AIR-POLSAR-CR1.0, based on optical images, backscatter feature images, and polarization feature images using the fully polarimetric synthetic aperture radar (PolSAR) data. The dataset has been manually annotated to provide a foundation for subsequent analyses and processing. Finally, this study performs a performance analysis of typical cloud removal deep learning algorithms based on different categories and cloud coverage on the proposed standard dataset, serving as baseline results for this benchmark. The results of the ablation experiment also demonstrate the effectiveness of the PolSAR data. In summary, AIR-POLSAR-CR1.0 fills the gap in polarization feature images and demonstrates good adaptability for the development of deep learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
3. CloudTran++: Improved Cloud Removal from Multi-Temporal Satellite Images Using Axial Transformer Networks.
- Author
- Christopoulos, Dionysis; Ntouskos, Valsamis; Karantzalos, Konstantinos
- Subjects
- REMOTE-sensing images; IMAGE reconstruction; AUTOREGRESSIVE models; OPTICAL images; IMAGE quality analysis
- Abstract
We present a method for cloud removal from satellite images using axial transformer networks. The method considers a set of multi-temporal images in a given region of interest, together with the corresponding cloud masks, and produces a cloud-free image for a specific day of the year. We propose the combination of an encoder-decoder model employing axial attention layers for the estimation of the low-resolution cloud-free image, together with a fully parallel upsampler that reconstructs the image at full resolution. The method is compared with various baselines and state-of-the-art methods on Sentinel-2 datasets of different coverage, showing significant improvements across multiple standard metrics used for image quality assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
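The axial attention that CloudTran++ (entry 3 above) builds on factorizes self-attention into a pass along image rows followed by a pass along columns, reducing the cost from quadratic in the number of pixels to quadratic in each axis length. A minimal single-head numpy sketch, with no learned projections (an illustrative simplification, not the authors' network):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x):
    """x: (rows, cols, dim). Self-attention along rows, then along columns,
    so the cost scales with h*w*(h + w) rather than (h*w)^2."""
    for axis in (0, 1):
        xt = np.moveaxis(x, axis, 0)                      # (length, other, dim)
        scores = np.einsum('iod,jod->oij', xt, xt) / np.sqrt(x.shape[-1])
        attn = softmax(scores, axis=-1)                   # weights over positions j
        xt = np.einsum('oij,jod->iod', attn, xt)
        x = np.moveaxis(xt, 0, axis)
    return x
```

After the two passes every pixel has attended, indirectly, to every other pixel, which is what makes the factorization viable for full-resolution satellite tiles.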
4. A Multi-Level SAR-Guided Contextual Attention Network for Satellite Images Cloud Removal.
- Author
- Liu, Ganchao; Qiu, Jiawei; Yuan, Yuan
- Subjects
- SYNTHETIC aperture radar; OPTICAL images; IMAGE reconstruction; REMOTE-sensing images; REMOTE sensing
- Abstract
In the field of remote sensing, cloud cover severely reduces the quality of satellite observations of the earth. Due to the complete absence of information in cloud-covered regions, cloud removal with a single optical image is an ill-posed problem. Since the synthetic aperture radar (SAR) can effectively penetrate clouds, fusing SAR and optical remote sensing images will effectively alleviate this problem. However, existing SAR-based optical cloud removal methods fail to effectively leverage the global information provided by the SAR image, resulting in limited performance gains. In this paper, we introduce a novel cloud removal method named the Multi-Level SAR-Guided Contextual Attention Network (MSGCA-Net). MSGCA-Net is designed with a multi-level architecture that integrates a SAR-Guided Contextual Attention (SGCA) module to fuse the dependable global contextual information from SAR images with the local features of optical images effectively. In the module of SGCA, the SAR image provides reliable global contextual information and genuine structure of cloud-covered regions, while the optical image provides the local feature information. The proposed model can efficiently extract and fuse global and local contextual information in SAR and optical images. We trained and evaluated the performance of the model on both simulated and real-world datasets. Both qualitative and quantitative experimental evaluation demonstrated that the proposed method can yield high quality cloud-free images and outperform state-of-the-art cloud removal methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Two-Level Supervised Network for Small Ship Target Detection in Shallow Thin Cloud-Covered Optical Satellite Images.
- Author
- Liu, Fangjian; Zhang, Fengyi; Wang, Mi; Xu, Qizhi
- Subjects
- CLOUDINESS; REMOTE sensing; OPTICAL images; DATA quality; DETECTORS
- Abstract
Ship detection under cloudy and foggy conditions is a significant challenge in remote sensing satellite applications, as cloud cover often reduces contrast between targets and backgrounds. Additionally, ships are small and affected by noise, making them difficult to detect. This paper proposes a Cloud Removal and Target Detection (CRTD) network to detect small ships in images with thin cloud cover. The process begins with a Thin Cloud Removal (TCR) module for image preprocessing. The preprocessed data are then fed into a Small Target Detection (STD) module. To improve target–background contrast, we introduce a Target Enhancement module. The TCR and STD modules are integrated through a dual-stage supervision network, which hierarchically processes the detection task to enhance data quality, minimizing the impact of thin clouds. Experiments on the GaoFen-4 satellite dataset show that the proposed method outperforms existing detectors, achieving an average precision (AP) of 88.9%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. CLOUD REMOVAL BASED ON DARK CHANNEL PRIOR: A SYSTEMATIC LITERATURE REVIEW.
- Author
- Hamidiyati, Nazifa; Rahadianti, Laksmita
- Subjects
- REMOTE-sensing images; AGRICULTURAL engineering; URBAN heat islands; FOREST fire detection; ENVIRONMENTAL protection
- Abstract
Remote sensing satellite technology has revolutionized the way we gather information about our planet. Through the use of advanced imaging capabilities, satellite images have become invaluable in various aspects of daily life. These images are extensively utilized in environmental protection, agricultural engineering, and other fields. Remote sensing satellite maps are used for tasks such as geological mapping, monitoring urban heat islands, environmental surveillance, and detecting forest fires from remote sensing images. However, clouds present a significant hindrance when utilizing satellite imagery for ground observations, as they obstruct the view and can limit the accuracy of the analysis. While there are numerous advanced state-of-the-art approaches available, it is important to note that they often require a substantial amount of data for training. On the other hand, if a more general approach is desired without the need for extensive training data, pixel-based methods provide a viable option. One of the widely used pixel-based methods for cloud removal in satellite images is Dark Channel Prior (DCP). DCP is often combined with other methods to improve the image quality. This systematic literature review will demonstrate the development of the DCP method in cloud removal from satellite images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
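For reference, the classic Dark Channel Prior recovery step surveyed in entry 6 above can be sketched in a few lines of numpy/scipy. The parameter values (patch size 15, omega = 0.95, floor t0 = 0.1) are common defaults from the dehazing literature, assumed here rather than taken from any reviewed paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over channels, then minimum over a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def remove_thin_cloud(img, patch=15, omega=0.95, t0=0.1):
    """img: float RGB array in [0, 1]; returns a dehazed/decloud estimate."""
    dark = dark_channel(img, patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate; omega < 1 keeps a little haze for realism.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

The combinations with other methods that the review traces typically replace one of these steps, for example refining t with guided filtering or estimating A per region.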
7. Cloud Removal in the Tibetan Plateau Region Based on Self-Attention and Local-Attention Models.
- Author
- Zheng, Guoqiang; Zhao, Tianle; Liu, Yaohui
- Subjects
- OPTICAL remote sensing; HYDROLOGIC cycle; CLOUDINESS; DEEP learning; SELECTIVITY (Psychology); SNOW cover
- Abstract
Optical remote sensing images have a wide range of applications but are often affected by cloud cover, which interferes with subsequent analysis. Therefore, cloud removal has become indispensable in remote sensing data processing. The Tibetan Plateau, as a sensitive region to climate change, plays a crucial role in the East Asian water cycle and regional climate due to its snow cover. However, the rich ice and snow resources, rapid snow condition changes, and active atmospheric convection in the plateau as well as its surrounding mountainous areas, make optical remote sensing prone to cloud interference. This is particularly significant when monitoring snow cover changes, where cloud removal becomes essential considering the complex terrain and unique snow characteristics of the Tibetan Plateau. This paper proposes a novel Multi-Scale Attention-based Cloud Removal Model (MATT). The model integrates global and local information by incorporating multi-scale attention mechanisms and local interaction modules, enhancing the contextual semantic relationships and improving the robustness of feature representation. To improve the segmentation accuracy of cloud- and snow-covered regions, a cloud mask is introduced in the local-attention module, combined with the local interaction module to modulate and reconstruct fine-grained details. This enables the simultaneous representation of both fine-grained and coarse-grained features at the same level. With the help of multi-scale fusion modules and selective attention modules, MATT demonstrates excellent performance on both the Sen2_MTC_New and XZ_Sen2_Dataset datasets. Particularly on the XZ_Sen2_Dataset, it achieves outstanding results: PSNR = 29.095, SSIM = 0.897, FID = 125.328, and LPIPS = 0.356. The model shows strong cloud removal capabilities in cloud- and snow-covered areas in mountainous regions while effectively preserving snow information, and providing significant support for snow cover change studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Enhanced cloud removal via temporal U-Net and cloud cover evolution simulation
- Author
- Qingwei Tong; Leiguang Wang; Qinling Dai; Chen Zheng; Fangrong Zhou
- Subjects
- Remote sensing image; Cloud removal; Cloud cover evolution (CCE) module; Temporal U-Net; Residual learning; Medicine; Science
- Abstract
Remote sensing images are indispensable for continuous environmental monitoring and Earth observations. However, cloud occlusion can severely degrade image quality, posing a significant challenge for the accurate extraction of ground information. Existing cloud removal techniques often suffer from incomplete cloud removal, artifacts, and color distortions. Owing to the scarcity of sequential data, the effective utilization of temporal information to enhance cloud removal performance poses a challenge. Therefore, we propose a cloud removal method based on cloud evolution simulation. This method is applicable to all paired cloud datasets, enabling the construction of cloud evolution time-series in the absence of actual temporal information. We embed temporal information from the sequence into the Temporal U-Net to achieve more accurate cloud predictions. We conducted extensive experiments on RICE and T-CLOUD datasets. The results demonstrate that our approach significantly improves the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) compared with existing methods.
- Published
- 2025
- Full Text
- View/download PDF
9. SSGT: Spatiospectral Guided Transformer for Hyperspectral Image Fusion Joint With Cloud Removal
- Author
- Chenxi Du; Jiajun Xiao; Jie Li; Yi Liu; Jiang He; Qiangqiang Yuan
- Subjects
- Cloud removal; dual-branch encoder–decoder; hyperspectral image (HSI); transformer; Ocean engineering; TC1501-1800; Geophysics. Cosmic physics; QC801-809
- Abstract
The hyperspectral image (HSI) has poor spatial resolution and is vulnerable to cloud cover, which limits its application in practical tasks. However, current hyperspectral and multispectral image fusion algorithms do not consider the cloud contamination problem. In view of this situation, we propose a novel dual-branch fusion framework to improve the spatial resolution of HSI while restoring the details of areas covered by clouds. The network extracts multiscale and multilevel spatiospectral features through the dual-branch encoder–decoder structure to focus on local information, and acquires nonlocal similarity relationships via a transformer to aggregate global feature information for better reconstruction. To produce a more natural image, a spatiospectral gradient loss combined with an L1 loss on the reconstructed image is introduced to guide network training. The cloud-free and cloud-contaminated experiment results show that the proposed method achieves the best results in both visual and accuracy evaluation, compared with recent state-of-the-art methods.
- Published
- 2025
- Full Text
- View/download PDF
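The loss described for SSGT (entry 9 above) combines an L1 term with a spatiospectral gradient term. A hedged numpy sketch, under the assumption that "spatiospectral gradient" means first differences along the two spatial axes and the band axis, and with a hypothetical weight `lam` (the paper's actual weighting is not given here):

```python
import numpy as np

def l1_loss(pred, target):
    return np.abs(pred - target).mean()

def spatiospectral_gradient_loss(pred, target):
    """Penalise mismatched first differences along rows, columns, and bands."""
    loss = 0.0
    for axis in (0, 1, 2):  # spatial y, spatial x, spectral band
        gp = np.diff(pred, axis=axis)
        gt = np.diff(target, axis=axis)
        loss += np.abs(gp - gt).mean()
    return loss

def total_loss(pred, target, lam=0.1):
    """L1 fidelity plus gradient consistency; lam is an assumed trade-off."""
    return l1_loss(pred, target) + lam * spatiospectral_gradient_loss(pred, target)
```

The gradient term pushes the network to match edges and band-to-band transitions rather than only per-pixel values, which is what produces the "more natural" reconstructions the abstract mentions.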
10. Multi-Temporal Pixel-Based Compositing for Cloud Removal Based on Cloud Masks Developed Using Classification Techniques.
- Author
- Adugna, Tesfaye; Xu, Wenbo; Fan, Jinlong; Luo, Xin; Jia, Haitao
- Subjects
- CLOUDINESS; LAND cover; MACHINE learning; PRODUCT quality; PIXELS
- Abstract
Cloud is a serious problem that affects the quality of remote-sensing (RS) images. Existing cloud removal techniques suffer from notable limitations, such as being specific to certain data types, cloud conditions, and spatial extents, as well as requiring auxiliary data, which hampers their generalizability and flexibility. To address the issue, we propose a maximum-value compositing approach by generating cloud masks. We acquired 432 daily MOD09GA L2 MODIS imageries covering a vast region with persistent cloud cover and various climates and land-cover types. Labeled datasets for cloud, land, and no-data were collected from selected daily imageries. Subsequently, we trained and evaluated RF, SVM, and U-Net models to choose the best models. Accordingly, SVM and U-Net were chosen and employed to classify all the daily imageries. Then, the classified imageries were converted to two sets of mask layers to mask clouds and no-data pixels in the corresponding daily images by setting the masked pixels' values to −0.999999. After masking, we employed the maximum-value technique to generate two sets of 16-day composite products, MaxComp-1 and MaxComp-2, corresponding to SVM and U-Net-derived cloud masks, respectively. Finally, we assessed the quality of our composite products by comparing them with the reference MOD13A1 16-day composite product. Based on the land-cover classification accuracy, our products yielded a significantly higher accuracy (5–28%) than the reference MODIS product across three classifiers (RF, SVM, and U-Net), indicating the quality of our products and the effectiveness of our techniques. In particular, MaxComp-1 yielded the best results, which further implies the superiority of SVM for cloud masking. In addition, our products appear to be more radiometrically and spectrally consistent and less noisy than MOD13A1, implying that our approach is more efficient in removing shadows and noises/artifacts. 
Our method yields high-quality products that are vital for investigating large regions with persistent clouds and studies requiring time-series data. Moreover, the proposed techniques can be adopted for higher-resolution RS imageries, regardless of the spatial extent, data volume, and type of clouds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
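The masking and maximum-value compositing steps of entry 10 above are simple to sketch. The sentinel value matches the −0.999999 the abstract reports; stacking the masked daily images into one array is an assumption for illustration:

```python
import numpy as np

FILL_VALUE = -0.999999  # sentinel reported in the abstract

def apply_mask(daily_img, cloud_or_nodata_mask):
    """Set cloud/no-data pixels to the sentinel so max-compositing skips them."""
    out = daily_img.astype(float).copy()
    out[cloud_or_nodata_mask] = FILL_VALUE
    return out

def max_composite(masked_stack):
    """masked_stack: (days, rows, cols); per-pixel maximum over the window."""
    # Pixels cloudy on every day remain at the sentinel value.
    return masked_stack.max(axis=0)
```

Because the sentinel lies far below any valid reflectance, the per-pixel maximum automatically prefers any clear-sky observation in the 16-day window, and all-cloud pixels keep the sentinel so they can be flagged downstream.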
11. Remote sensing image cloud removal based on multi-scale spatial information perception.
- Author
- Dou, Aozhe; Hao, Yang; Liu, Weifeng; Li, Liangliang; Wang, Zhenzhong; Liu, Baodi
- Abstract
Remote sensing imagery is indispensable in diverse domains, including geographic information systems, climate monitoring, agricultural planning, and disaster management. Nonetheless, cloud cover can drastically degrade the utility and quality of these images. Current deep learning-based cloud removal methods rely on convolutional neural networks to extract features at the same scale, which can overlook detailed and global information, resulting in suboptimal cloud removal performance. To overcome these challenges, we develop a method for cloud removal that leverages multi-scale spatial information perception. Our technique employs convolution kernels of various sizes, enabling the integration of both global semantic information and local detail information. An attention mechanism enhances this process by targeting key areas within the images, and dynamically adjusting channel weights to improve feature reconstruction. We compared our method with current popular cloud removal methods across three datasets, and the results show that our proposed method improves metrics such as PSNR, SSIM, and cosine similarity, verifying the effectiveness of our method in cloud removal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
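The core idea of entry 11 above, features at several receptive-field sizes fused under channel attention, can be imitated with box filters standing in for learned convolution kernels and a softmax over pooled channel responses standing in for the learned attention. This is a rough sketch of the mechanism, not the paper's architecture:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multi_scale_features(img, sizes=(3, 7, 15)):
    """img: (rows, cols); one smoothed channel per kernel size.
    Small kernels keep local detail, large ones capture context."""
    return np.stack([uniform_filter(img, size=s) for s in sizes])

def channel_attention(feats):
    """Global-average-pool each channel, softmax the pooled responses,
    and fuse the channels with the resulting weights."""
    pooled = feats.mean(axis=(1, 2))
    w = np.exp(pooled - pooled.max())
    w /= w.sum()
    return (w[:, None, None] * feats).sum(axis=0)
```

In the actual network the kernels and attention weights are learned end to end; the sketch only shows how multi-scale responses and dynamic channel weights combine.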
12. Training-free thick cloud removal for Sentinel-2 imagery using value propagation interpolation.
- Author
- Arp, Laurens; Hoos, Holger; van Bodegom, Peter; Francis, Alistair; Wheeler, James; van Laar, Dean; Baratchi, Mitra
- Subjects
- CLIMATE change models; VEGETATION monitoring; ARTIFICIAL intelligence; COMMONS; CLOUDINESS
- Abstract
Remote sensing imagery has an ever-increasing impact on important downstream applications, such as vegetation monitoring and climate change modelling. Clouds obscuring parts of the images create a substantial bottleneck in most machine learning tasks that use remote sensing data, and being robust to this issue is an important technical challenge. In many cases, cloudy images cannot be used in a machine learning pipeline, leading to either the removal of the images altogether, or to using suboptimal solutions reliant on recent cloud-free imagery or the availability of pre-trained models for the exact use case. In this work, we propose VPint2, a cloud removal method built upon the VPint algorithm, an easy-to-apply data-driven spatial interpolation method requiring no prior training, to address the problem of cloud removal. This method leverages previously sensed cloud-free images to represent the spatial structure of a region, which is then used to propagate up-to-date information from non-cloudy pixels to cloudy ones. We also created a benchmark dataset called SEN2-MSI-T, composed of 20 scenes with 5 full-sized images each, belonging to five common land cover classes. We used this dataset to evaluate our method against three alternatives: mosaicking, an AutoML-based regression method, and the nearest similar pixel interpolator. Additionally, we compared against two previously published neural network-based methods on SEN2-MSI-T, and evaluate our method on a subset of the popular SEN12MS-CR-TS benchmark dataset. The methods are compared using several performance metrics, including the structural similarity index, mean absolute error, and error rates on a downstream NDVI derivation task. Our experimental results show that VPint2 performed significantly better than competing methods over 20 experimental conditions, improving performance by 2.4% to 34.3% depending on the condition. 
We also found that the performance of VPint2 only decreases marginally as the temporal distance of its reference image increases, and that, unlike typical interpolation methods, the performance of VPint2 remains strong for larger percentages of cloud cover. Our findings furthermore support a cloud removal evaluation approach founded on the transfer of cloud masks over the use of cloud-free previous acquisitions as ground truth. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
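A much-simplified illustration of value propagation in the spirit of VPint2 (entry 12 above), not the authors' algorithm: cloudy pixels are initialized from a cloud-free reference image and then iteratively blended with values propagated in from their clear neighbours, so up-to-date information flows inward while the reference supplies spatial structure:

```python
import numpy as np

def propagate_fill(cloudy_img, cloud_mask, reference, iters=50):
    """cloudy_img, reference: (rows, cols); cloud_mask: bool array.
    Training-free fill: no model is fitted, only values are propagated."""
    out = np.where(cloud_mask, reference, cloudy_img).astype(float)
    for _ in range(iters):
        # 4-neighbour average via padded shifts
        p = np.pad(out, 1, mode="edge")
        neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        # Only cloudy pixels are updated; the blend retains reference structure.
        out[cloud_mask] = 0.5 * out[cloud_mask] + 0.5 * neigh[cloud_mask]
    return out
```

Like VPint2, this needs no prior training, which is the property the paper trades against the accuracy of learned methods.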
13. Beyond clouds: Seamless flood mapping using Harmonized Landsat and Sentinel-2 time series imagery and water occurrence data.
- Author
- Li, Zhiwei; Xu, Shaofen; Weng, Qihao
- Subjects
- MARKOV random fields; BODIES of water; EMERGENCY management; NATURAL disasters; ARTIFICIAL satellites; SYNTHETIC aperture radar
- Abstract
Floods are among the most devastating natural disasters, posing significant risks to life, property, and infrastructure globally. Earth observation satellites provide data for continuous and extensive flood monitoring, yet limitations exist in the spatial completeness of monitoring using optical images due to cloud cover. Recent studies have developed gap-filling methods for reconstructing cloud-covered areas in water maps. However, these methods are not tailored for and validated in cloudy and rainy flooding scenarios with rapid water extent changes and limited clear-sky observations, leaving room for further improvements. This study investigated and developed a novel reconstruction method for time series flood extent mapping, supporting spatially seamless monitoring of flood extents. The proposed method first identified surface water from time series images using a fine-tuned large foundation model. Then, the cloud-covered areas in the water maps were reconstructed, adhering to the introduced submaximal stability assumption, on the basis of the prior water occurrence data in the Global Surface Water dataset. The reconstructed time series water maps were refined through spatiotemporal Markov random field modeling for the final delineation of flooding areas. The effectiveness of the proposed method was evaluated with Harmonized Landsat and Sentinel-2 datasets under varying cloud cover conditions, enabling seamless flood mapping at 2–3-day frequency and 30 m resolution. Experiments at four global sites confirmed the superiority of the proposed method. It achieved higher reconstruction accuracy with average F1-scores of 0.931 during floods and 0.903 before/after floods, outperforming the typical gap-filling method with average F1-scores of 0.871 and 0.772, respectively. 
Additionally, the maximum flood extent maps and flood duration maps, which were composed on the basis of the reconstructed water maps, were more accurate than those using the original cloud-contaminated water maps. The benefits of synthetic aperture radar images (e.g., Sentinel-1) for enhancing flood mapping under cloud cover conditions were also discussed. The method proposed in this paper provided an effective way for flood monitoring in cloudy and rainy scenarios, supporting emergency response and disaster management. The code and datasets used in this study have been made available online (https://github.com/dr-lizhiwei/SeamlessFloodMapper). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Deep neural networks for removing clouds and nebulae from satellite images.
- Author
- Glazyrina, Natalya; Muratkhan, Raikhan; Eslyamov, Serik; Murzabekova, Gulden; Aziyeva, Nurgul; Rysbekkyzy, Bakhytgul; Orynbayeva, Ainur; Baktiyarova, Nazira
- Subjects
- ARTIFICIAL neural networks; GENERATIVE adversarial networks; REMOTE-sensing images; NATURAL resources management; REMOTE sensing; DEEP learning
- Abstract
This research paper delves into contemporary methodologies for eradicating clouds and nebulae from space images utilizing advanced deep learning technologies such as conditional generative adversarial networks (conditional GAN), cyclic generative adversarial networks (CycleGAN), and space-attention generative adversarial networks (space-attention GAN). Cloud cover presents a significant obstacle in remote sensing, impeding accurate data analysis across various domains including environmental monitoring and natural resource management. The proposed techniques offer novel solutions by leveraging spatial attention mechanisms to identify and subsequently eliminate clouds from images, thus uncovering previously concealed information and enhancing the quality of space data. The study emphasizes the necessity for further research aimed at refining cloud removal algorithms to accommodate diverse detection conditions and enhancing the overall efficiency of deep learning in satellite image processing. By highlighting potential benefits and advocating for ongoing exploration, the paper underscores the importance of advancing cloud removal techniques to improve data quality and unlock new applications in Earth remote sensing. In conclusion, the proposed approaches hold promise in addressing the persistent challenge of cloud cover in space imagery, paving the way for more accurate data analysis and future advancements in remote sensing technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A Lightweight Machine-Learning Method for Cloud Removal in Remote Sensing Images Constrained by Conditional Information.
- Author
- Zhang, Wenyi; Zhang, Haoran; Zhang, Xisheng; Shen, Xiaohua; Zou, Lejun
- Subjects
- IMAGE reconstruction; DEEP learning; MACHINE learning; REMOTE sensing; PIXELS
- Abstract
Reconstructing cloud-covered regions in remote sensing (RS) images holds great promise for continuous ground object monitoring. A novel lightweight machine-learning method for cloud removal constrained by conditional information (SMLP-CR) is proposed. SMLP-CR constructs a multilayer perceptron with a presingle-connection layer (SMLP) based on multisource conditional information. The method employs multi-scale mean filtering and local neighborhood sampling to gain spatial information while also taking into account multi-spectral and multi-temporal information as well as pixel similarity. Meanwhile, the feature importance from the SMLP provides a selection order for conditional information—homologous images are prioritized over images from the same season as the restoration image, and images with close temporal distances rank last. The results of comparative experiments indicate that SMLP-CR shows apparent advantages in terms of visual naturalness, texture continuity, and quantitative metrics. Moreover, compared with popular deep-learning methods, SMLP-CR samples locally around cloud pixels instead of requiring a large cloud-free training area, so the samples show stronger correlations with the missing data, which demonstrates universality and superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A curvature-driven cloud removal method for remote sensing images
- Author
- Xiaoyu Yu; Jun Pan; Mi Wang; Jiangong Xu
- Subjects
- Cloud removal; curvature domain; boundary optimization; checkpoints refinement; Mathematical geography. Cartography; GA1-1776; Geodesy; QB275-343
- Abstract
Cloud coverage has become a significant factor affecting the availability of remote-sensing images in many applications. To mitigate the adverse impact of cloud coverage and recover ground information obscured by clouds, this paper presents a curvature-driven cloud removal method. Considering that each image can be regarded as a curved surface and that curvature reflects texture information well due to its dependence on the surface's degree of undulation, the presented method transforms the image from the natural domain to the curvature domain for information reconstruction, so as to maintain the details of the reference image. To improve the overall consistency and continuity of cloud removal results, the optimal boundary for replacing the cloud-covered area is determined first, so that the boundary passes through pixels with minimum curvature difference. Then, the curvature of the missing area is reconstructed from the curvature of the reference image, and the reconstructed curvature is inversely transformed to the natural domain to obtain a cloud-free image. In addition, considering possible significant radiometric differences between images, the initial cloud-free result is further refined based on specific checkpoints to improve local accuracy. To evaluate the performance of the proposed method, both simulated experiments and real data experiments are carried out. Experimental results show that the proposed method achieves satisfactory results in terms of radiometric accuracy and consistency.
- Published
- 2024
- Full Text
- View/download PDF
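Entry 16 above reconstructs the missing area in a curvature domain and transforms back. Using the discrete Laplacian as a stand-in for curvature, the inverse transform amounts to solving a Poisson problem; the Jacobi iteration below is classic Poisson editing under that assumption, not the paper's exact formulation (their boundary optimization and checkpoint refinement are omitted):

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian, used here as a curvature proxy."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def curvature_fill(target, mask, reference, iters=500):
    """Inside `mask`, solve for intensities whose Laplacian matches the
    reference's, with the clear pixels of `target` as boundary values."""
    lap_ref = laplacian(reference)
    out = np.where(mask, reference.mean(), target).astype(float)
    for _ in range(iters):
        p = np.pad(out, 1, mode="edge")
        neigh = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        # Jacobi update: match the reference "curvature" inside the mask.
        out[mask] = (neigh[mask] - lap_ref[mask]) / 4.0
    return out
```

Working in this derivative domain transfers the reference's texture while letting absolute brightness adapt to the surrounding clear pixels, which is why such methods tolerate radiometric differences between acquisitions.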
17. Multi-Stage Frequency Attention Network for Progressive Optical Remote Sensing Cloud Removal.
- Author
- Wu, Caifeng; Xu, Feng; Li, Xin; Wang, Xinyuan; Xu, Zhennan; Fang, Yiwei; Lyu, Xin
- Subjects
- OPTICAL remote sensing; STANDARD deviations; IMAGE reconstruction; DEEP learning; SIGNAL-to-noise ratio
- Abstract
Cloud contamination significantly impairs optical remote sensing images (RSIs), reducing their utility for Earth observation. The traditional cloud removal techniques, often reliant on deep learning, generally aim for holistic image reconstruction, which may inadvertently alter the intrinsic qualities of cloud-free areas, leading to image distortions. To address this issue, we propose a multi-stage frequency attention network (MFCRNet), a progressive paradigm for optical RSI cloud removal. MFCRNet hierarchically deploys frequency cloud removal modules (FCRMs) to refine the cloud edges while preserving the original characteristics of the non-cloud regions in the frequency domain. Specifically, the FCRM begins with a frequency attention block (FAB) that transforms the features into the frequency domain, enhancing the differentiation between cloud-covered and cloud-free regions. Moreover, a non-local attention block (NAB) is employed to augment and disseminate contextual information effectively. Furthermore, we introduce a collaborative loss function that amalgamates semantic, boundary, and frequency-domain information. The experimental results on the RICE1, RICE2, and T-Cloud datasets demonstrate that MFCRNet surpasses the contemporary models, achieving superior performance in terms of mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), validating its efficacy regarding the cloud removal from optical RSIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. RFE-VCR: Reference-enhanced transformer for remote sensing video cloud removal.
- Author
- Jin, Xianyu; He, Jiang; Xiao, Yi; Lihe, Ziyang; Liao, Xusi; Li, Jie; Yuan, Qiangqiang
- Subjects
- SURFACE of the earth; VIDEOS
- Abstract
As a novel data source for earth observation, satellite video can provide large-scale temporal information for dynamic monitoring. However, the cloud occlusion prevents satellite video from continuous and seamless observation of the earth's surface. We propose the first satellite video cloud removal model RFE-VCR to approach this problem. In RFE-VCR, an efficient strategy of taking distant frames into training period is applied. A reference enhance block based on gated aggregation layers is proposed to explore the complementary information hidden in distant frames. A bidirectional local enhance block using deformable convolution is improved for feature refinement. Moreover, a decoupled temporal-spatial transformer is utilized for long-distance dependence modeling. Simulative and real experiments on Jilin-1 satellite videos demonstrate that our proposed network can achieve remarkable performance in video cloud removal task, as well as sensitive object hiding and high-reflection removal. More dynamic results of our experiments can be found at https://xyjin99.github.io/RFE-VCR/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. CloudSeg: A multi-modal learning framework for robust land cover mapping under cloudy conditions.
- Author
- Xu, Fang; Shi, Yilei; Yang, Wen; Xia, Gui-Song; Zhu, Xiao Xiang
- Subjects
- LAND cover; SURFACE of the earth; SYNTHETIC aperture radar; CLOUDINESS; IMAGE analysis
- Abstract
Cloud coverage poses a significant challenge to optical image interpretation, degrading ground information on Earth's surface. Synthetic aperture radar (SAR), with its ability to penetrate clouds, provides supplementary information to optical data. However, existing optical-SAR fusion methods predominantly focus on cloud-free scenarios, neglecting the practical challenge of semantic segmentation under cloudy conditions. To tackle this issue, we propose CloudSeg, a novel framework tailored for land cover mapping in the presence of clouds. It addresses the challenges posed by cloud cover from two aspects: reducing semantic ambiguity in the areas of the image that are obscured by clouds and enhancing effective information in the unobstructed portions. Specifically, CloudSeg employs a multi-task learning strategy to simultaneously handle the low-level visual task and the high-level semantic understanding task, mitigating the semantic ambiguity caused by cloud cover by acquiring discriminative features through an auxiliary cloud removal task. Additionally, CloudSeg incorporates a knowledge distillation strategy, which utilizes the knowledge learned by the teacher network under cloud-free conditions to guide the student network to overcome the interference of cloud-covered areas, enhancing the valuable information from the unobstructed parts of cloud-covered images. Extensive experiments conducted on two datasets, M3M-CR and WHU-OPT-SAR, demonstrate the effectiveness and superiority of the proposed CloudSeg method for land cover mapping under cloudy conditions. Specifically, CloudSeg outperforms the state-of-the-art competitors by 3.16% in terms of mIoU on M3M-CR and by 5.56% on WHU-OPT-SAR, highlighting its substantial advantages for analyzing regions frequently obscured by clouds. Codes are available at https://github.com/xufangchn/CloudSeg. [ABSTRACT FROM AUTHOR]
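The mIoU figures quoted above measure segmentation quality as the mean, over classes, of the intersection-over-union between predicted and reference label maps. A minimal sketch (the class count and toy label maps are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

A reported gain such as "+3.16% mIoU" is simply the difference of this statistic between two methods evaluated on the same reference maps.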
- Published
- 2024
- Full Text
- View/download PDF
20. A multi-temporal remote sensing image cloud removal algorithm combining U-Net and STGAN.
- Author
-
王, 卓, 马, 骏, 郭, 毅, 周, 川杰, 柏, 彬, and 李, 峰
- Subjects
MACHINE learning ,IMAGE reconstruction ,CLOUDINESS ,REMOTE sensing ,DEEP learning - Abstract
Copyright of Journal of Remote Sensing is the property of Editorial Office of Journal of Remote Sensing & Science Publishing Co. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
21. A curvature-driven cloud removal method for remote sensing images.
- Author
-
Yu, Xiaoyu, Pan, Jun, Wang, Mi, and Xu, Jiangong
- Subjects
REMOTE-sensing images ,CURVED surfaces ,REMOTE sensing ,CURVATURE ,PIXELS ,RADIOMETRY - Abstract
Cloud coverage has become a significant factor affecting the availability of remote-sensing images in many applications. To mitigate the adverse impact of cloud coverage and recover ground information obscured by clouds, this paper presents a curvature-driven cloud removal method. Since each image can be regarded as a curved surface, and curvature reflects texture information well because it depends on the surface's degree of undulation, the method transforms the image from the natural domain to the curvature domain for information reconstruction, preserving the details of the reference image. To improve the overall consistency and continuity of the cloud removal results, the optimal boundary for replacing the cloud-covered area is determined first, so that the boundary passes through pixels with minimum curvature difference. The curvature of the missing area is then reconstructed from the curvature of the reference image, and the reconstructed curvature is inversely transformed back to the natural domain to obtain a cloud-free image. In addition, because significant radiometric differences may exist between images, the initial cloud-free result is further refined using specific checkpoints to improve local accuracy. Both simulated and real-data experiments were carried out to evaluate the method. Experimental results show that the proposed method achieves satisfactory radiometric accuracy and consistency. [ABSTRACT FROM AUTHOR]
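The abstract does not give the exact curvature definition used; a common choice, assumed here purely for illustration, treats the image as a surface z = I(x, y) and computes its mean curvature, kappa = div(grad(I) / sqrt(1 + |grad(I)|^2)):

```python
import numpy as np

def mean_curvature(img):
    """Mean curvature of the image viewed as a surface z = I(x, y):
    kappa = div( grad(I) / sqrt(1 + |grad(I)|^2) ).
    Strongly textured (undulating) regions yield large |kappa|."""
    Iy, Ix = np.gradient(np.asarray(img, float))  # d/drow, d/dcol
    norm = np.sqrt(1.0 + Ix ** 2 + Iy ** 2)
    nx, ny = Ix / norm, Iy / norm
    # Divergence of the normalized gradient field.
    return np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
```

A planar ramp has zero curvature everywhere, which matches the intuition that curvature responds to texture rather than to smooth brightness trends.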
- Published
- 2024
- Full Text
- View/download PDF
22. Variational-Based Spatial–Temporal Approximation of Images in Remote Sensing.
- Author
-
Amirfakhrian, Majid and Samavati, Faramarz F.
- Subjects
- *
REMOTE sensing , *STANDARD deviations , *REMOTE-sensing images , *IMAGE analysis , *VECTOR fields , *CLOUDINESS - Abstract
Cloud cover and shadows often hinder the accurate analysis of satellite images, impacting applications such as digital farming, land monitoring, environmental assessment, and urban planning. This paper presents a new approach to enhancing cloud-contaminated satellite images using a novel variational model that approximates the combination of the temporal and spatial components of satellite imagery. Leveraging this model, we derive two spatial-temporal methods that compute the missing or contaminated data in cloudy images using seamless Poisson blending. In the first method, we extend Poisson blending to compute the spatial-temporal approximation, using the pixel-wise temporal approximation as the guiding vector field. In the second, more general, variation-based method, we use the rate of change in the temporal domain to divide the missing region into low-variation and high-variation sub-regions, considering the temporal variation in specific regions to further refine the spatial-temporal approximation. The proposed methods have the same complexity as conventional methods, which is linear in the number of pixels in the region of interest. Our comprehensive evaluation demonstrates the effectiveness of the proposed methods through quantitative metrics, including the Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Metric (SSIM), revealing significant improvements over existing approaches. Additionally, the evaluations offer insights into how to choose between the two methods for specific scenarios, taking into account the temporal and spatial resolutions as well as the scale and extent of the missing data. [ABSTRACT FROM AUTHOR]
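Seamless Poisson blending fills a masked region by solving a Poisson equation whose right-hand side is the divergence of a guidance field (here, the gradient of the pixel-wise temporal approximation), with the observed cloud-free pixels as Dirichlet boundary values. A minimal Jacobi-iteration sketch, not the paper's implementation (edges wrap via `np.roll`, so the mask should not touch the image border):

```python
import numpy as np

def poisson_fill(target, guide, mask, iters=2000):
    """Fill the masked (cloudy) region of `target` by solving the discrete
    Poisson equation lap(u) = lap(guide) inside the mask, holding the
    observed cloud-free pixels fixed as boundary values."""
    u = np.where(mask, guide, target).astype(float)
    g = np.asarray(guide, float)
    # Laplacian of the guidance field = divergence of its gradient.
    lap_g = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
             + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
    for _ in range(iters):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
              + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, (nb - lap_g) / 4.0, u)  # update cloudy pixels only
    return u
```

In practice a sparse direct solver replaces the iteration, but the fixed-point structure is the same; the linear-in-pixels complexity claimed above corresponds to the per-iteration cost over the region of interest.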
- Published
- 2024
- Full Text
- View/download PDF
23. REMOVING CLOUDINESS ON OPTICAL SPACE IMAGES BY A GENERATIVE ADVERSARIAL NETWORK MODEL USING SAR IMAGES.
- Author
-
Romanchuk, Mykola, Zavada, Andrii, Naumchak, Olena, Naumchak, Leonid, and Kosheva, Iryna
- Subjects
GENERATIVE adversarial networks ,SYNTHETIC aperture radar ,OPTICAL radar ,SURFACE of the earth ,IMAGE reconstruction - Abstract
The object of this study is the process of removing cloudiness from optical space images. Solving the cloud removal task is an important stage in processing Earth remote probing (ERP) data, aimed at reconstructing the information hidden by these atmospheric disturbances. The analyzed shortcomings of fusing purely optical data led to the conclusion that the best solution to the cloud removal problem is a combination of optical and radar data. Compared to conventional image processing methods, neural networks can provide more efficient and better performance indicators owing to their ability to adapt to different conditions and types of images. As a result, a generative adversarial network (GAN) model with a cyclic-sequential 7-ResNeXt block architecture was constructed for cloud removal in optical space imagery using synthetic aperture radar (SAR) imagery. The constructed model generates fewer artifacts when transforming the image than other models that process multi-temporal images. Experimental results on the SEN12MS-CR dataset demonstrate the ability of the model to remove dense clouds from simultaneous Sentinel-2 space images, confirmed by the pixel reconstruction of all multispectral channels with an average RMSE value of 2.4%. To increase the informativeness of the neural network during model training, a SAR image with a C-band signal is used, which has a longer wavelength and thereby provides medium-resolution data about the geometric structure of the Earth's surface. Applying this model could improve situational awareness at all levels of control over the Armed Forces (AF) of Ukraine through the use of current Earth observations from various ERP systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Imagery Time Series Cloud Removal and Classification Using Long Short Term Memory Neural Networks.
- Author
-
Alonso-Sarria, Francisco, Valdivieso-Ros, Carmen, and Gomariz-Castillo, Francisco
- Subjects
- *
RECURRENT neural networks , *LAND cover , *SATELLITE-based remote sensing , *RANDOM forest algorithms , *REMOTE-sensing images , *LONG short-term memory - Abstract
The availability of high spatial and temporal resolution imagery, such as that provided by the Sentinel satellites, allows the use of image time series to classify land cover. Recurrent neural networks (RNNs) are a clear candidate for such an approach; however, the presence of clouds poses a difficulty. In this paper, random forest (RF) and RNN models are used to reconstruct cloud-covered pixels using data from temporally adjacent images rather than pixels in the same image. Additionally, two RNN architectures are tested to classify land cover from the series, one treating reflectances as time series and the other treating spectral signatures as time series; the results are compared with an RF classification. The cloud removal results show high accuracy, with a maximum RMSE of 0.057 for the RNN and 0.038 for the RF over all images and bands analysed. In terms of classification, the RNN model obtained higher accuracy (over 0.92 on the test data for the best hyperparameter combinations) than the RF model (0.905). However, the temporal-spectral model accuracies did not reach 0.9 in any case. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Two-Level Supervised Network for Small Ship Target Detection in Shallow Thin Cloud-Covered Optical Satellite Images
- Author
-
Fangjian Liu, Fengyi Zhang, Mi Wang, and Qizhi Xu
- Subjects
ship detection ,cloud removal ,double-layer supervised network ,object detection ,optical satellite images ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Ship detection under cloudy and foggy conditions is a significant challenge in remote sensing satellite applications, as cloud cover often reduces contrast between targets and backgrounds. Additionally, ships are small and affected by noise, making them difficult to detect. This paper proposes a Cloud Removal and Target Detection (CRTD) network to detect small ships in images with thin cloud cover. The process begins with a Thin Cloud Removal (TCR) module for image preprocessing. The preprocessed data are then fed into a Small Target Detection (STD) module. To improve target–background contrast, we introduce a Target Enhancement module. The TCR and STD modules are integrated through a dual-stage supervision network, which hierarchically processes the detection task to enhance data quality, minimizing the impact of thin clouds. Experiments on the GaoFen-4 satellite dataset show that the proposed method outperforms existing detectors, achieving an average precision (AP) of 88.9%.
- Published
- 2024
- Full Text
- View/download PDF
26. A New Sparse Collaborative Low-Rank Prior Knowledge Representation for Thick Cloud Removal in Remote Sensing Images.
- Author
-
Sun, Dong-Lin, Ji, Teng-Yu, and Ding, Meng
- Subjects
- *
KNOWLEDGE representation (Information theory) , *PRIOR learning , *REMOTE sensing - Abstract
Efficiently removing clouds from remote sensing imagery presents a significant challenge, yet it is crucial for a variety of applications. This paper introduces a novel sparse function, named the tri-fiber-wise sparse function, meticulously engineered for the targeted tasks of cloud detection and removal. This function is adept at capturing cloud characteristics across three dimensions, leveraging the sparsity of mode-1, -2, and -3 fibers simultaneously to achieve precise cloud detection. By incorporating the concept of tensor multi-rank, which describes the global correlation, we have developed a tri-fiber-wise sparse-based model that excels in both detecting and eliminating clouds from images. Furthermore, to ensure that the cloud-free information accurately matches the corresponding areas in the observed data, we have enhanced our model with an extended box-constraint strategy. The experiments showcase the notable success of the proposed method in cloud removal. This highlights its potential and utility in enhancing the accuracy of remote sensing imagery. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Cloud Removal of Full-Disk Solar Hα Images Based on RPix2PixHD.
- Author
-
Ma, Ying, Song, Wei, Sun, Haoying, Liu, Xiangchun, and Lin, Ganghua
- Abstract
Clouds in the sky can significantly affect full-disk observations of the Sun. In cloud-covered full-disk Hα images, certain solar features become obscured, posing challenges for further solar research. Obtaining both cloud-covered and corresponding cloud-free images is often challenging, resulting in poor alignment of image pairs in the dataset, which adversely affects the performance of cloud removal models. We use RPix2PixHD, a novel network designed to translate cloud-covered images into cloud-free ones while mitigating the effects of misaligned data on the model. RPix2PixHD comprises two main components, Pix2PixHD and RegNet. Pix2PixHD includes a multiresolution generator and a multiscale discriminator. The generator takes cloud-covered images as input to produce cloud-free images. RegNet computes a deformation field using the generated cloud-free images and the ground truth cloud-free images. This deformation field is then used to resample the generated cloud-free images, resulting in registered images. The correction loss is calculated based on these registered images and utilized for training the generator, thereby enhancing the model’s cloud removal effectiveness. We conducted cloud removal experiments on full-disk Hα images obtained from the Huairou Solar Observing Station (HSOS). The experimental results demonstrate that RPix2PixHD effectively removes clouds from cloud-covered solar Hα images, successfully restoring solar feature details and outperforming comparative methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. SCT-CR: A synergistic convolution-transformer modeling method using SAR-optical data fusion for cloud removal
- Author
-
Jianshen Ma, Yumin Chen, Jun Pan, Jiangong Xu, Zhanghui Li, Rui Xu, and Ruoxuan Chen
- Subjects
Cloud removal ,Data fusion ,SAR ,Synergistic Convolution ,Transformer ,Physical geography ,GB3-5030 ,Environmental sciences ,GE1-350 - Abstract
Traditional CNNs struggle with SAR-optical image fusion for cloud removal due to SAR image noise, feature-space differences, and random cloud distribution, which often leads to blurred results with little texture information. This paper proposes a synergistic convolution-transformer cloud removal method (SCT-CR), built on a specially designed synergistic convolution module that enables the synergistic fusion of SAR and optical imagery. The network employs a transformer module in the high-dimensional section to better perceive the contextual information of the image and achieve intelligent extraction of global image features. The proposed SCT-CR network successfully addresses the problem of blur in generated images and makes full use of the texture information present in SAR images. The model is evaluated for spectral fidelity and the recovery of visual quality. Experimental results on the public datasets SEN12MS-CR and LuojiaSET-OSFCR show that the proposed model delivers stable and optimal performance. On the SEN12MS-CR dataset, the proposed model improves the SSIM metric by 15.7%, 10.2%, 4.9%, and 0.5% compared to the SAR2OPT, SarOptcAGN, DSen2-CR, and GLF-CR models, respectively; on the LuojiaSET-OSFCR dataset, the improvements are 20.0%, 10.0%, 6.6%, and 1.9%, respectively.
- Published
- 2024
- Full Text
- View/download PDF
29. MCGFE-CR: Cloud Removal With Multiscale Context-Guided Feature Enhancement Network
- Author
-
Qiang Bie and Xiaojie Su
- Subjects
SAR ,optical imagery ,cloud-free images ,cloud removal ,image reconstruction ,Transformer ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Optical remote sensing imagery is often contaminated by clouds and cloud shadows, leading to the loss of ground information and limiting the application of optical images in fields such as change detection and object classification. Removing clouds and cloud shadows is therefore one of the important tasks in processing optical remote sensing imagery. Currently, the better-performing cloud removal methods are mainly based on Convolutional Neural Networks (CNNs). However, CNNs fail to capture global context information, resulting in the loss of global context features during image reconstruction. The underlying architecture of Transformer networks is the attention mechanism, which captures global context features better. Inspired by this, we propose a Multi-Scale Context-Guided Feature Enhancement Cloud Removal Network (MCGFE-CR), which can directly reconstruct cloud-free images from SAR and cloud-contaminated optical imagery. In MCGFE-CR, we embed a Multi-Scale Context-Attention Guidance (MSCAG) block, which guides global and local context information at multiple scales into cloudy optical images. To enhance the global structural features after fusion and reduce the impact of SAR speckle noise, we incorporate a Residual Block with Channel Attention (RBCA). The network was trained and evaluated for cloud removal on a global dataset and compared with the Hierarchical Spectral and Structural Preservation Fusion Network (HS2P), the Deep Residual Neural Network and SAR-Optical Data Fusion Network (DSen2-CR), the Global-Local Fusion Enhanced SAR Cloud Removal Network (GLF-CR), and the Generative Adversarial Network for SAR Image to Optical Image Cloud Removal (GAN-CR). The method showed significant improvements in Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Spectral Angle Mapper (SAM), and Structural Similarity Index (SSIM): MAE and RMSE were reduced by up to 0.0056 and 0.0129, respectively, while PSNR, SAM, and SSIM were increased by up to 3.8004, 3.2118, and 0.0385, respectively. The experimental results demonstrate that this method achieves higher spectral fidelity and richer structural texture information when reconstructing various types of ground information and optical images with different cloud coverage.
- Published
- 2024
- Full Text
- View/download PDF
30. Dense NDVI Time Series by Fusion of Optical and SAR-Derived Data
- Author
-
Thomas Rosberg and Michael Schmitt
- Subjects
Cloud removal ,data fusion ,deep learning ,gap filling ,recurrent neural network (RNN) ,vegetation monitoring ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Gaps in normalized difference vegetation index (NDVI) time series resulting from frequent cloud cover pose significant challenges in remote sensing for various applications, such as agricultural monitoring or forest disturbance detection. This study introduces a novel method to generate dense NDVI time series without these gaps, enhancing the reliability and application range of NDVI time series. We combine Sentinel-2 NDVI time series containing cloud-induced gaps with NDVI time series derived from the Sentinel-1 synthetic aperture radar sensor using a gated recurrent unit, a variant of recurrent neural networks. To train and evaluate the model, we use data from 1206 regions around the world, comprising approximately 283 000 Sentinel-1 and Sentinel-2 images, collected between September 2019 and April 2021. The proposed approach demonstrates excellent performance with a very low mean absolute error of 0.0478, effectively filling even long-lasting gaps while being applicable globally. Thus, our method holds significant promise for improving the efficiency of numerous downstream applications previously limited by cloud-induced gaps.
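NDVI itself is computed per pixel from the near-infrared and red reflectances. The sketch below pairs that standard definition with a naive linear interpolation over cloud gaps as a baseline; the paper instead learns the gap values with a gated recurrent unit driven by Sentinel-1 backscatter:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index, in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def fill_gaps(series, valid):
    """Baseline temporal gap filling: linearly interpolate an NDVI time
    series across cloud-induced gaps. `valid` is a boolean array marking
    cloud-free acquisitions."""
    t = np.arange(len(series))
    return np.interp(t, t[valid], np.asarray(series, float)[valid])
```

The mean absolute error of 0.0478 quoted above is measured on the NDVI scale, i.e. roughly 2.4% of the index's full [-1, 1] range.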
- Published
- 2024
- Full Text
- View/download PDF
31. CERMF-Net: A SAR-Optical Feature Fusion for Cloud Elimination From Sentinel-2 Imagery Using Residual Multiscale Dilated Network
- Author
-
Jayakrishnan Anandakrishnan, Venkatesan M Sundaram, and Prabhavathy Paneer
- Subjects
Cloud removal ,data fusion ,multiscale convolutional neural network (CNN) ,Sentinel-2 ,synthetic aperture radar (SAR) ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Satellite-based Earth observation activities, such as urban and agricultural land monitoring, change detection, and disaster management, depend on adequate spatial and temporal ground observations. The presence of aerosols and clouds distorts quality optical observations of the ground and reduces temporal resolution, degrading the learning and extraction of valuable information. The uncertainty in the occurrence of clouds in the Earth's atmosphere, together with possible land changes between temporal visits, is the major challenge in cloud-free reconstruction. Advances in deep learning enable learning from multisensor inputs, and the cloud removal problem benefits from auxiliary information for better reconstruction. This research introduces a synthetic aperture radar (SAR)-guided feature fusion for Cloud Elimination from Sentinel-2 multispectral imagery using a Residual Multiscale dilated Network (CERMF-Net). The proposed CERMF-Net fuses SAR with Sentinel-2 optical data and learns spatial-temporal dependencies and physical-geometrical properties for effective cloud removal. The generalizability and robustness of CERMF-Net are tested on the SEN12MS-CR dataset, a global real cloud-removal dataset, where CERMF-Net displays superior performance compared with state-of-the-art techniques.
- Published
- 2024
- Full Text
- View/download PDF
32. Feature enhancement network for cloud removal in optical images by fusing with SAR images.
- Author
-
Duan, Chenxi, Belgiu, Mariana, and Stein, Alfred
- Subjects
- *
OPTICAL images , *SYNTHETIC aperture radar , *REMOTE-sensing images , *REMOTE sensing , *OPTICAL remote sensing , *IMAGE analysis , *TASK analysis - Abstract
The presence of cloud-covered pixels is inevitable in optical remote-sensing images, so reconstructing cloud-covered details is important for subsequent image analysis tasks. Aiming to tackle the high computational resource requirements that hinder application at scale, this paper proposes a Feature Enhancement Network (FENet) for removing clouds in satellite images by fusing Synthetic Aperture Radar (SAR) and optical images. The proposed network consists of a purpose-designed Feature Aggregation Residual Block (FAResblock) and a Feature Enhancement Block (FEBlock). FENet is evaluated on the publicly available SEN12MS-CR dataset and achieves promising results compared to the benchmark and state-of-the-art methods in terms of both visual quality and quantitative evaluation metrics, demonstrating that the proposed feature enhancement network is an effective solution for satellite image cloud removal with reduced computation and runtime. The proposed network has potential for practical applications in remote sensing due to its effectiveness and efficiency. The developed code and trained model will be available at . [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Blind single-image-based thin cloud removal using a cloud perception integrated fast Fourier convolutional network.
- Author
-
Guo, Yujun, He, Wei, Xia, Yu, and Zhang, Hongyan
- Subjects
- *
REMOTE sensing , *LANDSAT satellites , *PRIOR learning , *SPECTROGRAMS - Abstract
Remote sensing images are frequently contaminated by clouds, which often degrade the performance of subsequent applications. Cloud removal is therefore a standard step in remote sensing image preprocessing, and single-image-based thin cloud removal is a well-established area of research. Existing single-image-based thin cloud removal methods, however, lack the capacity to perform efficient long-range modeling and consider physical attributes simultaneously. To fill this gap, a novel blind single-image-based thin cloud removal method, called the cloud perception integrated fast Fourier convolutional network (CP-FFCN), was designed and implemented. The CP-FFCN consists of two modules: a cloud perception module (CPM) and a fast Fourier convolution (FFC)-based reconstruction module (FFCN). The CPM uses a frequency-spatial attention mechanism to realize long-range modeling of clouds and globally detect them in the cloudy image, enabling the CP-FFCN to remove clouds without external prior knowledge of the cloud distribution. The reconstruction module adopts an FFC-based U-Net architecture to recover clean images from cloudy scenarios, guided by the cloud locations detected by the CPM. In addition, the FFC blocks deployed in the encoder and decoder of the U-Net selectively learn the attributes of clouds and fog from frequency spectrograms to remove clouds and reconstruct the underlying ground objects. With these two modules, the CP-FFCN selectively learns frequency features for adequate cloud separation while efficiently modeling long-range information for comprehensive scene reconstruction. We adopted Google Earth data and Landsat-8 imagery to train the CP-FFCN model and evaluated it on simulated and naturally occurring cloudy scenarios. The visual outcomes illustrate that the proposed CP-FFCN successfully removes thin and small-scale thick clouds in complex ground-object scenarios, without external cloud masks or additional reference data. The quantitative analyses further demonstrate the higher effectiveness of the CP-FFCN compared with several other state-of-the-art thin cloud removal methods, yielding a PSNR above 39.24 and an SSIM above 0.98 on Landsat 8 images. [ABSTRACT FROM AUTHOR]
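The PSNR and SSIM figures reported across these abstracts follow standard definitions; a minimal sketch for reflectance images scaled to [0, 1] (the SSIM shown is the simplified single-window form, whereas the standard metric averages it over local Gaussian windows):

```python
import numpy as np

def psnr(ref, est, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Single-window (global) SSIM; the standard metric averages this
    statistic over local Gaussian windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den
```

For 16-bit Landsat products, `max_val` would be the sensor's dynamic range rather than 1.0.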
- Published
- 2023
- Full Text
- View/download PDF
34. A Cloud Coverage Image Reconstruction Approach for Remote Sensing of Temperature and Vegetation in Amazon Rainforest.
- Author
-
Bezerra, Emili, Mafalda, Salomão, Alvarez, Ana Beatriz, Uman-Flores, Diego Armando, Perez-Torres, William Isaac, and Palomino-Quispe, Facundo
- Subjects
REMOTE sensing ,LAND surface temperature ,NORMALIZED difference vegetation index ,STANDARD deviations ,RAIN forests ,REMOTE-sensing images ,IMAGE reconstruction - Abstract
Remote sensing involves obtaining information about an area located on Earth. In the Amazon region, clouds are common, making it difficult to visualize important terrestrial information in the image, such as vegetation and temperature. To estimate land surface temperature (LST) and the normalized difference vegetation index (NDVI) from satellite images with cloud coverage, an inpainting approach is applied to remove clouds and restore the image in the removed region. This paper proposes using the neural network LaMa (large mask inpainting) and its scaled variant, Big LaMa, for the automatic reconstruction of satellite images. Experiments are conducted on Landsat-8 satellite images of the Amazon rainforest in the state of Acre, Brazil. To evaluate the architectures' accuracy, the RMSE (root mean squared error), SSIM (structural similarity index), and PSNR (peak signal-to-noise ratio) metrics were used. The LST and NDVI of the reconstructed images were compared qualitatively using scatter plots and quantitatively using the chosen metrics. The experimental results show that the Big LaMa architecture restores images more effectively and robustly in terms of visual quality, while the LaMa network is marginally superior on the measured metrics for medium-sized masked areas. When comparing the NDVI and LST of images reconstructed under real cloud coverage, Big LaMa achieved strong visual results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. 'Seeing' Beneath the Clouds—Machine‐Learning‐Based Reconstruction of North African Dust Plumes
- Author
-
Franz Kanngießer and Stephanie Fiedler
- Subjects
mineral dust ,North Africa ,MSG SEVIRI ,machine learning ,cloud removal ,satellite remote sensing ,Geology ,QE1-996.5 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Mineral dust is one of the most abundant atmospheric aerosol species and has various far-reaching effects on the climate system and adverse impacts on air quality. Satellite observations can provide spatio-temporal information on dust emission and transport pathways; however, satellite observations of dust plumes are frequently obscured by clouds. We use a method based on established, machine-learning-based image inpainting techniques to restore the spatial extent of dust plumes for the first time. We train an artificial neural net (ANN) on modern reanalysis data paired with satellite-derived cloud masks. The trained ANN is applied to cloud-masked, gray-scaled images derived from false-color images that indicate elevated dust plumes in bright magenta. The images were obtained from the Spinning Enhanced Visible and Infrared Imager instrument onboard the Meteosat Second Generation satellite. We find that up to 15% of summertime satellite observations in West Africa and 10% in Nubia miss dust plumes due to cloud cover. We use the new dust-plume data to demonstrate a novel approach for validating spatial patterns of the operational forecasts provided by the World Meteorological Organization Dust Regional Center in Barcelona. The comparison elucidates often similar dust plume patterns in the forecasts and the satellite-based reconstruction, and once trained, the reconstruction is computationally inexpensive. Our proposed reconstruction provides a new opportunity for validating dust aerosol transport in numerical weather models and Earth system models, and it can be adapted to other aerosol species and trace gases.
- Published
- 2024
- Full Text
- View/download PDF
36. Virtual image-based cloud removal for Landsat images
- Author
-
Zhanpeng Wang, Demin Zhou, Xiaojuan Li, Lin Zhu, Huili Gong, and Yinghai Ke
- Subjects
cloud removal ,time-series images ,landsat ,similar pixels ,gap filling ,thick clouds ,Mathematical geography. Cartography ,GA1-1776 ,Environmental sciences ,GE1-350 - Abstract
The inevitable thick cloud contamination in Landsat images has severely limited the usability and applications of these images. Developing cloud removal algorithms has been a hot research topic in recent years. Many previous algorithms used one or multiple cloud-free image(s) in the same area acquired on other date(s) as reference image(s) to reconstruct missing pixel values. However, it remains challenging to determine the optimal reference image(s). In addition, abrupt land cover change can substantially degrade the reconstruction accuracies. To address these issues, we present a new cloud removal algorithm called Virtual Image-based Cloud Removal (VICR). For each cloud region, VICR reconstructs the missing surface reflectance by three steps: virtual image within cloud region construction based on time-series reference images, similar pixel selection using the newly proposed temporally weighted spectral distance (TWSD), and residual image estimation. By establishing two buffer zones around the cloud region, VICR allows automatic selection of the optimal set of time-series reference images. The effectiveness of VICR was validated at four testing sites with different landscapes (i.e. urban, croplands, and wetlands) and land change patterns (i.e. phenological change, abrupt change caused by flooding and tidal inundation), and the performances were compared with mNSPI (modified neighborhood similar pixel interpolator), WLR (weighted linear regression) and ARRC (AutoRegression to Remove Clouds). Experimental results showed that VICR outperformed the other algorithms and achieved higher Correlation Coefficients and lower Root Mean Square Errors in surface reflectance estimation at the four sites. The improvement is particularly noticeable at the sites with abrupt land change. By considering the difference in the contributions from the reference images, TWSD can select more reliable similar pixels to improve the prediction of abrupt change in surface reflectance. 
Moreover, VICR is more robust to different cloud sizes and to changing reference images. VICR is also computationally much faster than ARRC. The framework for time-series image cloud removal by VICR has great potential to be applied to the processing of large datasets.
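As an illustration of the similar-pixel selection step above, a temporally weighted spectral distance can be sketched as follows; the array layout, the weighting scheme, and the function names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def twsd(candidates, target, weights):
    """Temporally weighted spectral distance (illustrative form).

    candidates: (N, T, B) time series of N candidate pixels over T reference
                images with B spectral bands
    target:     (T, B) time series of a pixel near the cloud region
    weights:    (T,) per-reference-image weights, larger for references
                assumed to contribute more (e.g. acquired closer in time)
    """
    diff = candidates - target[None, :, :]          # (N, T, B)
    rmse_t = np.sqrt((diff ** 2).mean(axis=2))      # (N, T) per-image spectral RMSE
    return (rmse_t * weights[None, :]).sum(axis=1) / weights.sum()

def select_similar_pixels(candidates, target, weights, k=5):
    """Return the indices of the k candidates with the smallest TWSD."""
    return np.argsort(twsd(candidates, target, weights))[:k]
```

Weighting the per-image spectral distances lets references that track the target pixel's recent trajectory dominate the selection, which is what helps with abrupt land change.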
- Published
- 2023
- Full Text
- View/download PDF
37. A comprehensive review of spatial-temporal-spectral information reconstruction techniques
- Author
-
Qunming Wang, Yijie Tang, Yong Ge, Huan Xie, Xiaohua Tong, and Peter M. Atkinson
- Subjects
Spatial reconstruction ,Cloud removal ,Temporal reconstruction ,Spatio-temporal fusion ,Spectral reconstruction ,Spatio-spectral fusion ,Physical geography ,GB3-5030 ,Science - Abstract
Fine spatial resolution remote sensing images are crucial sources of data for monitoring the Earth's surface. Due to defects in sensors and the complicated imaging environment, however, fine spatial resolution images always suffer from various degrees of information loss. According to the basic attributes of remote sensing images, the information loss generally falls into three dimensions, that is, the spatial, temporal and spectral dimensions. In recent decades, many methods have been developed to cope with this information loss problem in the three dimensions, which are termed spatial reconstruction, temporal reconstruction and spectral reconstruction in this paper. This paper presents a comprehensive review of all three types of reconstruction. First, a systematic introduction and review of the achievements is provided, including the refined general mathematical framework and diagram for each of the three parts. Second, the applications in various areas (e.g., meteorology, ecology and environmental science) are introduced. Third, the challenges and recent advances of spatial-temporal-spectral information reconstruction are summarized, such as the efforts for dealing with abrupt land cover changes in spatial reconstruction, inconsistency in multi-scale data acquired by different sensors in temporal reconstruction, and point spread function (PSF) effect in spectral reconstruction. Finally, several thoughts are given for future prospects.
- Published
- 2023
- Full Text
- View/download PDF
38. Combining Gaussian Process Regression with Poisson Blending for Seamless Cloud Removal from Optical Remote Sensing Imagery for Cropland Monitoring.
- Author
-
Park, Soyeon and Park, No-Wook
- Subjects
- *
OPTICAL remote sensing , *KRIGING , *POISSON processes , *POISSON regression , *FARMS , *IMAGE reconstruction - Abstract
Constructing optical image time series for cropland monitoring requires a cloud removal method that accurately restores cloud regions and eliminates discontinuity around cloud boundaries. This paper describes a two-stage hybrid machine learning-based cloud removal method that combines Gaussian process regression (GPR)-based predictions with image blending for seamless optical image reconstruction. GPR is employed in the first stage to generate initial prediction results by quantifying temporal relationships between multi-temporal images. In particular, GPR predictive uncertainty is combined with the prediction values so that uncertainty-weighted predictions serve as the input for the next stage. In the second stage, Poisson blending is applied to eliminate discontinuity in the GPR-based predictions. The benefits of this method are illustrated through cloud removal experiments using Sentinel-2 images with synthetic cloud masks over two cropland sites. The proposed method was able to maintain the structural features and quality of the underlying reflectance in cloud regions and outperformed two existing hybrid cloud removal methods for all spectral bands. Furthermore, it demonstrated the best performance in predicting several vegetation indices in cloud regions. These experimental results indicate the benefits of the proposed cloud removal method for reconstructing cloud-contaminated optical imagery. [ABSTRACT FROM AUTHOR]
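The second-stage Poisson blending can be sketched as a guided Poisson solve in which the prediction (e.g. from GPR) supplies the gradient field and the clear-sky image supplies the boundary values. This minimal Jacobi-iteration version (function name and iteration scheme are illustrative assumptions, not the paper's implementation) shows the idea:

```python
import numpy as np

def poisson_blend(pred, base, mask, iters=800):
    """Insert `pred` into `base` within `mask` by solving the Poisson
    equation with Jacobi iterations: inside the mask the result keeps the
    gradients of `pred`, on the boundary it matches `base`, removing
    discontinuities at cloud edges."""
    f = np.where(mask, pred, base).astype(float)
    # Laplacian of the prediction acts as the guidance field
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (pred[:-2, 1:-1] + pred[2:, 1:-1] +
                       pred[1:-1, :-2] + pred[1:-1, 2:] - 4 * pred[1:-1, 1:-1])
    inner = mask.copy()
    inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = False
    for _ in range(iters):
        avg = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:])
        new = avg - 0.25 * lap[1:-1, 1:-1]
        f[1:-1, 1:-1] = np.where(inner[1:-1, 1:-1], new, f[1:-1, 1:-1])
    return f
```

If the prediction differs from the truth only by a constant offset inside the mask, the blend recovers the truth exactly, which is why seam artifacts at cloud boundaries disappear.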
- Published
- 2023
- Full Text
- View/download PDF
39. Cloud removal using SAR and optical images via attention mechanism-based GAN.
- Author
-
Zhang, Shuai, Li, Xiaodi, Zhou, Xingyu, Wang, Yuning, and Hu, Yue
- Subjects
- *
OPTICAL remote sensing , *OPTICAL images , *SYNTHETIC aperture radar , *GENERATIVE adversarial networks , *REMOTE sensing - Abstract
Clouds often appear in remote sensing images, seriously limiting their applications. Therefore, cloud removal is an important preprocessing step in remote sensing image applications. In this paper, we propose a generative adversarial network-based cloud removal method for optical remote sensing images with the assistance of synthetic aperture radar (SAR) images. Our model is an end-to-end model, which consists of a translation module, an attention module, a generator, and a discriminator. We introduce the attention mechanism to accurately locate the cloud regions. With the obtained attention maps as prior information, the proposed method can remove the clouds while preserving the cloud-free regions. In addition, we include the structural similarity index (SSIM) and an attention penalty in the loss function to improve the performance of the proposed method. Numerical experiments show that the proposed model provides improved cloud removal performance compared with state-of-the-art methods. • An end-to-end GAN-based network for thick cloud removal in optical images. • The SAR image is converted into an auxiliary optical image by the translation module. • An attention map is obtained to accurately locate regions contaminated by clouds. • The attention mechanism makes the generator focus on recovering the cloudy regions. • The proposal greatly improves cloud removal accuracy compared with competing methods. [ABSTRACT FROM AUTHOR]
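A minimal sketch of such a composite generator loss, with a simplified single-window SSIM and an L1 sparsity term standing in for the attention penalty (the paper's exact penalty is not specified here, so this form is an assumption):

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(fake, real, attn, adv, lam_ssim=1.0, lam_attn=0.1):
    """Composite loss: adversarial term + structural (SSIM) term +
    attention penalty (here an L1 sparsity term keeping the map compact)."""
    l_ssim = 1.0 - ssim(fake, real)
    l_attn = np.abs(attn).mean()
    return adv + lam_ssim * l_ssim + lam_attn * l_attn
```

In practice SSIM is computed over sliding windows; the global form above keeps the sketch short while preserving the structure of the objective.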
- Published
- 2023
- Full Text
- View/download PDF
40. Reconstruction of Snow Cover in Kaidu River Basin via Snow Grain Size Gap-Filling Based on Machine Learning.
- Author
-
Zhu, Linglong, Ma, Guangyi, Zhang, Yonghong, Wang, Jiangeng, and Kan, Xi
- Subjects
SNOW cover ,WATERSHEDS ,GRAIN size ,MACHINE learning ,WATER management ,CLOUDINESS - Abstract
Fine spatiotemporal resolution snow monitoring at the watershed scale is crucial for the management of snow water resources. This research proposes a cloud removal algorithm via snow grain size (SGS) gap-filling based on a space–time extra-trees model, which aims to address the issue of cloud occlusion that limits the coverage and time resolution of long time series snow products. To fully characterize the geomorphic characteristics and snow duration time of the Kaidu River Basin (KRB), we designed dimensional data that incorporate spatiotemporal information, combined with other geographic and snow phenological information, as input for estimating SGS. A spatiotemporal extra-trees model was constructed and trained to simulate the nonlinear mapping relationship between the multidimensional inputs and SGS. The estimation results of SGS can characterize the snow cover under clouds. This study found that when cloud cover is less than 70%, the model's estimation of SGS meets expectations and snow cover reconstruction achieves good results. In specific cloud removal cases, compared with traditional spatiotemporal filtering and multi-sensor fusion, the proposed method has better detail characterization ability and exhibits better performance in snow cover reconstruction and cloud removal in complex mountainous environments. Overall, from 2000 to 2020, 66.75% of snow products successfully removed cloud coverage. This resulted in a decrease in the annual average cloud coverage rate from 52.46% to 34.41% compared with the MOD10A1 snow product, and an increase in the snow coverage rate from 21.52% to 33.84%. This improvement greatly enhanced the time resolution of snow cover data without compromising the accuracy of snow identification. [ABSTRACT FROM AUTHOR]
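The gap-filling step can be sketched with scikit-learn's `ExtraTreesRegressor`: train on clear-sky pixels, then predict SGS under clouds. The feature layout and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fill_sgs_gaps(features, sgs, cloud_mask, n_estimators=100, random_state=0):
    """Train an extra-trees model on clear-sky pixels and predict snow grain
    size (SGS) for cloud-covered pixels.

    features:   (H, W, F) per-pixel predictors (e.g. x, y, day-of-year,
                elevation, snow-duration statistics)
    sgs:        (H, W) observed SGS, valid only where cloud_mask is False
    cloud_mask: (H, W) True where the pixel is cloud-contaminated
    """
    X = features.reshape(-1, features.shape[-1])
    y = sgs.ravel()
    clear = ~cloud_mask.ravel()
    model = ExtraTreesRegressor(n_estimators=n_estimators,
                                random_state=random_state)
    model.fit(X[clear], y[clear])
    filled = sgs.astype(float).copy()
    filled[cloud_mask] = model.predict(X[~clear])
    return filled
```

The SGS estimates under clouds then serve as the basis for reconstructing the snow cover map.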
- Published
- 2023
- Full Text
- View/download PDF
41. SatelliteCloudGenerator: Controllable Cloud and Shadow Synthesis for Multi-Spectral Optical Satellite Images.
- Author
-
Czerkawski, Mikolaj, Atkinson, Robert, Michie, Craig, and Tachtatzis, Christos
- Subjects
- *
REMOTE-sensing images , *OPTICAL images , *SURFACE of the earth , *MULTISPECTRAL imaging , *CLOUDINESS , *ARTIFICIAL satellites - Abstract
Optical satellite images of Earth frequently contain cloud cover and shadows. This requires processing pipelines to recognize the presence, location, and features of the cloud-affected regions. Models that make predictions about the ground behind the clouds face the challenge of lacking ground truth information, i.e., the exact state of Earth's surface. Currently, the solution is either (i) to create pairs from samples acquired at different times or (ii) to simulate cloudy data based on a clear acquisition. This work follows the second approach and proposes an open-source simulation tool capable of generating a diverse and unlimited number of high-quality simulated image pairs with controllable parameters to adjust cloud appearance, with no annotation cost. An indication of the quality and utility of the generated clouds is demonstrated by models for cloud detection and cloud removal trained exclusively on simulated data, which approach the performance of their equivalents trained on real data. [ABSTRACT FROM AUTHOR]
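A toy version of such a simulator: blend a smooth noise layer (cloud) and its spatially offset, attenuated copy (shadow) over a clear image. The noise model, parameters, and function name are assumptions; the actual tool offers far richer controls:

```python
import numpy as np

def simulate_clouds(clear, thickness=0.8, shadow_offset=(6, 6),
                    shadow_strength=0.4, seed=0):
    """Blend a synthetic cloud layer (and its offset shadow) over a clear
    2-D image in [0, 1]; returns the cloudy image and the cloud alpha mask."""
    rng = np.random.default_rng(seed)
    h, w = clear.shape
    # low-frequency noise: coarse random grid upsampled, then box-smoothed
    coarse = rng.random((h // 8 + 2, w // 8 + 2))
    alpha = np.kron(coarse, np.ones((8, 8)))[:h, :w]
    for _ in range(3):                       # cheap smoothing passes
        alpha = 0.25 * (np.roll(alpha, 1, 0) + np.roll(alpha, -1, 0) +
                        np.roll(alpha, 1, 1) + np.roll(alpha, -1, 1))
    alpha = np.clip((alpha - 0.5) * 2, 0, 1) * thickness
    shadow = np.roll(alpha, shadow_offset, axis=(0, 1)) * shadow_strength
    cloudy = clear * (1 - shadow)            # darken shadowed ground
    cloudy = cloudy * (1 - alpha) + alpha    # clouds rendered as white
    return np.clip(cloudy, 0, 1), alpha
```

Because the clear image is known, every simulated pair comes with exact ground truth for both the cloud mask and the ground beneath it.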
- Published
- 2023
- Full Text
- View/download PDF
42. Short Time Cloud-Free Image Reconstruction Based on Time Series Images
- Author
-
Guanhua Zhou, Chen Tian, Chunyue Niu, Guifei Jing, Haoyu Miao, and Zhifeng Li
- Subjects
Cloud detection ,cloud removal ,data fusion ,image reconstruction ,time series analysis ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
A cost-effective method for cloud removal and reconstruction of remote sensing images has been a long-standing research challenge in remote sensing data processing. In this article, we address this challenge by presenting a fast and simple method for cloud detection and cloud-free image reconstruction using Landsat-8 OLI and Sentinel-2 MSI series data. The proposed method utilizes the spectral difference between cloud pixels and transparent pixels to develop a fast and simple cloud detection algorithm for multispectral remote sensing sensors. Subsequently, the cloud-free data of the complementary image and the target image undergo histogram matching, and the cloud-free pixels of the complementary image are seamlessly integrated into the original image, generating a reconstructed cloud-free image. Comparative analysis between the results obtained from our proposed method and the corresponding manually produced reference results reveals an accuracy rate exceeding 90% and high consistency in the reconstructed spatial spectrum. By addressing the need for cost-effective cloud removal and image reconstruction, our method contributes to the advancement of remote sensing data processing and applications.
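The histogram-matching and compositing steps can be sketched as follows; matching the whole complementary image to the clear portion of the target is a simplification of the procedure described above, and the function names are illustrative:

```python
import numpy as np

def histogram_match(source, template):
    """Remap the values of `source` so its CDF matches that of `template`."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(), return_inverse=True,
                                     return_counts=True)
    t_vals, t_cnt = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    t_cdf = np.cumsum(t_cnt) / template.size
    mapped = np.interp(s_cdf, t_cdf, t_vals)   # CDF-to-CDF lookup
    return mapped[s_idx].reshape(source.shape)

def composite(target, complementary, cloud_mask):
    """Replace cloudy pixels of `target` with radiometrically adjusted
    pixels from the complementary acquisition."""
    adjusted = histogram_match(complementary, target[~cloud_mask])
    out = target.copy()
    out[cloud_mask] = adjusted[cloud_mask]
    return out
```

Histogram matching compensates for illumination and atmospheric differences between the two acquisition dates, which is what makes the pasted-in pixels blend seamlessly.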
- Published
- 2023
- Full Text
- View/download PDF
43. Cloud-EGAN: Rethinking CycleGAN From a Feature Enhancement Perspective for Cloud Removal by Combining CNN and Transformer
- Author
-
Xianping Ma, Yiming Huang, Xiaokang Zhang, Man-On Pun, and Bo Huang
- Subjects
Cloud removal ,cycle-consistent generative adversarial network (CycleGAN) ,feature enhancement ,remote sensing images ,transformer ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Cloud cover presents a major challenge for geoscience research on remote sensing images: thick clouds cause complete obstruction and information loss, while thin clouds blur the ground objects. Deep learning (DL) methods based on convolutional neural networks (CNNs) have recently been introduced to the cloud removal task. However, their performance is hindered by their weak capabilities in contextual information extraction and aggregation. Unfortunately, such capabilities play a vital role in characterizing remote sensing images with complex ground objects. In this work, the conventional cycle-consistent generative adversarial network (CycleGAN) is revitalized from a feature enhancement perspective. More specifically, a saliency enhancement (SE) module is first designed to replace the original CNN module in CycleGAN, re-calibrating channel attention weights to capture detailed information in multi-level feature maps. Furthermore, a high-level feature enhancement (HFE) module is developed to generate contextualized cloud-free features while suppressing cloud components. In particular, HFE is composed of both CNN- and transformer-based modules. The former enhances the local high-level features by employing residual learning and multi-scale strategies, while the latter captures long-range contextual dependencies with the Swin transformer module to exploit high-level information from a global perspective. Capitalizing on the SE and HFE modules, an effective Cloud-Enhancement GAN, namely Cloud-EGAN, is proposed to accomplish thin and thick cloud removal tasks. Extensive experiments on the RICE and WHUS2-CR datasets confirm the impressive performance of Cloud-EGAN.
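Channel re-calibration of the kind the SE module performs follows the squeeze-and-excitation pattern: pool each channel to a scalar, pass through a small bottleneck, and gate the channels with sigmoid weights. A numpy sketch (weight shapes, reduction ratio, and function name are assumptions):

```python
import numpy as np

def saliency_enhancement(features, w1, w2):
    """Squeeze-and-excitation style channel re-calibration.

    features: (C, H, W) feature maps
    w1:       (C // r, C) bottleneck weights (reduction ratio r)
    w2:       (C, C // r) expansion weights
    """
    squeeze = features.mean(axis=(1, 2))             # global average pool, (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gates, (C,)
    return features * gates[:, None, None]           # re-weight each channel
```

Learned gates near 1 preserve informative channels while gates near 0 suppress cloud-dominated ones.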
- Published
- 2023
- Full Text
- View/download PDF
44. Multidiscriminator Supervision-Based Dual-Stream Interactive Network for High-Fidelity Cloud Removal on Multitemporal SAR and Optical Images.
- Author
-
Wang, Zhenfei, Liu, Qiang, Meng, Xiangchao, and Jin, Wei
- Abstract
Optical remote sensing images have the advantages of clear visual characteristics and strong interpretability. Unfortunately, cloud coverage limits the quality and availability of optical images in practical applications. In contrast, synthetic aperture radar (SAR) images provide all-day and all-weather imaging, which can serve as effective auxiliary information for cloud removal. Existing cloud removal methods struggle to obtain high-fidelity cloud-free results due to insufficient exploration of spectral and spatial information in multitemporal SAR and optical images. In this letter, we propose a multidiscriminator supervision-based dual-stream interactive network (MDS-DIN) for cloud removal. Specifically, we first design a dual-stream interactive learning module to take full advantage of the complementary information between multitemporal SAR and optical images. Moreover, we specially design an adaptive weight fusion module (AWFM) to adaptively allocate fusion weights to the dual-stream results by considering the discriminative features at the spectral and spatial levels. In addition, multiple discriminators are used to jointly optimize the overall network for high-fidelity cloud removal. Experiments on simulated and real datasets demonstrate the competitive performance of our proposed method. [ABSTRACT FROM AUTHOR]
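The adaptive weight fusion idea, per-pixel softmax weights over the two streams, can be sketched as below; the per-pixel score inputs are assumed to come from the discriminative-feature branches, and the function name is illustrative:

```python
import numpy as np

def adaptive_weight_fusion(stream_a, stream_b, score_a, score_b):
    """Fuse dual-stream reconstructions with per-pixel softmax weights
    derived from per-stream quality scores (numerically stable softmax)."""
    m = np.maximum(score_a, score_b)
    ea, eb = np.exp(score_a - m), np.exp(score_b - m)
    wa = ea / (ea + eb)                      # weight of stream A per pixel
    return wa * stream_a + (1 - wa) * stream_b
```

Wherever one stream scores much higher, its reconstruction dominates the fused output; equal scores yield a plain average.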
- Published
- 2023
- Full Text
- View/download PDF
45. Edge-SAR-Assisted Multimodal Fusion for Enhanced Cloud Removal.
- Author
-
Wen, Zhenyu, Suo, Jiahui, Su, Jie, Li, Bingning, and Zhou, Yejian
- Abstract
In Earth observation activities, clouds severely affect the interpretation of high-resolution imagery generated by optical satellites. Therefore, removing clouds from optical imagery has become a topic of interest in the remote sensing field. Currently, most methods use auxiliary synthetic aperture radar (SAR) images to reconstruct optical images by merging SAR and optical images in a deep learning network. However, the speckle noise of the SAR image is not taken into consideration during feature fusion, leading to blurry edges in the reconstructed optical images. To obtain fine-grained optical images, we propose a novel cloud removal framework based on the edge fusion of SAR and optical images. First, the edge features of SAR images are extracted by GRHED. As prior knowledge, they provide fine-grained edge information for the subsequent reconstruction. Then, channels from the three modalities are stacked to guide the reconstruction of optical images by exploiting their correlations and interactions. Furthermore, a structural similarity (SSIM) loss function is introduced to optimize the training network and improve the coherence of the image structure. Experimental results confirm its advantages on the SEN12MS-CR dataset. [ABSTRACT FROM AUTHOR]
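A sketch of the edge-assisted fusion input: a gradient-magnitude edge map (a simple Sobel stand-in for GRHED, which is a learned edge detector) stacked channel-wise with the optical and SAR data. Function names and the channel layout are illustrative assumptions:

```python
import numpy as np

def sobel_edges(sar):
    """Gradient-magnitude edge map of a 2-D SAR image (Sobel operator)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(sar, 1, mode="edge")
    h, w = sar.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                       # small explicit correlation
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def stack_modalities(optical, sar, edges):
    """Stack optical bands, SAR backscatter, and the edge map channel-wise
    as joint input for the reconstruction network."""
    return np.concatenate([optical, sar[None], edges[None]], axis=0)
```

Supplying edges as an explicit channel keeps boundary information available to the network even where speckle degrades the raw SAR backscatter.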
- Published
- 2023
- Full Text
- View/download PDF
46. Cloud Removal in Remote Sensing Using Sequential-Based Diffusion Models.
- Author
-
Zhao, Xiaohu and Jia, Kebin
- Subjects
- *
SPACE-based radar , *PROBABILISTIC generative models , *SYNTHETIC aperture radar , *REMOTE-sensing images , *REMOTE sensing - Abstract
The majority of the optical observations collected via spaceborne optical satellites are corrupted by clouds or haze, restricting further Earth observation applications; thus, finding an effective method for cloud removal is of great concern. In this paper, we propose a novel probabilistic generative model named sequential-based diffusion models (SeqDMs) for the cloud-removal task in the remote sensing domain. The proposed method consists of multi-modal diffusion models (MmDMs) and a sequential-based training and inference strategy (SeqTIS). In particular, MmDMs is a novel diffusion model that reconstructs the reverse process of denoising diffusion probabilistic models (DDPMs) to integrate additional information from auxiliary modalities (e.g., synthetic aperture radar, which is robust to the corruption of clouds) to help the distribution learning of the main modality (i.e., optical satellite imagery). In order to consider information across time, SeqTIS is designed to integrate temporal information across an arbitrary length of both the main-modality and auxiliary-modality input sequences without retraining the model. With the help of MmDMs and SeqTIS, SeqDMs have the flexibility to handle input sequences of arbitrary length, producing significant improvements with only one or two additional input samples and greatly reducing the time cost of model retraining. We evaluate our method on the public real-world dataset SEN12MS-CR-TS for a multi-modal and multi-temporal cloud-removal task. Our extensive experiments and ablation studies demonstrate the superiority of the proposed method in the quality of the reconstructed samples and the flexibility to handle arbitrary-length sequences over multiple state-of-the-art cloud removal approaches. [ABSTRACT FROM AUTHOR]
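The core of a multi-modal DDPM reverse process, a denoising step whose noise predictor is conditioned on the auxiliary SAR modality, can be sketched as follows; the noise schedule and the `eps_model` signature are illustrative assumptions, not the SeqDMs implementation:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_model, sar, betas, rng):
    """One reverse-diffusion step x_t -> x_{t-1}.

    eps_model(x_t, t, sar) predicts the noise added at step t; conditioning
    it on the SAR modality injects cloud-robust structural guidance."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps = eps_model(x_t, t, sar)                     # predicted noise
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean                                  # final step: no noise
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
```

Running this step from t = T-1 down to 0, starting from Gaussian noise, samples a cloud-free optical image consistent with the SAR conditioning.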
- Published
- 2023
- Full Text
- View/download PDF
47. Cloud Removal and Satellite Image Reconstruction Using Deep Learning Based Image Inpainting Approaches
- Author
-
Saxena, Jaya, Jain, Anubha, Krishna, P. Radha, Bothale, Rajashree V., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Rathore, Vijay Singh, editor, Sharma, Subhash Chander, editor, Tavares, Joao Manuel R.S., editor, Moreira, Catarina, editor, and Surendiran, B., editor
- Published
- 2022
- Full Text
- View/download PDF
48. Deep Learning for Satellite Image Reconstruction
- Author
-
Saxena, Jaya, Jain, Anubha, Krishna, P. Radha, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Dua, Mohit, editor, Jain, Ankit Kumar, editor, Yadav, Anupam, editor, Kumar, Nitin, editor, and Siarry, Patrick, editor
- Published
- 2022
- Full Text
- View/download PDF
49. Deep Learning-Based Approach for Satellite Image Reconstruction Using Handcrafted Prior
- Author
-
Saxena, Jaya, Jain, Anubha, Radha Krishna, Pisipati, Xhafa, Fatos, Series Editor, Smys, S., editor, Bestak, Robert, editor, Palanisamy, Ram, editor, and Kotuliak, Ivan, editor
- Published
- 2022
- Full Text
- View/download PDF
50. Denoising Diffusion Probabilistic Feature-Based Network for Cloud Removal in Sentinel-2 Imagery.
- Author
-
Jing, Ran, Duan, Fuzhou, Lu, Fengxian, Zhang, Miao, and Zhao, Wenji
- Subjects
- *
REMOTE-sensing images , *OPTICAL images , *DEEP learning , *OPTICAL remote sensing , *NETWORK performance , *REMOTE sensing , *INFORMATION retrieval - Abstract
Cloud contamination is a common issue that severely reduces the quality of optical satellite images in remote sensing fields. With the rapid development of deep learning technology, cloud contamination is expected to be addressed. In this paper, we propose Denoising Diffusion Probabilistic Model-Cloud Removal (DDPM-CR), a novel cloud removal network that can effectively remove both thin and thick clouds in optical image scenes. Our network leverages the denoising diffusion probabilistic model (DDPM) architecture to integrate both clouded optical and auxiliary SAR images as input to extract DDPM features, providing significant information for missing information retrieval. Additionally, we propose a cloud removal head adopting the DDPM features with an attention mechanism at multiple scales to remove clouds. To achieve better network performance, we propose a cloud-oriented loss that considers both high- and low-frequency image information as well as cloud regions in the training procedure. Our ablation and comparative experiments demonstrate that the DDPM-CR network outperforms other methods under various cloud conditions, achieving better visual effects and accuracy metrics (MAE = 0.0229, RMSE = 0.0268, PSNR = 31.7712, and SSIM = 0.9033). These results suggest that the DDPM-CR network is a promising solution for retrieving missing information in either thin or thick cloud-covered regions, especially when using auxiliary information such as SAR data. [ABSTRACT FROM AUTHOR]
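A loss of the kind described, combining low-frequency (pixel) error, high-frequency (gradient) error, and extra weight on cloud regions, might look like the following sketch; the gradient-based high-frequency term, the weighting scheme, and the function name are assumptions, not the paper's formulation:

```python
import numpy as np

def cloud_oriented_loss(pred, target, cloud_mask, lam_high=1.0, lam_cloud=2.0):
    """Weighted sum of pixel-wise L1 error (low frequency) and gradient L1
    error (high frequency), up-weighted inside cloud regions."""
    low = np.abs(pred - target)
    gy_p, gx_p = np.gradient(pred)
    gy_t, gx_t = np.gradient(target)
    high = np.abs(gy_p - gy_t) + np.abs(gx_p - gx_t)
    weight = 1.0 + lam_cloud * cloud_mask.astype(float)
    return float(((low + lam_high * high) * weight).mean())
```

Up-weighting the cloud regions focuses optimization on the pixels that actually need to be restored, while the gradient term penalizes blurred reconstructions.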
- Published
- 2023
- Full Text
- View/download PDF