438 results for "Chanussot, Jocelyn"
Search Results
2. Small Object-Aware Video Coding for Machines via Feature-Motion Synergy.
- Author
-
Xu, Qihan, Xi, Bobo, Xu, Haitao, Huang, Yun, Li, Yunsong, and Chanussot, Jocelyn
- Abstract
Video coding for machines (VCM) is a rapidly growing field dedicated to bridging the gap between video and feature coding. For storage-intensive aerial videos, VCM offers valuable insights into a more efficient coding paradigm. However, the frequent occurrence of small objects poses a challenge to VCM, with limited distinctive features and inherent distortion in the reconstructed videos. To address this issue, we propose small object-aware VCM (SOAVCM), a joint video and feature coding approach that handles small objects. Particularly, the video coding incorporates a feature-guided residual (FGR) codec to preserve the small objects, utilizing features obtained from feature coding. Simultaneously, feature coding employs the motion vector (MV) estimated in video coding to generate compact high-level features. By leveraging the inherent synergy between features and MVs, SOAVCM significantly enhances overall coding efficiency. Experimental results demonstrate that SOAVCM outperforms several deep-learning-based methods and traditional coding standards in video coding. Moreover, the encoded feature representation improves detection accuracy and achieves substantial bitrate savings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Deep Learning of Radiometrical and Geometrical SAR Distortions for Image Modality Translations
- Author
-
Bralet, Antoine, primary, Atto, Abdourrahmane M., additional, Chanussot, Jocelyn, additional, and Trouvé, Emmanuel, additional
- Published
- 2022
- Full Text
- View/download PDF
4. SUnAA: Sparse Unmixing Using Archetypal Analysis.
- Author
-
Rasti, Behnood, Zouaoui, Alexandre, Mairal, Julien, and Chanussot, Jocelyn
- Abstract
This letter introduces a new sparse unmixing technique using archetypal analysis (SUnAA). First, we design a new model based on archetypal analysis (AA). We assume that the endmembers of interest are a convex combination of endmembers provided by a spectral library and that the number of endmembers of interest is known. Then, we propose a minimization problem. Unlike most conventional sparse unmixing methods, here the minimization problem is nonconvex. We minimize the optimization objective iteratively using an active set algorithm. Our method is robust to the initialization and only requires the number of endmembers of interest. SUnAA is evaluated using two simulated datasets, for which results confirm its better performance over other conventional and advanced techniques in terms of signal-to-reconstruction error (SRE). SUnAA is also applied to the Cuprite dataset, and the results are compared visually with the available geological map provided for this dataset. The qualitative assessment demonstrates successful estimation of the mineral abundances and a significant improvement in the detection of dominant minerals compared to conventional regression-based sparse unmixing methods. The Python implementation of SUnAA can be found at: https://github.com/BehnoodRasti/SUnAA. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
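The core model in the SUnAA abstract above — endmembers of interest expressed as convex combinations of library spectra, with nonnegative sum-to-one abundances — can be illustrated with a toy projected-gradient sketch. This is a hedged illustration of the model only, not the authors' active-set solver (see their repository for the actual implementation); all sizes and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def project_simplex_cols(V):
    # Euclidean projection of each column onto {x >= 0, sum(x) = 1}
    n, k = V.shape
    U = np.sort(V, axis=0)[::-1]               # descending per column
    css = np.cumsum(U, axis=0) - 1.0
    ind = np.arange(1, n + 1)[:, None]
    cond = U - css / ind > 0
    rho = n - 1 - np.argmax(cond[::-1], axis=0)  # last index where cond holds
    theta = css[rho, np.arange(k)] / (rho + 1)
    return np.maximum(V - theta, 0.0)

# Toy data: library D (bands x m), p endmembers of interest, n pixels
bands, m, p, n = 50, 12, 3, 100
D = rng.random((bands, m))
B = project_simplex_cols(rng.random((m, p)))   # endmembers = D @ B (convex combos)
A = project_simplex_cols(rng.random((p, n)))   # abundances, sum-to-one per pixel
Y = D @ project_simplex_cols(rng.random((m, p))) @ project_simplex_cols(rng.random((p, n)))

err0 = np.linalg.norm(Y - D @ B @ A)
for _ in range(200):
    E = D @ B                                   # current endmembers
    A = project_simplex_cols(A - (E.T @ (E @ A - Y)) / (np.linalg.norm(E, 2) ** 2))
    R = D @ B @ A - Y
    B = project_simplex_cols(B - (D.T @ R @ A.T) /
                             (np.linalg.norm(D, 2) ** 2 * np.linalg.norm(A, 2) ** 2 + 1e-12))
err = np.linalg.norm(Y - D @ B @ A)
```

The projection keeps both the mixing weights `B` and the abundances `A` on the probability simplex, which is what makes the estimated endmembers convex combinations of the library.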
5. Remote Sensing Image Fusion With Task-Inspired Multiscale Nonlocal-Attention Network.
- Author
-
Liu, Na, Li, Wei, Sun, Xian, Tao, Ran, and Chanussot, Jocelyn
- Abstract
Recently, convolutional neural networks (CNNs) have been developed for remote sensing image fusion (RSIF). To obtain competitive fusion performance, network design becomes more complicated by stacking convolutional layers deeper and wider. However, problems still remain when applying the existing networks in practical applications. On the one hand, researchers focus on improving spatial resolution but ignore that the fused images will be used in subsequent interpretation applications, e.g., object detection. On the other hand, RSIF involves different tasks with different image sources, e.g., pansharpening of the panchromatic and multispectral image (MSI), hypersharpening of the panchromatic and hyperspectral image (HSI), and so on. However, the existing networks only solve one of them, failing to be compatible with other tasks. To address the above problems, a convenient task-inspired multiscale nonlocal-attention network (MNAN) is proposed for RSIF. The proposed MNAN focuses more on enhancing the multiscale targets in the scene when improving the resolution of the fused image. In addition, the proposed network can be applied to both pansharpening and hypersharpening tasks without any modification. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Self Supervised Learning for Few Shot Hyperspectral Image Classification
- Author
-
Ait Ali Braham, Nassim, Mou, Lichao, Chanussot, Jocelyn, Mairal, Julien, and Zhu, Xiao Xiang
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Deep Learning, ComputingMethodologies_PATTERNRECOGNITION, Computer Vision and Pattern Recognition (cs.CV), Self Supervised Learning, Computer Science - Computer Vision and Pattern Recognition, Hyperspectral Image classification, Machine Learning (cs.LG)
- Abstract
Deep learning has proven to be a very effective approach for Hyperspectral Image (HSI) classification. However, deep neural networks require large annotated datasets to generalize well. This limits the applicability of deep learning for HSI classification, where manually labelling thousands of pixels for every scene is impractical. In this paper, we propose to leverage Self Supervised Learning (SSL) for HSI classification. We show that by pre-training an encoder on unlabeled pixels using Barlow-Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with a handful of labels. Experimental results demonstrate that this approach significantly outperforms vanilla supervised learning. (Comment: Accepted at IGARSS 2022)
- Published
- 2022
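The Barlow-Twins objective used for pre-training in the entry above drives the cross-correlation matrix of two augmented views toward the identity: diagonal terms toward 1 (invariance), off-diagonal toward 0 (redundancy reduction). A minimal NumPy sketch of the loss alone — the encoder, augmentations, and λ value here are generic assumptions, not the paper's configuration:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins loss on two batches of embeddings (batch x dim)."""
    # standardize each embedding dimension over the batch
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / z1.shape[0]                   # cross-correlation (dim x dim)
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()     # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag
```

During pre-training this quantity is minimized over the encoder parameters; two perfectly correlated views give a near-zero loss, while unrelated views are heavily penalized on the diagonal.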
7. Spectral-Spatial Transformer for Hyperspectral Image Sharpening
- Author
-
Chen, Lihui, primary, Vivone, Gemine, additional, Qin, Jiayi, additional, Chanussot, Jocelyn, additional, and Yang, Xiaomin, additional
- Published
- 2022
- Full Text
- View/download PDF
8. Self Supervised Learning for Few Shot Hyperspectral Image Classification
- Author
-
Braham, Nassim Ait Ali, primary, Mou, Lichao, additional, Chanussot, Jocelyn, additional, Mairal, Julien, additional, and Zhu, Xiao Xiang, additional
- Published
- 2022
- Full Text
- View/download PDF
9. Multimodal Hyperspectral Unmixing via Attention Networks
- Author
-
Han, Zhu, primary, Hong, Danfeng, additional, Gao, Lianru, additional, Yao, Jing, additional, Zhang, Bing, additional, and Chanussot, Jocelyn, additional
- Published
- 2022
- Full Text
- View/download PDF
10. Deep Blind Unmixing using Minimum Simplex Convolutional Network
- Author
-
Rasti, Behnood, primary, Koirala, Bikram, additional, Scheunders, Paul, additional, and Chanussot, Jocelyn, additional
- Published
- 2022
- Full Text
- View/download PDF
11. Enhanced Single-Shot Detector for Small Object Detection in Remote Sensing Images
- Author
-
Shamsolmoali, Pourya, primary, Zareapoor, Masoumeh, additional, Yang, Jie, additional, Granger, Eric, additional, and Chanussot, Jocelyn, additional
- Published
- 2022
- Full Text
- View/download PDF
12. Multimodal Remote Sensing Benchmark Datasets for Land Cover Classification
- Author
-
Yao, Jing, primary, Hong, Danfeng, additional, Gao, Lianru, additional, and Chanussot, Jocelyn, additional
- Published
- 2022
- Full Text
- View/download PDF
13. Robust Linear Unmixing for Hyperspectral Remote Sensing Imagery Based on Enhanced Constraint of Classification
- Author
-
Chi, Jinxue, primary, Shen, Xueji, additional, Yu, Haoyang, additional, Shang, Xiaodi, additional, Chanussot, Jocelyn, additional, and Shi, Yimin, additional
- Published
- 2022
- Full Text
- View/download PDF
14. UIU-Net: U-Net in U-Net for Infrared Small Object Detection.
- Author
-
Wu, Xin, Hong, Danfeng, and Chanussot, Jocelyn
- Subjects
OBJECT recognition (Computer vision), INFRARED imaging, MINIATURE objects, FEATURE extraction, GLOBAL method of teaching, SOFTWARE maintenance
- Abstract
Learning-based infrared small object detection methods currently rely heavily on the classification backbone network. This tends to result in the loss of tiny objects and limited feature distinguishability as the network depth increases. Furthermore, small objects in infrared images frequently appear both bright and dark, posing severe demands on obtaining precise object contrast information. For this reason, in this paper we propose a simple and effective “U-Net in U-Net” framework, UIU-Net for short, to detect small objects in infrared images. As the name suggests, UIU-Net embeds a tiny U-Net into a larger U-Net backbone, enabling multi-level and multi-scale representation learning of objects. Moreover, UIU-Net can be trained from scratch, and the learned features can enhance global and local contrast information effectively. More specifically, the UIU-Net model is divided into two modules: the resolution-maintenance deep supervision (RM-DS) module and the interactive-cross attention (IC-A) module. RM-DS integrates Residual U-blocks into a deep supervision network to generate deep multi-scale resolution-maintenance features while learning global context information. Further, IC-A encodes the local context information between the low-level details and high-level semantic features. Extensive experiments conducted on two infrared single-frame image datasets, i.e., the SIRST and Synthetic datasets, show the effectiveness and superiority of the proposed UIU-Net in comparison with several state-of-the-art infrared small object detection methods. The proposed UIU-Net also generalizes well to video-sequence infrared small object datasets, e.g., the ATR ground/air video sequence dataset. The codes of this work are available openly at https://github.com/danfenghong/IEEE [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Multimodal Convolutional Neural Networks with Cross-Channel Reconstruction
- Author
-
Hong, Danfeng, primary, Wu, Xin, additional, Yao, Jing, additional, Gao, Lianru, additional, Zhang, Bing, additional, and Chanussot, Jocelyn, additional
- Published
- 2021
- Full Text
- View/download PDF
16. EvoNAS: Evolvable Neural Architecture Search for Hyperspectral Unmixing
- Author
-
Han, Zhu, primary, Hong, Danfeng, additional, Gao, Lianru, additional, Chanussot, Jocelyn, additional, and Zhang, Bing, additional
- Published
- 2021
- Full Text
- View/download PDF
17. On Hyperspectral Super-Resolution
- Author
-
Chanussot, Jocelyn, primary
- Published
- 2021
- Full Text
- View/download PDF
18. Wavelet-Based Block Low-Rank Representations for Hyperspectral Denoising
- Author
-
Zhao, Bin, primary, Sveinsson, Johannes R., additional, Ulfarsson, Magnus O., additional, and Chanussot, Jocelyn, additional
- Published
- 2021
- Full Text
- View/download PDF
19. Non-Local Means Low-Rank Approximation for Hyperspectral Denoising
- Author
-
Zhao, Bin, primary, Sveinsson, Johannes R., additional, Ulfarsson, Magnus O., additional, and Chanussot, Jocelyn, additional
- Published
- 2021
- Full Text
- View/download PDF
20. An Overview of Multimodal Remote Sensing Data Fusion: From Image to Feature, From Shallow to Deep
- Author
-
Hong, Danfeng, primary, Chanussot, Jocelyn, additional, and Zhu, Xiao Xiang, additional
- Published
- 2021
- Full Text
- View/download PDF
21. Hyperspectral Image Super-Resolution via Deep Spatiospectral Attention Convolutional Neural Networks.
- Author
-
Hu, Jin-Fan, Huang, Ting-Zhu, Deng, Liang-Jian, Jiang, Tai-Xiang, Vivone, Gemine, and Chanussot, Jocelyn
- Subjects
CONVOLUTIONAL neural networks, HIGH resolution imaging, DEEP learning, MULTISPECTRAL imaging, SPATIAL resolution, ERROR functions
- Abstract
Hyperspectral images (HSIs) are of crucial importance for better understanding features from a large number of spectral channels. Restricted by the inner imaging mechanism, the spatial resolution of HSIs is often limited. To alleviate this issue, in this work, we propose a simple and efficient architecture of deep convolutional neural networks to fuse a low-resolution HSI (LR-HSI) and a high-resolution multispectral image (HR-MSI), yielding a high-resolution HSI (HR-HSI). The network is designed to preserve both spatial and spectral information thanks to a new architecture based on: 1) the use of the LR-HSI at the HR-MSI’s scale to get an output with satisfactory spectral preservation and 2) the application of the attention and pixelShuffle modules to extract information, aiming to output high-quality spatial details. Finally, a plain mean squared error loss function is used to measure the performance during the training. Extensive experiments demonstrate that the proposed network architecture achieves the best performance (both qualitatively and quantitatively) compared with recent state-of-the-art HSI super-resolution approaches. Moreover, other significant advantages can be pointed out by the use of the proposed approach, such as a better network generalization ability, a limited computational burden, and robustness with respect to the number of training samples. Please find the source code and pretrained models at https://liangjiandeng.github.io/Projects_Res/HSRnet_2021tnnls.html. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
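The pixelShuffle module mentioned in the abstract above performs sub-pixel convolution upsampling: feature channels are rearranged into spatial positions, turning a (C·r², H, W) tensor into (C, H·r, W·r). A minimal NumPy sketch of the rearrangement for a single image (the learned convolution that normally precedes it is omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) -> (C, H*r, W*r), as in sub-pixel convolution."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r): interleave sub-pixels
    return x.reshape(c, h * r, w * r)
```

This matches the channel-to-space layout used by common deep learning frameworks, so a 4-channel 1x1 input with r=2 becomes a single 2x2 map.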
22. Unsupervised Outlier Detection Using Memory and Contrastive Learning.
- Author
-
Huyan, Ning, Quan, Dou, Zhang, Xiangrong, Liang, Xuefeng, Chanussot, Jocelyn, and Jiao, Licheng
- Subjects
OUTLIER detection, DEEP learning, LEARNING modules, MEMORY, LEARNING, IMAGE reconstruction, FEATURE extraction
- Abstract
Outlier detection aims to separate anomalous data from inliers in a dataset. Recently, most deep learning methods for outlier detection have leveraged an auxiliary reconstruction task by assuming that outliers are more difficult to recover than normal samples (inliers). However, this is not always true in deep auto-encoder (AE) based models. AE-based detectors may recover certain outliers even if outliers are not in the training data, because they do not constrain the feature learning. Instead, we think outlier detection can be done in the feature space by measuring the distance between outliers’ features and the consistency feature of inliers. To achieve this, we propose an unsupervised outlier detection method using a memory module and a contrastive learning module (MCOD). The memory module constrains the consistency of features, which merely represent the normal data. The contrastive learning module learns more discriminative features, which boosts the distinction between outliers and inliers. Extensive experiments on four benchmark datasets show that our proposed MCOD performs well and outperforms eleven state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Few-Shot Learning With Class-Covariance Metric for Hyperspectral Image Classification.
- Author
-
Xi, Bobo, Li, Jiaojiao, Li, Yunsong, Song, Rui, Hong, Danfeng, and Chanussot, Jocelyn
- Subjects
IMAGE recognition (Computer vision), FEATURE extraction, MACHINE learning, EUCLIDEAN metric, DEEP learning, COMPUTATIONAL complexity, MATHEMATICAL convolutions
- Abstract
Recently, embedding and metric-based few-shot learning (FSL) has been introduced into hyperspectral image classification (HSIC) and achieved impressive progress. To further enhance the performance with few labeled samples, in this paper we propose a novel FSL framework for HSIC with a class-covariance metric (CMFSL). Overall, the CMFSL learns global class representations for each training episode by interactively using training samples from the base and novel classes, and a synthesis strategy is employed on the novel classes to avoid overfitting. During the meta-training and meta-testing, the class labels are determined directly using the Mahalanobis distance measurement rather than an extra classifier. Benefiting from the task-adapted class-covariance estimations, the CMFSL can construct more flexible decision boundaries than the commonly used Euclidean metric. Additionally, a lightweight cross-scale convolutional network (LXConvNet) consisting of 3D and 2D convolutions is designed to thoroughly exploit the spectral-spatial information in the high-frequency and low-frequency scales with low computational complexity. Furthermore, we devise a spectral-prior-based refinement module (SPRM) in the initial stage of feature extraction, which not only forces the network to emphasize the most informative bands while suppressing the useless ones, but also alleviates the effects of the domain shift between the base and novel categories to learn a collaborative embedding mapping. Extensive experimental results on four benchmark data sets demonstrate that the proposed CMFSL can outperform the state-of-the-art methods with few-shot annotated samples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
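The classification step in the abstract above — labels assigned by Mahalanobis distance to class representations under an estimated covariance, with no extra classifier — can be sketched generically. The toy episode, pooled-covariance estimate, and regularization below are illustrative assumptions, not the CMFSL pipeline:

```python
import numpy as np

def mahalanobis_classify(X, means, cov):
    """Assign each row of X to the class whose mean is nearest in
    Mahalanobis distance under a shared covariance estimate."""
    icov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized inverse
    dists = np.stack([np.einsum('nd,de,ne->n', X - mu, icov, X - mu)
                      for mu in means])
    return dists.argmin(axis=0)

# Toy episode: two classes sharing a correlated covariance
rng = np.random.default_rng(1)
L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
support0 = rng.standard_normal((50, 2)) @ L.T
support1 = rng.standard_normal((50, 2)) @ L.T + 5.0
means = np.stack([support0.mean(0), support1.mean(0)])
pooled = np.cov(np.vstack([support0 - means[0], support1 - means[1]]).T)
queries = np.vstack([rng.standard_normal((20, 2)) @ L.T,
                     rng.standard_normal((20, 2)) @ L.T + 5.0])
labels = np.repeat([0, 1], 20)
accuracy = (mahalanobis_classify(queries, means, pooled) == labels).mean()
```

Because the covariance is estimated from the task's own support samples, the decision boundary adapts to correlated feature dimensions, which a plain Euclidean metric ignores.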
24. Variable Subpixel Convolution Based Arbitrary-Resolution Hyperspectral Pansharpening.
- Author
-
He, Lin, Xie, Jinhua, Li, Jun, Plaza, Antonio, Chanussot, Jocelyn, and Zhu, Jiawei
- Subjects
CONVOLUTIONAL neural networks, SPATIAL resolution, IMAGE registration, MODELS & modelmaking
- Abstract
Standard hyperspectral (HS) pansharpening relies on fusion to enhance low-resolution HS (LRHS) images to the resolution of their matching panchromatic (PAN) images, and its practical implementation normally stipulates scale invariance of the model across the training and pansharpening phases. By contrast, arbitrary-resolution HS (ARHS) pansharpening seeks to pansharpen LRHS images to any user-customized resolution. For such a new HS pansharpening task, it is not feasible to train and store convolutional neural network (CNN) models for all possible candidate scales, which implies that the single model acquired from the training phase should be capable of being generalized to yield HS images at any resolution in the pansharpening phase. To address the challenge, a novel variable subpixel convolution (VSPC)-based CNN (VSPC-CNN) method following our arbitrary upsampling CNN (AU-CNN) framework is developed for ARHS pansharpening. The VSPC-CNN method comprises a two-stage enhancement pipeline. The first stage improves the spatial resolution of the input HS image to that of the PAN image through a prepansharpening module; then, a VSPC-encapsulated arbitrary scale attention upsampling (ASAU) module is cascaded for arbitrary resolution adjustment. After training with given scales, the model can be generalized to pansharpen HS images to arbitrary scales under spatial-pattern invariance across the training and pansharpening phases. Experimental results from several specific VSPC-CNNs on both simulated and real HS datasets show the superiority of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. AutoNAS: Automatic Neural Architecture Search for Hyperspectral Unmixing.
- Author
-
Han, Zhu, Hong, Danfeng, Gao, Lianru, Zhang, Bing, Huang, Min, and Chanussot, Jocelyn
- Subjects
EVOLUTIONARY algorithms, AFFINE transformations, WEIGHT training, DEEP learning, COMPUTER architecture
- Abstract
Due to the powerful and automatic representation capabilities, deep learning (DL) techniques have made significant breakthroughs and progress in hyperspectral unmixing (HU). Among the DL approaches, autoencoders (AEs) have become a widely used and promising network architecture. However, these AE-based methods heavily rely on manual design and may not be a good fit for specific datasets. To unmix hyperspectral images more intelligently, we propose an automatic neural architecture search model for HU, AutoNAS for short, to determine the optimal network architecture by considering channel configurations and convolution kernels simultaneously. In AutoNAS, the self-supervised training mechanism based on hyperspectral images is first designed for generating the training samples of the supernet. Then, the affine parameter sharing strategy is adopted by applying different affine transformations on the supernet weights in the training phase, which enables finding the optimal channel configuration. Furthermore, on the basis of the obtained channel configuration, the evolutionary algorithm with additional computational constraints is introduced into networks to achieve flexible convolution kernel search by evaluating unmixing results of different architectures in the supernet. Extensive experiments conducted on four hyperspectral datasets demonstrate the effectiveness and superiority of the proposed AutoNAS in comparison with several state-of-the-art unmixing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Cluster-Memory Augmented Deep Autoencoder via Optimal Transportation for Hyperspectral Anomaly Detection.
- Author
-
Huyan, Ning, Zhang, Xiangrong, Quan, Dou, Chanussot, Jocelyn, and Jiao, Licheng
- Subjects
ANOMALY detection (Computer security), MNEMONICS, PROBLEM solving, DETECTORS, IMAGE reconstruction
- Abstract
Hyperspectral anomaly detection (AD) aims to detect objects significantly different from their surrounding background. Recently, many detectors based on autoencoders (AEs) have exhibited promising performance in hyperspectral AD tasks. However, the fundamental hypothesis of the AE-based detector, that an anomaly is more challenging to reconstruct than the background, may not always be true in practice. We demonstrate that an AE can reconstruct anomalies well even without anomalies in the training data, because AE models mainly focus on the quality of sample reconstruction and do not care whether the encoded features solely represent the background rather than anomalies. If more information is preserved than needed to reconstruct the background, the anomalies will be well reconstructed. This article proposes a cluster-memory augmented deep autoencoder via optimal transportation (OTCMA) for hyperspectral AD to solve this problem. A deep clustering method based on optimal transportation (OT) is proposed to enhance the feature consistency of samples within the same category and the feature discrimination of samples in different categories. The memory module stores the background’s consistent features, which are the cluster centers for each background category. We retrieve more consistent features from the memory module instead of reconstructing a sample utilizing its own encoded features. The network focuses more on consistent feature reconstruction by training the AE with a memory module. This effectively restricts the reconstruction ability of the AE and prevents reconstructing anomalies. Extensive experiments on the benchmark datasets demonstrate that our proposed OTCMA achieves state-of-the-art results. Besides, this article presents further discussions about the effectiveness of our proposed memory module and different criteria for better AD. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
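Optimal-transportation-based clustering, as invoked in the abstract above, computes a soft assignment of samples to cluster centers with prescribed marginals. A generic entropic (Sinkhorn) sketch with uniform marginals — the regularization strength and iteration count are illustrative assumptions, and this is not the authors' training procedure:

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.2, n_iter=500):
    """Entropic OT plan between uniform marginals over rows (samples)
    and columns (clusters). Returns an (n x k) soft-assignment matrix."""
    n, k = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(k, 1.0 / k)
    K = np.exp(-cost / (eps * cost.max()))   # kernel, scaled for stability
    u, v = np.ones(n), np.ones(k)
    for _ in range(n_iter):                  # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

Constraining the column marginals is what forces a balanced use of all cluster centers, unlike a plain nearest-center assignment.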
27. An Optimization Procedure for Robust Regression-Based Pansharpening.
- Author
-
Carpentiero, Marco, Vivone, Gemine, Restaino, Rocco, Addesso, Paolo, and Chanussot, Jocelyn
- Subjects
ROBUST optimization, MULTISPECTRAL imaging, IMAGE fusion, MAXIMUM likelihood statistics, REMOTE sensing, MULTISENSOR data fusion
- Abstract
Model-based approaches to pansharpening still constitute a class of widely employed methods, thanks to their straightforward applicability to many problems, sparing the user time-consuming training phases. The injection scheme based on an accurate estimation (exploiting regression) of the relationship between the details contained in the panchromatic (PAN) image and those required for the enhancement of the multispectral (MS) image represents the most up-to-date approach to this problem, being characterized by both theoretical and practical optimality. We elaborated on this scheme by designing a procedure for estimating the key parameters required for the optimal setting of such a regression-based approach. We tested this approach on several datasets acquired by the WorldView satellites, comparing the proposed approach with a benchmark consisting of some state-of-the-art pansharpening methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
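The regression-based detail-injection scheme this entry builds on can be sketched generically: per band, a gain is estimated by least-squares regression of the (upsampled) MS band against a low-pass version of the PAN image, and the PAN details are injected with that gain. This is a hedged illustration of the general scheme, not the authors' optimization procedure, and the low-pass filter here is an arbitrary stand-in:

```python
import numpy as np

def regression_injection(ms_up, pan, pan_low):
    """Fuse: fused_b = ms_b + g_b * (pan - pan_low), with the gain g_b the
    least-squares slope of MS band b regressed on the low-pass PAN."""
    details = pan - pan_low
    fused = np.empty_like(ms_up)
    x = pan_low.ravel() - pan_low.mean()
    for b in range(ms_up.shape[0]):
        y = ms_up[b].ravel() - ms_up[b].mean()
        g = (x @ y) / (x @ x)            # regression-based injection gain
        fused[b] = ms_up[b] + g * details
    return fused
```

If an MS band is exactly an affine function of the low-pass PAN, the estimated gain recovers the slope and the fused band becomes the same affine function of the full-resolution PAN.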
28. Multigraph-Based Low-Rank Tensor Approximation for Hyperspectral Image Restoration.
- Author
-
Liu, Na, Li, Wei, Tao, Ran, Du, Qian, and Chanussot, Jocelyn
- Subjects
IMAGE reconstruction, BURST noise, SOLAR radiation, HABITAT suitability index models, RANDOM noise theory, LOW-rank matrices, MULTIGRAPH
- Abstract
Low-rank-tensor-approximation (LRTA)-based hyperspectral imagery (HSI) restoration has drawn increasing attention. However, most methods construct a hidden low-rank tensor by utilizing the nonlocal self-similarity (NLSS) and global spectral correlation (GSC) inherited by HSIs. Although achieving state-of-the-art (SOTA) restoration performance, NLSS and GSC have limitations. NLSS is introduced from natural image denoising to remove spatially independent identically distributed (i.i.d.) Gaussian and impulse noise, while GSC, which is naturally possessed by HSIs, is adopted to maintain spectral integrity and remove spectrally i.i.d. degradations. Therefore, NLSS and GSC may not be successfully used for complex HSI restoration tasks, such as destriping, cloud removal, and recovery of atmospheric absorption bands. To solve this issue, borrowing the idea from manifold learning, the geometry information characterized by proximity relationships is integrated with the LRTA, named multigraph-based LRTA (MGLRTA). Different from most existing methods, the proposed MGLRTA directly models an HSI as a low-rank tensor and efficiently explores the extra proximity information on the defined graphs, which is not only inherited by the low-rank constraints but also naturally possessed by HSIs. A well-posed iterative algorithm is designed to solve the restoration problem. Experimental results on different datasets that cover several severe degradation scenarios demonstrate that the proposed MGLRTA outperforms SOTA HSI restoration methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Hyperspectral and LiDAR Data Classification Using Joint CNNs and Morphological Feature Learning.
- Author
-
Roy, Swalpa Kumar, Deria, Ankur, Hong, Danfeng, Ahmad, Muhammad, Plaza, Antonio, and Chanussot, Jocelyn
- Subjects
OPTICAL radar, LIDAR, FEATURE extraction, CONVOLUTIONAL neural networks, DEEP learning
- Abstract
Convolutional neural networks (CNNs) have been extensively utilized for hyperspectral image (HSI) and light detection and ranging (LiDAR) data classification. However, CNNs have not been much explored for joint HSI and LiDAR image classification. Therefore, this article proposes a joint feature learning (HSI and LiDAR) and fusion mechanism using CNN and spatial morphological blocks, which generates highly accurate land-cover maps. The CNN model comprises three Conv3D layers and is directly applied to the HSIs for extracting discriminative spectral–spatial feature representation. On the contrary, the spatial morphological block is able to capture the information relevant to the height or shape of the different land-cover regions from LiDAR data. The LiDAR features are extracted using morphological dilation and erosion layers that increase the robustness of the proposed model by considering elevation information as an additional feature. Finally, both the obtained features from CNNs and spatial morphological blocks are combined using an additive operation prior to the classification. Extensive experiments are conducted on widely used HSI and LiDAR datasets, i.e., the University of Houston (UH), Trento, and MUUFL Gulfport scenes. The reported results show that the proposed model significantly outperforms traditional methods and other state-of-the-art deep learning models. The source code for the proposed model will be made available publicly at https://github.com/AnkurDeria/HSI+LiDAR. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing.
- Author
-
Hong, Danfeng, Gao, Lianru, Yao, Jing, Yokoya, Naoto, Chanussot, Jocelyn, Heiden, Uta, and Zhang, Bing
- Subjects
DEEP learning ,LEARNING ability ,INFORMATION modeling ,CONVOLUTIONAL neural networks - Abstract
Over the past decades, enormous efforts have been made to improve the performance of linear or nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize various spectral variabilities (SVs) and extract physically meaningful endmembers still remains limited due to the poor ability in data fitting and reconstruction and the sensitivity to various SVs. Inspired by the powerful learning ability of deep learning (DL), we attempt to develop a general DL approach for HU, by fully considering the properties of endmembers extracted from the hyperspectral imagery, called endmember-guided unmixing network (EGU-Net). Beyond the alone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network, which learns an additional network from the pure or nearly pure endmembers to correct the weights of another unmixing network by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one) toward a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not only limited to pixelwise spectral unmixing but also applicable to spatial information modeling with convolutional operators for spatial–spectral unmixing. Experimental results conducted on three different datasets with the ground truth of abundance maps corresponding to each material demonstrate the effectiveness and superiority of the EGU-Net over state-of-the-art unmixing algorithms. The codes will be available from the website: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Hyperspectral Image Denoising Using Spectral-Spatial Transform-Based Sparse and Low-Rank Representations.
- Author
-
Zhao, Bin, Ulfarsson, Magnus O., Sveinsson, Johannes R., and Chanussot, Jocelyn
- Subjects
IMAGE denoising ,DISCRETE cosine transforms ,DISCRETE wavelet transforms ,SPARSE approximations ,LOW-rank matrices ,RANDOM noise theory ,QUANTITATIVE research ,CONVEX functions - Abstract
This article proposes a denoising method based on sparse spectral–spatial and low-rank representations (SSSLRR) using the 3-D orthogonal transform (3-DOT). SSSLRR can be effectively used to remove the Gaussian and mixed noise. SSSLRR uses 3-DOT to decompose noisy HSI to sparse transform coefficients. The 3-D discrete orthogonal wavelet transform (3-D DWT) is a representative 3-DOT suitable for denoising since it concentrates on the signal in few transform coefficients, and the 3-D discrete orthogonal cosine transform (3-D DCT) is another example. An SSSLRR using 3-D DWT will be called SSSLRR-DWT. SSSLRR-DWT is an iterative algorithm based on the alternating direction method of multipliers (ADMM) that uses sparse and nuclear norm penalties. We use an ablation study to show the effectiveness of the penalties we employ in the method. Both simulated and real hyperspectral datasets demonstrate that SSSLRR outperforms other comparative methods in quantitative and visual assessments to remove the Gaussian and mixed noise. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. NonRegSRNet: A Nonrigid Registration Hyperspectral Super-Resolution Network.
- Author
-
Zheng, Ke, Gao, Lianru, Hong, Danfeng, Zhang, Bing, and Chanussot, Jocelyn
- Subjects
CONVOLUTIONAL neural networks ,IMAGING systems ,SPATIAL resolution ,SPECTRAL imaging ,RECORDING & registration ,REMOTE-sensing images ,SPECTRAL sensitivity - Abstract
Due to the limitations of imaging systems, satellite hyperspectral imagery (HSI), which yields rich spectral information in many channels, often suffers from poor spatial resolution. HSI super-resolution (SR) refers to the fusion of high spatial resolution multispectral imagery (MSI) and low spatial resolution HSI to generate HSI that has both a high spatial and high spectral resolution. However, most existing SR methods assume that the two original images used are perfectly registered: in reality, nonrigid deformation areas can exist locally in the two images even if prior registration of the control points has been carried out. To address this problem, we propose a novel unsupervised spectral unmixing and image deformation correction network—NonRegSRNet—with multimodal and multitask learning that can be used for the joint registration of HSI and MSI and to produce SR imagery. More specifically, NonRegSRNet integrates the dense registration and SR tasks into a unified model that includes a triplet convolutional neural network. This allows these two tasks to complement each other so that better registration and SR results can be achieved. Furthermore, because the point spread function (PSF) and spectral response function (SRF) are often unavailable, two special convolutional layers are designed to adaptively learn the parameters of the PSF and SRF, which makes the proposed model more adaptable. Experimental results demonstrate that the proposed method has the ability to produce highly accurate and stable reconstructed images under complex nonrigid deformation conditions. (Code available at https://github.com/saber-zero/NonRegSRNet) [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Hyperspectral Images Denoising Based on Mixtures of Factor Analyzers
- Author
-
Zhao, Bin, primary, Sveinsson, Johannes R., additional, Ulfarsson, Magnus O., additional, and Chanussot, Jocelyn, additional
- Published
- 2020
- Full Text
- View/download PDF
34. PolSAR Scene Classification via Low-Rank Tensor-Based Multi-View Subspace Representation
- Author
-
Chen, Mengqian, primary, Ren, Bo, additional, Hou, Biao, additional, Chanussot, Jocelyn, additional, Wang, Shuang, additional, Zhang, Xiangrong, additional, and Xie, Wen, additional
- Published
- 2020
- Full Text
- View/download PDF
35. Unsupervised Hyperspectral Embedding by Learning a Deep Regression Network
- Author
-
Hong, Danfeng, primary, Yao, Jing, additional, Chanussot, Jocelyn, additional, and Zhu, Xiao Xiang, additional
- Published
- 2020
- Full Text
- View/download PDF
36. Local Spatial-Spectral Correlation Based Mixtures of Factor Analyzers for Hyperspectral Denoising
- Author
-
Zhao, Bin, primary, Sveinsson, Johannes R., additional, Ulfarsson, Magnus O., additional, and Chanussot, Jocelyn, additional
- Published
- 2020
- Full Text
- View/download PDF
37. Locally Linear Reconstruction for Spectral Enhancement Using Limited Pixel-to-Pixel Multispectral and Hyperspectral Data
- Author
-
Hong, Danfeng, primary, Yao, Jing, additional, Hang, Renlong, additional, and Chanussot, Jocelyn, additional
- Published
- 2020
- Full Text
- View/download PDF
38. CNN-Based Hyperspectral Pansharpening With Arbitrary Resolution.
- Author
-
He, Lin, Zhu, Jiawei, Li, Jun, Plaza, Antonio, Chanussot, Jocelyn, and Yu, Zhuliang
- Subjects
CONVOLUTIONAL neural networks ,SPATIAL resolution - Abstract
Traditional hyperspectral (HS) pansharpening aims at fusing an HS image with its panchromatic (PAN) counterpart to bring the spatial resolution of the HS image to that of the PAN image. However, many practical applications require arbitrary resolution HS (ARHS) pansharpening, where the HS and PAN images are integrated to generate a pansharpened HS image with arbitrary resolution (usually higher than that of the PAN image). This innovative task brings new challenges for the pansharpening technique, chiefly how to reconstruct HS images beyond the training scale and how to guarantee spectral fidelity at any spatial resolution. To tackle these challenges, we present a novel convolutional neural network (CNN)-based method for ARHS pansharpening called ARHS-CNN. It is based on a two-step relay optimization process associated with a multilevel enhancement subnetwork and a rescaling subnetwork. With this design, ARHS-CNN can pansharpen HS images to any spatial resolution using a single CNN model trained on a limited number of scales, while preserving spectral fidelity at those resolutions, a clear advantage over traditional pansharpening methods. Experimental results obtained on several datasets verify the excellent performance of our ARHS-CNN method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. A Bipartite Graph Partition-Based Coclustering Approach With Graph Nonnegative Matrix Factorization for Large Hyperspectral Images.
- Author
-
Huang, Nan, Xiao, Liang, Xu, Yang, and Chanussot, Jocelyn
- Subjects
MATRIX decomposition ,BIPARTITE graphs ,FACTORIZATION ,LAPLACIAN matrices ,NONNEGATIVE matrices ,UNDIRECTED graphs ,SPARSE matrices ,GEOMETRY - Abstract
Clustering large hyperspectral images (HSIs) is a very challenging problem because large HSIs have high dimensionality, large spectral variability, and large computational and memory consumption. Recently, sparse subspace clustering (SSC) has achieved remarkable success in HSI clustering. However, most SSC-based methods suffer from the following bottlenecks for large HSIs: 1) high computational consumption and memory space during the construction of the similarity matrix and decomposition of the graph Laplacian matrix and 2) failure to capture the relationships among dictionary atoms, sparse coefficients, and hyperspectral pixels. To address these challenges, we propose a novel algorithm that extends SSC to cocluster large HSIs, called bipartite graph partition with graph nonnegative matrix factorization (BGP-GNMF). Specifically, to fully explore the characteristics of the spectral and spatial contexts in HSIs, we propose a novel superpixel and pixel coclustering framework with bipartite graph partitioning in the joint sparse representation domain, where superpixel-based dictionary atoms are defined as disjoint vertex sets of the bipartite graph and the joint sparsity representation is mapped into the adjacency matrix of the undirected bipartite graph. To overcome the challenges of high computational consumption and large memory space for large HSIs, the bipartite graph partition with orthonormal constrained nonnegative matrix factorization is proposed to simultaneously cluster the structured dictionary atoms and hyperspectral pixels with an indicator matrix. Finally, to exploit the intrinsic geometry of HSIs, we incorporate manifold regularization into the bipartite graph partition to improve final clustering accuracy. The effectiveness and efficiency of the proposed method are verified on three classical HSIs, and the experimental results illustrate the superiority of the proposed method compared with other state-of-the-art HSI clustering methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Unsupervised and Unregistered Hyperspectral Image Super-Resolution With Mutual Dirichlet-Net.
- Author
-
Qu, Ying, Qi, Hairong, Kwan, Chiman, Yokoya, Naoto, and Chanussot, Jocelyn
- Subjects
HIGH resolution imaging ,MULTISPECTRAL imaging ,COMPUTER vision ,REMOTE sensing ,SPATIAL resolution ,IMAGE registration - Abstract
Hyperspectral images (HSIs) provide rich spectral information that has contributed to the successful performance improvement of numerous computer vision and remote sensing tasks. However, it can only be achieved at the expense of images’ spatial resolution. HSI super-resolution (HSI-SR), thus, addresses this problem by fusing low-resolution (LR) HSI with the multispectral image (MSI) carrying much higher spatial resolution (HR). Existing HSI-SR approaches require the LR HSI and HR MSI to be well registered, and the reconstruction accuracy of the HR HSI relies heavily on the registration accuracy of different modalities. In this article, we propose an unregistered and unsupervised mutual Dirichlet-Net ($u^{2}$-MDN) to exploit the uncharted problem domain of HSI-SR without the requirement of multimodality registration. The success of this endeavor would largely facilitate the deployment of HSI-SR since registration requirement is difficult to satisfy in real-world sensing devices. The novelty of this work is threefold. First, to stabilize the fusion procedure of two unregistered modalities, the network is designed to extract spatial information and spectral information of two modalities with different dimensions through a shared encoder–decoder structure. Second, the mutual information (MI) is further adopted to capture the nonlinear statistical dependencies between the representations from two modalities (carrying spatial information) and their raw inputs. By maximizing the MI, spatial correlations between different modalities can be well characterized to further reduce the spectral distortion. We assume that the representations follow a similar Dirichlet distribution for their inherent sum-to-one and nonnegative properties. Third, a collaborative $l_{2,1}$-norm is employed as the reconstruction error instead of the more common $l_{2}$-norm to better preserve the spectral information. Extensive experimental results demonstrate the superior performance of $u^{2}$-MDN as compared to the state of the art. [ABSTRACT FROM AUTHOR]
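The collaborative $l_{2,1}$-norm mentioned in the abstract takes the l2 norm of each column (per-pixel spectral error) and then sums over columns, so error concentrates in a few pixels instead of spreading across all of them. A small numpy illustration (the residual matrix is a made-up example):

```python
import numpy as np

def l21_norm(E):
    """Collaborative l2,1 norm: l2 norm of each column (per-pixel
    spectral error), then a sum over columns."""
    return np.sum(np.sqrt(np.sum(E ** 2, axis=0)))

# Made-up residual matrix: bands x pixels; all error sits in pixel 0
E = np.array([[3.0, 0.0],
              [4.0, 0.0]])
n21 = l21_norm(E)  # column norms 5.0 and 0.0 -> 5.0
```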
- Published
- 2022
- Full Text
- View/download PDF
41. Modality Translation in Remote Sensing Time Series.
- Author
-
Liu, Xun, Hong, Danfeng, Chanussot, Jocelyn, Zhao, Baojun, and Ghamisi, Pedram
- Subjects
REMOTE sensing ,MODAL logic ,SYNTHETIC aperture radar ,AMBIGUITY ,IMAGE analysis ,TRANSLATING & interpreting ,MARKOV random fields - Abstract
Modality translation, which aims to translate images from a source modality to a target one, has recently attracted growing interest in remote sensing. Compared to translation problems in multimedia applications, modality translation in remote sensing often suffers from inherent ambiguities, i.e., a single input image could correspond to multiple possible outputs, and the results may not be valid in subsequent image interpretation tasks such as classification and change detection. To address these issues, we attempt to utilize time-series data to resolve the ambiguities. We propose a novel multimodality image translation framework that exploits temporal information in two ways: 1) by introducing a guidance image from temporally neighboring images in the target modality, we employ a feature mask module and transfer semantic information from temporal images to the output without requiring any semantic labels, and 2) while incorporating multiple pairs of images in the time series, a temporal constraint is formulated during learning to guarantee the uniqueness of the prediction result. We also build a multimodal and multitemporal dataset containing synthetic aperture radar (SAR), visible, and short-wave infrared (SWIR) image time series of the same scene to encourage and promote research on modality translation in remote sensing. Experiments are conducted on the dataset for two cross-modality translation tasks (SAR to visible and visible to SWIR). Both qualitative and quantitative results demonstrate the effectiveness and superiority of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Sparsity-Enhanced Convolutional Decomposition: A Novel Tensor-Based Paradigm for Blind Hyperspectral Unmixing.
- Author
-
Yao, Jing, Hong, Danfeng, Xu, Lin, Meng, Deyu, Chanussot, Jocelyn, and Xu, Zongben
- Subjects
BLIND source separation ,SPECTRAL imaging ,SPACE-based radar ,MATHEMATICAL optimization ,MATHEMATICAL regularization - Abstract
Blind hyperspectral unmixing (HU) has long been recognized as a crucial component in analyzing the hyperspectral imagery (HSI) collected by airborne and spaceborne sensors. Because such a blind source separation scheme is highly ill-posed and spectral variability affects hyperspectral imaging, the ability to accurately and effectively unmix complex HSI remains limited. To this end, this article presents a novel blind HU model, called sparsity-enhanced convolutional decomposition (SeCoDe), which jointly captures spatial–spectral information of HSI in a tensor-based fashion. SeCoDe benefits from two perspectives. On the one hand, a convolutional operation locally models the spatial relation between the targeted pixel and its neighbors, which can be well explained by spectral bundles capable of addressing spectral variability effectively. On the other hand, it maintains physically continuous spectral components by decomposing the HSI along the spectral domain. With sparsity-enhanced regularization, an alternating optimization strategy based on the alternating direction method of multipliers (ADMM) is devised for efficient model inference. Extensive experiments conducted on three different data sets demonstrate the superiority of the proposed SeCoDe over previous state-of-the-art methods. We will also release the code at https://github.com/danfenghong/IEEE_TGRS_SeCoDe to encourage the reproduction of the given results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Using Low-Rank Representation of Abundance Maps and Nonnegative Tensor Factorization for Hyperspectral Nonlinear Unmixing.
- Author
-
Gao, Lianru, Wang, Zhicheng, Zhuang, Lina, Yu, Haoyang, Zhang, Bing, and Chanussot, Jocelyn
- Subjects
INVERSE problems ,MAPS ,TASK analysis - Abstract
Tensor-based methods have been widely studied to attack inverse problems in hyperspectral imaging since a hyperspectral image (HSI) cube can be naturally represented as a third-order tensor, which can perfectly retain the spatial information in the image. In this article, we extend the linear tensor method to the nonlinear tensor method and propose a nonlinear low-rank tensor unmixing algorithm to solve the generalized bilinear model (GBM). Specifically, the linear and nonlinear parts of the GBM can both be expressed as tensors. Furthermore, the low-rank structures of abundance maps and nonlinear interaction abundance maps are exploited by minimizing their nuclear norm, thus taking full advantage of the high spatial correlation in HSIs. Synthetic and real-data experiments show that the low rank of abundance maps and nonlinear interaction abundance maps exploited in our method can improve the performance of the nonlinear unmixing. A MATLAB demo of this work will be available at https://github.com/LinaZhuang for the sake of reproducibility. [ABSTRACT FROM AUTHOR]
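Minimizing the nuclear norm of (matricized) abundance maps, as described in the abstract above, is usually carried out with the singular value thresholding (SVT) operator. A sketch of the standard operator, not the authors' exact solver:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear
    norm, shrinking each singular value by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Rank-1 example: the single singular value 5*sqrt(5) shrinks by tau
X = np.outer([1.0, 2.0], [3.0, 4.0])
Y = svt(X, 1.0)
```

Shrinking the singular values suppresses weak components, which is how the high spatial correlation of abundance maps translates into a low-rank prior.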
- Published
- 2022
- Full Text
- View/download PDF
44. CyCU-Net: Cycle-Consistency Unmixing Network by Learning Cascaded Autoencoders.
- Author
-
Gao, Lianru, Han, Zhu, Hong, Danfeng, Zhang, Bing, and Chanussot, Jocelyn
- Subjects
CASCADE connections ,DEEP learning ,SELF-perception ,IMAGE reconstruction ,FEATURE extraction - Abstract
In recent years, deep learning (DL) has attracted increasing attention in hyperspectral unmixing (HU) applications due to its powerful learning and data fitting ability. The autoencoder (AE) framework, as an unmixing baseline network, achieves good performance in HU by automatically learning low-dimensional embeddings and reconstructing data. Nevertheless, the conventional AE-based architecture, which focuses more on the pixel-level reconstruction loss, tends to lose some significant detailed information of certain materials (e.g., material-related properties) in the reconstruction process. Therefore, inspired by the perception mechanism, we propose a cycle-consistency unmixing network, called CyCU-Net, by learning two cascaded AEs in an end-to-end fashion, to enhance the unmixing performance more effectively. CyCU-Net is capable of reducing the detailed and material-related information loss in the process of reconstruction by relaxing the original pixel-level reconstruction assumption to cycle consistency dominated by the cascaded AEs. More specifically, cycle consistency can be achieved by a newly proposed self-perception loss, which consists of two spectral reconstruction terms and one abundance reconstruction term. By taking advantage of the self-perception loss in the network, the high-level semantic information can be well preserved in the unmixing process. Moreover, we investigate the performance gain of CyCU-Net with extensive ablation studies. Experimental results on one synthetic and three real hyperspectral data sets demonstrate the effectiveness and competitiveness of the proposed CyCU-Net in comparison with several state-of-the-art unmixing algorithms. [ABSTRACT FROM AUTHOR]
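One hypothetical reading of the self-perception loss described above: two spectral reconstruction terms (one per cascaded autoencoder) plus one abundance reconstruction term. The weighting and exact form are assumptions for illustration, not the paper's definition:

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def self_perception_loss(x, x_hat1, x_hat2, a1, a2, w=1.0):
    """Assumed form: two spectral reconstruction terms (one per
    cascaded autoencoder) plus one abundance reconstruction term
    tying the two abundance estimates together. Illustrative only."""
    return mse(x, x_hat1) + mse(x, x_hat2) + w * mse(a1, a2)

x = np.array([0.3, 0.7, 0.5])   # a pixel spectrum (made up)
a = np.array([0.6, 0.4])        # an abundance vector (made up)
loss0 = self_perception_loss(x, x, x, a, a)  # perfect cycle -> 0.0
```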
- Published
- 2022
- Full Text
- View/download PDF
45. Individual Tree Segmentation Based on Mean Shift and Crown Shape Model for Temperate Forest.
- Author
-
Tusa, Eduardo, Monnet, Jean-Matthieu, Barre, Jean-Baptiste, Mura, Mauro Dalla, Dalponte, Michele, and Chanussot, Jocelyn
- Abstract
Light detection and ranging (LiDAR) provides high-resolution geometric information for monitoring forests at the individual tree crown (ITC) level. An important task for ITC delineation is segmentation, and previous studies showed that the adaptive 3-D mean shift (AMS3D) algorithm provides effective results. AMS3D for ITC segmentation has three components for the kernel profile: shape, weight, and size. In this letter, we present an AMS3D approach that adapts the kernel profile size through an ellipsoid crown shape model. The algorithm parameters are estimated from allometry equations derived from 22 forest plots at two study sites. After computing the mean shift (MS) vector, we initialize the parameters of the ellipsoid crown shape model to derive the kernel profile size, and we further test two crown shape models for adapting the size of the superellipsoid (SE) kernel profile. These schemes are compared with two other MS algorithms with and without kernel profile size adaptation. We select the best algorithm output per plot based on the maximum F1-score. The ellipsoid crown shape model with an SE kernel profile of $n = 1.5$ presents the highest recall and the best Jaccard index, especially for conifers. [ABSTRACT FROM AUTHOR]
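One MS iteration with an ellipsoidal kernel profile can be sketched as averaging the LiDAR points that fall inside an axis-aligned ellipsoid whose radii stand in for the crown shape parameters (crown radius, crown depth). A simplified numpy sketch, not the AMS3D implementation:

```python
import numpy as np

def mean_shift_step(points, center, radii):
    """One mean-shift iteration with a flat ellipsoidal kernel:
    average the points inside the axis-aligned ellipsoid whose
    radii stand in for the crown shape parameters."""
    d2 = np.sum(((points - center) / radii) ** 2, axis=1)
    inside = points[d2 <= 1.0]
    return center if len(inside) == 0 else inside.mean(axis=0)

pts = np.array([[0.0, 0.0, 0.0],
                [0.2, 0.0, 0.5],
                [5.0, 5.0, 5.0]])        # last point lies outside
c = mean_shift_step(pts, np.zeros(3), np.array([1.0, 1.0, 2.0]))
```

Iterating this step moves the kernel toward the local density maximum, typically a tree apex; the adaptive part of AMS3D lies in updating `radii` per iteration from the crown shape model.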
- Published
- 2021
- Full Text
- View/download PDF
46. Element-Wise Feature Relation Learning Network for Cross-Spectral Image Patch Matching.
- Author
-
Quan, Dou, Wang, Shuang, Huyan, Ning, Chanussot, Jocelyn, Wang, Ruojing, Liang, Xuefeng, Hou, Biao, and Jiao, Licheng
- Subjects
IMAGE registration ,CONVOLUTIONAL neural networks ,SPECTRAL imaging - Abstract
Recently, the majority of successful matching approaches have been based on convolutional neural networks, which focus on learning invariant and discriminative features for individual image patches from image content. However, the image patch matching task is essentially to predict the matching relationship of patch pairs, that is, matching (similar) or non-matching (dissimilar). We therefore consider feature relation (FR) learning to be more important than individual feature learning for the image patch matching problem. Motivated by this, we propose an element-wise FR learning network for image patch matching, which transforms the image patch matching task into an image relationship-based pattern classification problem and dramatically improves generalization performance on image matching. Meanwhile, the proposed element-wise learning methods encourage full interaction between feature information and can naturally learn FRs. Moreover, we propose to aggregate FRs from multiple levels, integrating multiscale FRs for more precise matching. Experimental results demonstrate that our proposal achieves superior performance on cross-spectral and single-spectral image patch matching, and good generalization on image patch retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. A Mutual Information-Based Self-Supervised Learning Model for PolSAR Land Cover Classification.
- Author
-
Ren, Bo, Zhao, Yangyang, Hou, Biao, Chanussot, Jocelyn, and Jiao, Licheng
- Subjects
LAND cover ,DEEP learning ,SYNTHETIC aperture radar ,PIXELS ,SYNTHETIC apertures ,IMPLICIT learning ,CLASSIFICATION ,ELECTRONIC data processing - Abstract
Recently, deep learning methods have attracted much attention in the field of polarimetric synthetic aperture radar (PolSAR) data interpretation and understanding. However, supervised methods require large-scale labeled data to achieve good performance, and obtaining enough labeled data is a time-consuming and laborious task. Aiming to obtain a good classification result with limited labeled data, we focus on learning discriminative high-level features between multiple representations, which we call mutual information. As PolSAR data have multi-modal representations, there should be strong similarity between the multi-modal features of the same pixel. In addition, each pixel has its own unique geocoding and scattering information, so every pixel differs greatly from other pixels in a specific representation space. Based on these observations, this article proposes a mutual information-based self-supervised learning (MI-SSL) model to learn an implicit representation from unlabeled data. In this article, the self-supervised learning idea is applied to PolSAR data processing for the first time. Furthermore, a reasonable pretext task suitable for PolSAR data is designed to extract mutual information for classification tasks. Compared with state-of-the-art classification methods, experimental results on four PolSAR data sets demonstrate that our MI-SSL model produces impressive overall accuracy with fewer labeled data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Deep Half-Siamese Networks for Hyperspectral Unmixing.
- Author
-
Han, Zhu, Hong, Danfeng, Gao, Lianru, Zhang, Bing, and Chanussot, Jocelyn
- Abstract
Over the past decades, numerous methods have been proposed to solve the linear or nonlinear mixing problems in hyperspectral unmixing (HU). The existence of spectral variabilities and nonlinearity limits, to a great extent, the unmixing ability of most traditional approaches, particularly in complex scenes. In recent years, deep learning (DL) has been garnering increasing attention in nonlinear HU owing to its powerful learning and fitting ability. However, DL-based methods tend to generate trivial unmixing results because they fail to consider physically meaningful endmember information. To this end, we propose a novel siamese network, called the deep half-siamese network (Deep HSNet), for HU that fully considers diverse endmember properties extracted by different endmember extraction algorithms. Moreover, the proposed Deep HSNet goes beyond the previous autoencoder-like architecture by adopting another subnetwork that learns the endmember information effectively to guide the unmixing process in a reasonable and accurate way. Experimental results on synthetic and real hyperspectral data sets validate the effectiveness and superiority of Deep HSNet over several state-of-the-art unmixing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. Recent Developments in Parallel and Distributed Computing for Remotely Sensed Big Data Processing.
- Author
-
Wu, Zebin, Sun, Jin, Zhang, Yi, Wei, Zhihui, and Chanussot, Jocelyn
- Subjects
REMOTE sensing ,ELECTRONIC data processing ,BIG data ,PARALLEL programming ,DISTRIBUTED computing ,GRID computing ,CLOUD computing - Abstract
This article gives a survey of state-of-the-art methods for processing remotely sensed big data and thoroughly investigates existing parallel implementations on diverse popular high-performance computing platforms. The pros/cons of these approaches are discussed in terms of capability, scalability, reliability, and ease of use. Among existing distributed computing platforms, cloud computing is currently the most promising solution to efficient and scalable processing of remotely sensed big data due to its advanced capabilities for high-performance and service-oriented computing. We further provide an in-depth analysis of state-of-the-art cloud implementations that seek for exploiting the parallelism of distributed processing of remotely sensed big data. In particular, we study a series of scheduling algorithms (GSs) aimed at distributing the computation load across multiple cloud computing resources in an optimized manner. We conduct a thorough review of different GSs and reveal the significance of employing scheduling strategies to fully exploit parallelism during the remotely sensed big data processing flow. We present a case study on large-scale remote sensing datasets to evaluate the parallel and distributed approaches and algorithms. Evaluation results demonstrate the advanced capabilities of cloud computing in processing remotely sensed big data and the improvements in computational efficiency obtained by employing scheduling strategies. [ABSTRACT FROM AUTHOR]
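As a toy example of the scheduling strategies discussed in the survey above, the classic longest-processing-time-first greedy heuristic spreads tasks across workers by always assigning the next-largest task to the least-loaded node. This is illustrative only and is not claimed to be one of the surveyed algorithms:

```python
import heapq

def lpt_schedule(task_costs, n_workers):
    """Longest-processing-time-first greedy scheduling: assign each
    task (largest first) to the currently least-loaded worker, a
    simple baseline for spreading processing tiles across cloud nodes."""
    heap = [(0.0, w) for w in range(n_workers)]  # (load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):
        load, w = heapq.heappop(heap)
        assignment[w].append(cost)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Made-up tile processing costs split across 2 workers
plan = lpt_schedule([7, 5, 4, 3, 1], 2)  # loads balance to 10 and 10
```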
- Published
- 2021
- Full Text
- View/download PDF
50. Detail Injection-Based Deep Convolutional Neural Networks for Pansharpening.
- Author
-
Deng, Liang-Jian, Vivone, Gemine, Jin, Cheng, and Chanussot, Jocelyn
- Subjects
CONVOLUTIONAL neural networks ,MACHINE learning ,SPATIAL resolution - Abstract
The fusion of high spatial resolution panchromatic (PAN) data with simultaneously acquired multispectral (MS) data of lower spatial resolution is a hot topic, often called pansharpening. In this article, we exploit the combination of machine learning techniques and fusion schemes introduced to address the pansharpening problem. In particular, deep convolutional neural networks (DCNNs) are proposed to solve this issue. They are first combined with the traditional component substitution and multiresolution analysis fusion schemes in order to estimate the nonlinear injection models that rule the combination of the upsampled low-resolution MS image with the details extracted under the two philosophies. Furthermore, inspired by these two approaches, we also develop another DCNN for pansharpening, fed by the direct difference between the PAN image and the upsampled low-resolution MS image. Extensive experiments conducted at both reduced and full resolution demonstrate that this latter convolutional neural network outperforms both the other detail injection-based proposals and several state-of-the-art pansharpening methods. [ABSTRACT FROM AUTHOR]
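The detail-injection scheme underlying these networks can be sketched in its classical linear form: extract spatial detail as PAN minus a low-pass version of PAN, then inject it into each upsampled MS band with a per-band gain. The DCNNs in the article learn this injection nonlinearly; the box filter and gains below are illustrative assumptions:

```python
import numpy as np

def detail_injection(ms_up, pan, gains):
    """Classical detail-injection fusion: detail = PAN - lowpass(PAN),
    then each upsampled MS band receives the detail scaled by its gain."""
    # crude low-pass: 3x3 box filter via shifted sums (reflect padding)
    p = np.pad(pan, 1, mode="reflect")
    low = sum(p[i:i + pan.shape[0], j:j + pan.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    detail = pan - low
    return np.stack([band + g * detail for band, g in zip(ms_up, gains)])

# Constant PAN carries no detail, so the MS bands pass through unchanged
ms_up = np.stack([np.full((4, 4), 0.5), np.full((4, 4), 0.2)])
fused = detail_injection(ms_up, np.ones((4, 4)), [0.8, 0.6])
```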
- Published
- 2021
- Full Text
- View/download PDF