86 results for "Mei, Shaohui"
Search Results
2. WHANet: Wavelet-Based Hybrid Asymmetric Network for Spectral Super-Resolution From RGB Inputs.
- Author
- Wang, Nan, Mei, Shaohui, Wang, Yi, Zhang, Yifan, and Zhan, Duo
- Published
- 2025
- Full Text
- View/download PDF
3. Learning hyperspectral images from RGB images via a coarse-to-fine CNN
- Author
- Mei, Shaohui, Geng, Yunhao, Hou, Junhui, and Du, Qian
- Published
- 2022
- Full Text
- View/download PDF
4. Video summarization via block sparse dictionary selection
- Author
- Ma, Mingyang, Mei, Shaohui, Wan, Shuai, Hou, Junhui, Wang, Zhiyong, and Feng, David Dagan
- Published
- 2020
- Full Text
- View/download PDF
5. Robust video summarization using collaborative representation of adjacent frames
- Author
- Ma, Mingyang, Mei, Shaohui, Wan, Shuai, Wang, Zhiyong, and Feng, David Dagan
- Published
- 2019
- Full Text
- View/download PDF
6. A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking
- Author
- Mei, Shaohui, Lian, Jiawei, Wang, Xiaofei, Su, Yuru, Ma, Mingyang, and Chau, Lap-Pui
- Subjects
- FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
Deep neural networks (DNNs) have found widespread application in interpreting remote sensing (RS) imagery. However, previous work has demonstrated that DNNs are vulnerable to different types of noise, particularly adversarial noise. Surprisingly, there has been a lack of comprehensive studies on the robustness of RS tasks, prompting us to undertake a thorough survey and benchmark of the robustness of image classification and object detection in RS. To the best of our knowledge, this study represents the first comprehensive examination of both natural robustness and adversarial robustness in RS tasks. Specifically, we have curated and made publicly available datasets that contain natural and adversarial noise. These datasets serve as valuable resources for evaluating the robustness of DNN-based models. To provide a comprehensive assessment of model robustness, we conducted meticulous experiments with numerous classifiers and detectors, encompassing a wide range of mainstream methods. Through rigorous evaluation, we have uncovered insightful and intriguing findings which shed light on the relationship between adversarial noise crafting and model training, yielding a deeper understanding of the susceptibility and limitations of various models and providing guidance for the development of more resilient and robust models.
- Published
- 2023
7. Video summarization via minimum sparse reconstruction
- Author
- Mei, Shaohui, Guan, Genliang, Wang, Zhiyong, Wan, Shuai, He, Mingyi, and Dagan Feng, David
- Published
- 2015
- Full Text
- View/download PDF
8. Contextual adversarial attack against aerial detection in the physical world
- Author
- Lian, Jiawei, Wang, Xiaofei, Su, Yuru, Ma, Mingyang, and Mei, Shaohui
- Subjects
- FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
Deep Neural Networks (DNNs) have been extensively utilized in aerial detection. However, DNNs' sensitivity and vulnerability to maliciously elaborated adversarial examples have progressively garnered attention. Recently, physical attacks have become a hot topic because they are more practical in the real world, posing great threats to some security-critical applications. In this paper, we make the first attempt to perform physical attacks in contextual form against aerial detection in the physical world. We propose an innovative contextual attack method for real scenarios, which achieves powerful attack performance and transfers well between various aerial object detectors, without smearing or blocking the objects of interest to hide them. Based on the finding, observed from the detectors' attention maps, that a target's contextual information plays an important role in aerial detection, we propose to make full use of the contextual area of the targets of interest to elaborate contextual perturbations for attacks in real scenarios that leave the targets uncovered. Extensive proportionally scaled experiments are conducted to evaluate the effectiveness of the proposed contextual attack method, demonstrating its superiority in both attack efficacy and physical practicality.
- Published
- 2023
9. EFP-Net: A Novel Building Change Detection Method Based on Efficient Feature Fusion and Foreground Perception.
- Author
- He, Renjie, Li, Wenyao, Mei, Shaohui, Dai, Yuchao, and He, Mingyi
- Subjects
- DEEP learning, REMOTE sensing, FUSION reactors, FEATURE extraction
- Abstract
Over the past decade, deep learning techniques have significantly advanced the field of building change detection in remote sensing imagery. However, existing deep learning-based approaches often encounter limitations in complex remote sensing scenarios, resulting in false detections and detail loss. This paper introduces EFP-Net, a novel building change detection approach that resolves these issues by utilizing effective feature fusion and foreground perception. EFP-Net comprises three main modules: the feature extraction module (FEM), the spatial–temporal correlation module (STCM), and the residual guidance module (RGM), which jointly enhance the fusion of bi-temporal and hierarchical features. Specifically, the STCM utilizes the temporal change duality prior and multi-scale perception to augment the 3D convolution modeling capability for bi-temporal feature variations. Additionally, the RGM employs the higher-layer prediction map to guide shallow-layer features, reducing the noise introduced during hierarchical feature fusion. Furthermore, a dynamic focal loss with foreground awareness is developed to mitigate the class imbalance problem. Extensive experiments on the widely adopted WHU-BCD, LEVIR-CD, and CDD datasets demonstrate that the proposed EFP-Net significantly improves accuracy in building change detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. Attention-Enhanced Generative Adversarial Network for Hyperspectral Imagery Spatial Super-Resolution.
- Author
- Wang, Baorui, Zhang, Yifan, Feng, Yan, Xie, Bobo, and Mei, Shaohui
- Subjects
- GENERATIVE adversarial networks, SPECTRAL imaging, MULTISPECTRAL imaging, SPATIAL resolution
- Abstract
Hyperspectral imagery (HSI) with high spectral resolution contributes to better material discrimination, while its spatial resolution, limited by sensor technology, prevents targets from being accurately distinguished and analyzed. Though generative adversarial network-based HSI super-resolution methods have achieved remarkable progress, the problems of treating vital and unessential features equally in feature expression and of training instability still exist. To address these issues, an attention-enhanced generative adversarial network (AEGAN) for HSI spatial super-resolution is proposed, which elaborately designs an enhanced spatial attention module (ESAM) and a refined spectral attention module (RSAM) in the attention-enhanced generator. Specifically, the devised ESAM, equipped with residual spatial attention blocks (RSABs), helps the generator focus on the spatial parts of HSI that are difficult to produce and recover, while RSAM refines spectral interdependencies and guarantees spectral consistency at the respective pixel positions. Additionally, a special U-Net discriminator with spectral normalization is included to pay more attention to the detailed information of HSI and to stabilize training. To produce more realistic and detailed super-resolved HSIs, an attention-enhanced generative loss is constructed to train and constrain the AEGAN model and to exploit the high correlation of spatial context and spectral information in HSI. Moreover, to better simulate complicated and authentic degradation, pseudo-real data are also generated with a high-order degradation model to train the overall network. Experiments on three benchmark HSI datasets illustrate the superior performance of the proposed AEGAN method in HSI spatial super-resolution over the compared methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Robust Signature-Based Hyperspectral Target Detection Using Dual Networks.
- Author
- Gao, Yanlong, Feng, Yan, Yu, Xumin, and Mei, Shaohui
- Abstract
The training of deep networks for hyperspectral target detection (HTD) is usually confronted with the problem of limited samples and in extreme cases, there might be only one target sample available. To address this challenge, we propose a novel approach with dual networks in this letter. First, a training set that is not fully accurate but representative enough regarding both targets and backgrounds is built through predetection and clustering. Then, two types of neural networks, that is, one generative adversarial network (GAN) and one convolutional neural network (CNN), which focus on spectral and spatial features of hyperspectral images (HSIs), are utilized for target detection. After that, the results of the two networks are fused, with the final detection result obtained. Experiments on real HSIs indicate that the proposed approach manages to perform HTD with only one target sample and is able to yield a more robust detection performance compared to other approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Semi-Supervised Person Detection in Aerial Images with Instance Segmentation and Maximum Mean Discrepancy Distance.
- Author
- Zhang, Xiangqing, Feng, Yan, Zhang, Shun, Wang, Nan, Mei, Shaohui, and He, Mingyi
- Subjects
- RESCUE work, IMAGE segmentation, LOW vision, DETECTORS, ENTROPY
- Abstract
Detecting sparse, small, lost persons that occupy only a few pixels in high-resolution aerial images was, is, and remains an important and difficult mission, in which accurate monitoring and intelligent co-rescue play a vital role for search and rescue (SaR) systems. However, many problems have not been effectively solved in existing remote-vision-based SaR systems, such as the shortage of person samples in SaR scenarios and the low tolerance of small objects to bounding-box deviations. To address these issues, a copy-paste mechanism (ISCP) with semi-supervised object detection (SSOD) via instance segmentation and maximum mean discrepancy (MMD) distance is proposed, which can provide highly robust, multi-task, and efficient aerial-based person detection for the prototype SaR system. Specifically, numerous pseudo-labels are obtained by accurately segmenting the instances of synthetic ISCP samples to obtain their boundaries. The SSOD trainer then uses soft weights to balance the prediction entropy of the loss function between ground truth and unreliable labels. Moreover, a novel MMD-based evaluation metric for anchor-based detectors is proposed to elegantly compute the IoU of bounding boxes. Extensive experiments and ablation studies on Heridal and optimized public datasets demonstrate that our approach is effective and achieves state-of-the-art person detection performance in aerial images. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
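The abstract above describes the MMD metric only at a high level. As a reference point, the maximum mean discrepancy between two sample sets, with a Gaussian kernel, can be sketched in plain Python; the kernel choice and `sigma` value here are illustrative assumptions, not taken from the paper:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel between two equal-length feature vectors."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2.0 * sigma ** 2))

def mmd_squared(xs, ys, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy between two sets."""
    kxx = sum(gaussian_kernel(a, b, sigma) for a in xs for b in xs) / (len(xs) ** 2)
    kyy = sum(gaussian_kernel(a, b, sigma) for a in ys for b in ys) / (len(ys) ** 2)
    kxy = sum(gaussian_kernel(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

# Identical sample sets have zero discrepancy; shifted sets do not.
same = mmd_squared([(0.0, 0.0), (1.0, 1.0)], [(0.0, 0.0), (1.0, 1.0)])
shifted = mmd_squared([(0.0, 0.0), (1.0, 1.0)], [(5.0, 5.0), (6.0, 6.0)])
```

How MMD is adapted into an IoU-style box metric in the paper is not recoverable from the abstract; this only shows the underlying discrepancy measure.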
13. A Spectral–Spatial Transformer Fusion Method for Hyperspectral Video Tracking.
- Author
- Wang, Ye, Liu, Yuheng, Ma, Mingyang, and Mei, Shaohui
- Subjects
- FEATURE extraction, VIDEOS
- Abstract
Hyperspectral videos (HSVs) can record richer detail cues than other videos, which is especially beneficial when abundant spectral information is available. Although traditional methods based on correlation filters (CFs), employed to explore spectral information locally, achieve promising results, their performance is limited by ignoring global information. In this paper, a joint spectral–spatial information method, named the spectral–spatial transformer-based feature fusion tracker (SSTFT), is proposed for hyperspectral video tracking, which is capable of utilizing spectral–spatial features and considering global interactions. Specifically, the feature extraction module employs two parallel branches to extract multi-level coarse-grained and fine-grained spectral–spatial features, which are fused with adaptive weights. The extracted features are further fused in the context fusion module based on a transformer with hyperspectral self-attention (HSA) and hyperspectral cross-attention (HCA), which are designed to capture self-context and cross-context feature interactions, respectively. Furthermore, an adaptive dynamic template updating strategy is used to update the template bounding box based on the prediction score. Extensive experimental results on benchmark hyperspectral video tracking datasets demonstrate that the proposed SSTFT outperforms state-of-the-art methods in both precision and speed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
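The HSA and HCA modules described above are transformer attention blocks whose exact design is not given in the abstract, but generic scaled dot-product cross-attention between two token streams (hypothetically named `spectral` and `spatial` here, with made-up sizes) can be sketched with NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product attention: queries from one feature stream,
    keys/values from the other, so each stream attends to its partner."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ kv_feats, weights

rng = np.random.default_rng(0)
spectral = rng.standard_normal((4, 8))       # e.g. 4 spectral tokens, dim 8
spatial = rng.standard_normal((6, 8))        # e.g. 6 spatial tokens, dim 8
fused, w = cross_attention(spectral, spatial)
```

Self-attention (the HSA case) is the same computation with `q_feats` and `kv_feats` taken from the same stream; projections, multiple heads, and residual connections are omitted.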
14. BiTSRS: A Bi-Decoder Transformer Segmentor for High-Spatial-Resolution Remote Sensing Images.
- Author
- Liu, Yuheng, Zhang, Yifan, Wang, Ye, and Mei, Shaohui
- Subjects
- CONVOLUTIONAL neural networks, IMAGE segmentation, REMOTE sensing
- Abstract
Semantic segmentation of high-spatial-resolution (HSR) remote sensing (RS) images has been extensively studied, and most existing methods are based on convolutional neural network (CNN) models. However, the CNN is regarded as having limited power in global representation modeling. In the past few years, transformer-based methods have attracted increasing attention and generated improved results in semantic segmentation of natural images, owing to their powerful ability in global information acquisition. Nevertheless, these transformer-based methods exhibit limited performance in semantic segmentation of RS images, probably because of a lack of comprehensive understanding in the feature decoding process. In this paper, a novel transformer-based model named the bi-decoder transformer segmentor for remote sensing (BiTSRS) is proposed, aiming to alleviate the problem of inflexible feature decoding through a bi-decoder design for semantic segmentation of RS images. In the proposed BiTSRS, the Swin transformer is adopted as the encoder to take both global and local representations into consideration, and a unique design module (ITM) is designed to deal with the limitation of input size for the Swin transformer. Furthermore, BiTSRS adopts a bi-decoder structure consisting of a Dilated-Uper decoder and a fully deformable convolutional network (FDCN) module embedded with focal loss, with which it is capable of decoding a wide range of features and local detail deformations. Both ablation and comparison experiments were conducted on three representative RS image datasets. The ablation analysis demonstrates the contributions of the specifically designed modules in the proposed BiTSRS to performance improvement. The comparison results illustrate that the proposed BiTSRS clearly outperforms several state-of-the-art semantic segmentation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. A Remote-Vision-Based Safety Helmet and Harness Monitoring System Based on Attribute Knowledge Modeling.
- Author
- Wu, Xiao, Li, Yupeng, Long, Jihui, Zhang, Shun, Wan, Shuai, and Mei, Shaohui
- Subjects
- SAFETY hats, BUILDING sites, IMAGE recognition (Computer vision), IMAGE processing, IMAGE encryption
- Abstract
Remote-vision-based image processing plays a vital role in safety helmet and harness monitoring on construction sites, where computer-vision-based automatic monitoring systems have attracted significant attention for practical applications. However, many problems have not been well solved in existing computer-vision-based systems, such as the shortage of safety helmet and harness monitoring datasets and the low accuracy of the detection algorithms. To address these issues, an attribute-knowledge-modeling-based safety helmet and harness monitoring system is constructed in this paper, which elegantly transforms safety state recognition into the recognition of images' semantic attributes. Specifically, a novel transformer-based end-to-end network with a self-attention mechanism is proposed to improve attribute recognition performance by making full use of the correlations between image features and semantic attributes, based on which a security recognition system is constructed by integrating detection, tracking, and attribute recognition. Experimental results for safety helmet and harness detection demonstrate that the accuracy and robustness of the proposed transformer-based attribute recognition algorithm clearly outperform those of state-of-the-art algorithms, and the presented system is robust to challenges such as pose variation, occlusion, and cluttered backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Graph Convolutional Dictionary Selection With L2,p Norm for Video Summarization.
- Author
- Ma, Mingyang, Mei, Shaohui, Wan, Shuai, Wang, Zhiyong, Hua, Xian-Sheng, and Feng, David Dagan
- Subjects
- VIDEO summarization, ORTHOGONAL matching pursuit, IMAGE reconstruction
- Abstract
Video summarization (VS) has become one of the most effective solutions for quickly understanding a large volume of video data. Dictionary selection with self-representation and sparse regularization has demonstrated its promise for VS by formulating the VS problem as a sparse selection task on video frames. However, existing dictionary selection models are generally designed only for data reconstruction, which results in the neglect of the inherent structured information among video frames. In addition, the sparsity commonly enforced by the $L_{2,1}$ norm is not strong enough, which causes redundancy among keyframes, i.e., similar keyframes are selected. Therefore, to address these two issues, in this paper we propose a general framework called graph convolutional dictionary selection with $L_{2,p}$ ($0< p\leq 1$) norm (GCDS$_{2,p}$) for both keyframe-selection and skimming-based summarization. First, we incorporate graph embedding into dictionary selection to generate the graph embedding dictionary, which takes the structured information depicted in videos into account. Second, we propose to use $L_{2,p}$ ($0< p\leq 1$) norm-constrained row sparsity, in which $p$ can be flexibly set for the two forms of video summarization: for keyframe selection, $0< p< 1$ can be utilized to select diverse and representative keyframes, and for skimming, $p=1$ can be utilized to select key shots. In addition, an efficient iterative algorithm is devised to optimize the proposed model, and its convergence is theoretically proved. Experimental results for both keyframe-selection and skimming-based summarization on four benchmark datasets demonstrate the effectiveness and superiority of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
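The row-sparse selection idea behind the $L_{2,p}$ norm in this entry can be illustrated with a toy computation: score candidate frames by the L2 norms of their coefficient rows and evaluate the mixed norm. This is a deliberate simplification that omits the paper's graph convolution and optimization; the coefficient matrix `W` is made up:

```python
def row_l2_norms(W):
    """L2 norm of each row of a coefficient matrix (list of lists)."""
    return [sum(v * v for v in row) ** 0.5 for row in W]

def l2p_norm(W, p):
    """Mixed L2,p norm (0 < p <= 1): sum of row L2 norms raised to p.
    Smaller p penalizes rows more aggressively, encouraging fewer
    selected frames; p = 1 recovers the usual L2,1 norm."""
    assert 0 < p <= 1
    return sum(n ** p for n in row_l2_norms(W))

def select_keyframes(W, k):
    """Pick the k candidate frames with the largest coefficient row norms."""
    norms = row_l2_norms(W)
    return sorted(range(len(W)), key=lambda i: -norms[i])[:k]

# Rows = reconstruction coefficients of each candidate frame; rows 0 and 3
# carry nearly all the reconstruction weight, so they become keyframes.
W = [[0.9, 0.8], [0.01, 0.0], [0.0, 0.02], [0.7, 0.6]]
keyframes = select_keyframes(W, 2)
```

In the actual model the matrix `W` is the solution of a regularized self-representation problem; here it is fixed by hand purely to show the scoring step.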
17. Spectral Variability Augmented Sparse Unmixing of Hyperspectral Images.
- Author
- Zhang, Ge, Mei, Shaohui, Xie, Bobo, Ma, Mingyang, Zhang, Yifan, Feng, Yan, and Du, Qian
- Subjects
- PRODUCT image, SPARSE matrices, PIXELS, MACHINE learning, MECHANICAL properties of condensed matter, SOFTWARE product line engineering
- Abstract
Spectral unmixing expresses the mixed pixels existing in hyperspectral images as the product of endmembers and their corresponding fractional abundances, and has been widely used in hyperspectral imagery analysis. However, the endmember spectra, even for pixels from the same material in an image, may vary due to the influence of lighting conditions and the inherent properties of the materials within different pixels. Though in situ spectral libraries have been used to accommodate such variability by representing each material with multiple in situ spectra, the performance improvement may be restricted by the limited number of endmembers per material. Therefore, in this article, spectral variability is directly extracted from an in situ endmember library and, for the first time, considered to be transferable among different endmembers. Such spectral variability is further used to augment sparse unmixing by synchronously performing endmember-based reconstruction and spectral-variability-augmented reconstruction in the sparse unmixing model. By imposing sparse and smoothness regularization over the abundances and variability coefficients, respectively, a convex optimization-based spectral variability augmented sparse unmixing (SVASU) method is finally proposed, and its convergence is also analyzed. Experiments conducted on synthetic and real-world datasets demonstrate that the proposed SVASU method not only significantly improves the unmixing performance of conventional spectral library-based unmixing but also outperforms several state-of-the-art sparse unmixing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
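The linear mixing model underlying this entry (pixel = endmembers × abundances) can be sketched with a projected-gradient abundance estimator. This is a generic stand-in for illustration, not the SVASU solver, and the two endmember spectra are made up:

```python
import numpy as np

def unmix(pixel, endmembers, steps=2000, lr=0.01):
    """Estimate abundances a with a >= 0 and sum(a) = 1 by projected
    gradient descent on ||pixel - E @ a||^2. E's columns are endmember
    spectra; clip-and-renormalize approximates the simplex projection."""
    E = np.asarray(endmembers, dtype=float)
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(steps):
        grad = E.T @ (E @ a - pixel)
        a = np.clip(a - lr * grad, 0.0, None)
        s = a.sum()
        a = a / s if s > 0 else np.full_like(a, 1.0 / len(a))
    return a

# Two toy endmember spectra over 3 bands; the pixel is a 70/30 mixture.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.2]])
pixel = E @ np.array([0.7, 0.3])
a = unmix(pixel, E)
```

SVASU additionally models a variability term and uses sparse/smoothness regularizers; this sketch only recovers abundances under the plain linear model.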
18. Similarity Based Block Sparse Subset Selection for Video Summarization.
- Author
- Ma, Mingyang, Mei, Shaohui, Wan, Shuai, Wang, Zhiyong, Feng, David Dagan, and Bennamoun, Mohammed
- Subjects
- VIDEO summarization, SUBSET selection, VECTOR spaces, BLOCK codes, DEEP learning, VIDEOS, SPARSE matrices
- Abstract
Video summarization (VS) is generally formulated as a subset selection problem in which a set of representative keyframes or key segments is selected from an entire video frame set. Though many sparse subset selection-based VS algorithms have been proposed in the past decade, most of them adopt a linear sparse formulation in the explicit feature vector space of video frames and do not consider the local or global relationships among frames. In this paper, we first extend conventional sparse subset selection for VS to kernel block sparse subset selection (KBS3), to exploit the advantages of kernel sparse coding and introduce a local inter-frame relationship through the packing of frame blocks. Going a step further, we propose a similarity-based block sparse subset selection (SB2S3) model by applying a specially designed transformation matrix to the KBS3 model, thereby introducing a form of global inter-frame relationship through similarity. Finally, a greedy pursuit-based algorithm is devised to optimize the proposed NP-hard model. The proposed SB2S3 has the following advantages: 1) through the similarity between each frame and every other frame, the global relationship among all frames is considered; 2) through block sparse coding, the local relationship of adjacent frames is further considered; and 3) it has wider applicability, since similarity can be derived from features, but not vice versa. We believe the effect of modeling such global and local relationships among frames is similar to that of modeling long-range and short-range dependencies among frames in deep learning-based methods. Experimental results on three benchmark datasets demonstrate that the proposed approach is superior not only to other sparse subset selection-based VS methods but also to most unsupervised deep learning-based VS methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Keyframe Extraction From Laparoscopic Videos via Diverse and Weighted Dictionary Selection.
- Author
- Ma, Mingyang, Mei, Shaohui, Wan, Shuai, Wang, Zhiyong, Ge, Zongyuan, Lam, Vincent, and Feng, Dagan
- Subjects
- SURGICAL robots, LAPAROSCOPIC surgery, VIDEOS, CONVOLUTIONAL neural networks, MINIMALLY invasive procedures, MEDICAL personnel
- Abstract
Laparoscopic videos have been increasingly acquired for various purposes, including surgical training and quality assurance, due to the wide adoption of laparoscopy in minimally invasive surgery. However, it is very time consuming to view a large number of laparoscopic videos, which prevents the value of laparoscopic video archives from being well exploited. In this paper, a dictionary selection-based video summarization method is proposed to effectively extract keyframes for fast access to laparoscopic videos. First, unlike the low-level features used in most existing summarization methods, deep features are extracted from a convolutional neural network to effectively represent video frames. Second, based on such a deep representation, laparoscopic video summarization is formulated as a diverse and weighted dictionary selection model, in which image quality is taken into account to select high-quality keyframes and a diversity regularization term is added to reduce redundancy among the selected keyframes. Finally, an iterative algorithm with a rapid convergence rate is designed for model optimization, and the convergence of the proposed method is analyzed. Experimental results on a recently released laparoscopic dataset demonstrate the clear superiority of the proposed method. The proposed method can facilitate access to key information in surgeries, the training of junior clinicians, explanations to patients, and the archiving of case files. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
20. Rotation-Invariant Feature Learning in VHR Optical Remote Sensing Images via Nested Siamese Structure With Double Center Loss.
- Author
- Jiang, Ruoqiao, Mei, Shaohui, Ma, Mingyang, and Zhang, Shun
- Subjects
- OPTICAL remote sensing, CONVOLUTIONAL neural networks, MACHINE learning, ARTIFICIAL neural networks
- Abstract
Rotation-invariant features are of great importance for object detection and image classification in very-high-resolution (VHR) optical remote sensing images. Though the multibranch convolutional neural network (mCNN) has been demonstrated to be very effective for rotation-invariant feature learning, how to effectively train such a network is still an open problem. In this article, a nested Siamese structure (NSS) is proposed for training the mCNN to learn effective rotation-invariant features, which consists of an inner Siamese structure to enhance intraclass cohesion and an outer Siamese structure to enlarge the interclass margin. Moreover, a double center loss (DCL) function, in which training samples from the same class are mapped closer to each other while those from different classes are mapped far away from each other, is proposed to train the proposed NSS even with a small number of training samples. Experimental results on three benchmark datasets demonstrate that the proposed NSS trained with DCL is very effective at handling rotation variation when learning features for image classification and outperforms several state-of-the-art rotation-invariant feature learning algorithms even when only a small number of training samples is available. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
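The double center loss described above combines an intraclass pull toward class centers with an interclass push between centers. The abstract does not give the formulation, so the hinge margin and equal weighting below are assumptions; this is only a sketch of the idea:

```python
import numpy as np

def double_center_loss(features, labels, centers, margin=4.0):
    """Sketch of a double center loss: pull each sample toward its class
    center, and push distinct class centers at least `margin` apart.
    The exact terms and weights in the paper may differ."""
    feats = np.asarray(features, dtype=float)
    # Intraclass term: mean squared distance of samples to their centers.
    intra = sum(np.sum((f - centers[y]) ** 2) for f, y in zip(feats, labels))
    intra /= len(feats)
    # Interclass term: hinge penalty on center pairs closer than the margin.
    inter = 0.0
    keys = sorted(centers)
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            gap = margin - np.linalg.norm(centers[a] - centers[b])
            inter += max(gap, 0.0) ** 2
    return intra + inter

centers = {0: np.array([0.0, 0.0]), 1: np.array([5.0, 0.0])}
tight = double_center_loss([[0.1, 0.0], [5.1, 0.0]], [0, 1], centers)  # small
loose = double_center_loss([[2.0, 2.0], [3.0, 1.0]], [0, 1], centers)  # large
```

In training, the centers themselves would also be updated; here they are fixed to keep the loss computation visible.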
21. Patch Based Video Summarization With Block Sparse Representation.
- Author
- Mei, Shaohui, Ma, Mingyang, Wan, Shuai, Hou, Junhui, Wang, Zhiyong, and Feng, David Dagan
- Abstract
In recent years, sparse representation has been successfully utilized for video summarization (VS). However, most of the sparse representation based VS methods characterize each video frame with global features. As a result, some important local details could be neglected by global features, which may compromise the performance of summarization. In this paper, we propose to partition each video frame into a number of patches and characterize each patch with global features. Instead of concatenating the features of each patch and utilizing conventional sparse representation, we formulate the VS problem with such video frame representation as block sparse representation by considering each video frame as a block containing a number of patches. By taking the reconstruction constraint into account, we devise a simultaneous version of block-based OMP (Orthogonal Matching Pursuit) algorithm, namely SBOMP, to solve the proposed model. The proposed model is further extended to a neighborhood based model which considers temporally adjacent frames as a super block. This is one of the first sparse representation based VS methods taking both spatial and temporal contexts into account with blocks. Experimental results on two widely used VS datasets have demonstrated that our proposed methods present clear superiority over existing sparse representation based VS methods and are highly comparable to some deep learning ones requiring supervision information for extra model training. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
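A greatly simplified, non-block version of the OMP-style selection in the entry above can be sketched as simultaneous OMP over a self-representation: greedily choose the frames whose feature columns best reconstruct all frames at once. The patch/block structure of SBOMP is omitted, and the toy frame matrix is made up:

```python
import numpy as np

def somp_select(X, k):
    """Simultaneous OMP for subset selection: greedily pick k columns
    of X that jointly reconstruct every column of X (self-representation)."""
    X = np.asarray(X, dtype=float)
    residual = X.copy()
    chosen = []
    for _ in range(k):
        # Score each atom by its total correlation with the current residual.
        scores = np.abs(X.T @ residual).sum(axis=1)
        scores[chosen] = -np.inf           # never re-pick a chosen atom
        chosen.append(int(np.argmax(scores)))
        # Re-fit all frames on the chosen atoms and update the residual.
        coef, *_ = np.linalg.lstsq(X[:, chosen], X, rcond=None)
        residual = X - X[:, chosen] @ coef
    return chosen

# Columns are frame features: frames 0 and 3 are two distinct "scenes";
# the remaining frames are near-duplicates of one or the other.
F = np.array([[1.0, 0.99, 0.98, 0.0, 0.01],
              [0.0, 0.01, 0.02, 1.0, 0.99]])
keyframes = somp_select(F, 2)
```

SBOMP replaces single columns with blocks of patch features and adds the reconstruction constraint from the paper; the greedy pursuit skeleton is the same.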
22. A Novel Compressive Sensing-Based Multichannel HRWS SAR Imaging Technique for Moving Targets.
- Author
- Li, Shaojie, Mei, Shaohui, Zhang, Shuangxi, Wan, Shuai, and Jia, Tao
- Abstract
When a high-resolution wide-swath (HRWS) multichannel synthetic aperture radar (MC-SAR) system is used for ocean observation, a vast amount of redundant data is generated, significantly limiting its applications. Though compressive sensing (CS)-based methods have been applied to traditional single-channel and dual-channel SAR imaging systems, they are no longer applicable to MC-SAR due to the channel error that arises when the space-time equivalent sampling technique is used for azimuth signal reconstruction. By analyzing this periodic channel error, i.e., the frequency-dependent phase mismatch (FD-PM), in this article a novel dictionary is constructed for CS-based HRWS MC-SAR imaging after an improved range cell migration correction method is applied. As a result, a novel CS imaging mode is proposed for ocean moving targets based on the sparsity of the target scattering centers, by which the amount of data in MC-SAR can be reduced by sampling below the Nyquist rate while the swath width is further increased. Experimental results show that the proposed method clearly eliminates the azimuth defocus and blur caused by the low sampling rate and FD-PM, and reduces the amount of data to about one-third of that required when sampling at the Nyquist rate. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
23. Hyperspectral Image Classification via Sparse Representation With Incremental Dictionaries.
- Author
- Yang, Shujun, Hou, Junhui, Jia, Yuheng, Mei, Shaohui, and Du, Qian
- Abstract
In this letter, we propose a new sparse representation (SR)-based method for hyperspectral image (HSI) classification, namely SR with incremental dictionaries (SRID). Our SRID boosts existing SR-based HSI classification methods significantly, especially for tasks with extremely limited training samples. Specifically, by exploiting unlabeled pixels with spatial information and multiple-feature-based SR classifiers, we select some of them and add them to the dictionaries in an iterative manner, such that the representation abilities of the dictionaries are progressively augmented and the resulting representations become more discriminative. In addition, to deal with large-scale datasets, we use a certainty sampling strategy to control the sizes of the dictionaries, so that the computational complexity remains well balanced. Experiments on two benchmark datasets show that our proposed method achieves higher classification accuracy than state-of-the-art methods, i.e., the overall classification accuracy improves by more than 4%. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
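The sparse-representation classification principle behind SRID can be illustrated with a minimal residual-based classifier: represent a pixel with each class's dictionary and assign the class with the smallest reconstruction residual. Plain least squares stands in for the sparse solver, and the two-class dictionaries here are made up; the paper's incremental dictionary growth is not shown:

```python
import numpy as np

def classify_by_residual(pixel, class_dicts):
    """Assign the class whose dictionary reconstructs the pixel best.
    Each dictionary's columns are training spectra for that class."""
    best, best_err = None, np.inf
    for label, D in class_dicts.items():
        coef, *_ = np.linalg.lstsq(D, pixel, rcond=None)
        err = np.linalg.norm(pixel - D @ coef)
        if err < best_err:
            best, best_err = label, err
    return best

# Toy 3-band dictionaries for two hypothetical classes.
dicts = {
    "water": np.array([[0.1, 0.2], [0.9, 0.8], [0.8, 0.9]]),
    "soil":  np.array([[0.9, 0.8], [0.2, 0.1], [0.1, 0.2]]),
}
label = classify_by_residual(np.array([0.15, 0.85, 0.85]), dicts)
```

SRID's contribution is to iteratively append confident unlabeled pixels to these dictionaries so the per-class representations improve; the decision rule above is the unchanging final step.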
24. Spatial and Spectral Joint Super-Resolution Using Convolutional Neural Network.
- Author
- Mei, Shaohui, Jiang, Ruituo, Li, Xu, and Du, Qian
- Subjects
- *
CONVOLUTIONAL neural networks , *SPECTRAL imaging - Abstract
Many applications, such as mineralogy and surveillance, have benefited from images with both high spatial and spectral resolution. However, it is difficult to acquire such images due to the limitations of sensor technologies. Recently, super-resolution (SR) techniques have been proposed to improve the spatial or spectral resolution of images, e.g., improving the spatial resolution of hyperspectral images (HSIs) or improving the spectral resolution of color images (reconstructing HSIs from RGB inputs). However, no previous research has attempted to improve both spatial and spectral resolution together. In this article, these two types of resolution are jointly improved using a convolutional neural network (CNN). Specifically, two kinds of CNN-based SR are conducted: a simultaneous spatial–spectral joint SR (SimSSJSR) that conducts SR in the spectral and spatial domains simultaneously, and a separated spatial–spectral joint SR (SepSSJSR) that considers spectral and spatial SR sequentially. In the proposed SimSSJSR, a full 3-D CNN is constructed to learn an end-to-end mapping between a low spatial-resolution multispectral image (LR-MSI) and the corresponding high spatial-resolution HSI (HR-HSI). In the proposed SepSSJSR, a spatial SR network and a spectral SR network are designed separately, and thus two different frameworks are proposed for SepSSJSR, namely SepSSJSR1 and SepSSJSR2, according to the order in which spatial SR and spectral SR are applied. Furthermore, the least absolute deviation, instead of the mean square error (MSE) used in traditional SR networks, is chosen as the loss function for the proposed networks. Experimental results over simulated images from different sensors demonstrated that the proposed SepSSJSR1 is most effective in improving the spatial and spectral resolution of MSIs sequentially by conducting spatial SR prior to spectral SR.
In addition, validation on real Landsat images also indicates that the proposed SSJSR techniques can make full use of available MSIs for high-resolution-based analysis or applications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
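The abstract's choice of least absolute deviation over MSE has a simple intuition: the constant predictor minimizing L1 loss is the median of the targets, while the L2 (MSE) minimizer is the mean, so a single outlier pixel distorts L2 training far more than L1. A small numeric check (the target values and grid search are purely illustrative):

```python
import numpy as np

# The L1-optimal constant is the median; the L2-optimal constant is the mean.
targets = np.array([0.9, 1.0, 1.1, 1.0, 8.0])   # four inliers and one outlier "pixel"
grid = np.linspace(0.0, 10.0, 10001)
l1_best = grid[np.argmin([np.abs(targets - g).sum() for g in grid])]
l2_best = grid[np.argmin([((targets - g) ** 2).sum() for g in grid])]
print(round(float(l1_best), 2), round(float(l2_best), 2))   # median vs mean
```

The L1 minimizer stays near the inliers while the L2 minimizer is dragged toward the outlier, which is the robustness argument for using least absolute deviation in an SR loss.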
25. Vision-Based Freezing of Gait Detection With Anatomic Directed Graph Representation.
- Author
-
Hu, Kun, Wang, Zhiyong, Mei, Shaohui, Ehgoetz Martens, Kaylena A., Yao, Tingting, Lewis, Simon J. G., and Feng, David Dagan
- Subjects
DIRECTED graphs ,REPRESENTATIONS of graphs ,VIDEO production & direction ,FREEZING ,PARKINSON'S disease - Abstract
Parkinson's disease significantly impacts the quality of life of millions of people around the world. While freezing of gait (FoG) is one of the most common symptoms of the disease, assessing FoG is time-consuming and subjective, even for well-trained experts. Therefore, it is highly desirable to devise computer-aided FoG detection methods for objective and time-efficient assessment. In this paper, in line with the gold standard of FoG clinical assessment, which requires video or direct observation, we propose one of the first vision-based methods for automatic FoG detection. To better characterize FoG patterns, instead of learning an overall representation of a video, we propose a novel graph convolutional neural network architecture and represent each video as a directed graph whose vertices are FoG-related candidate regions. A weakly supervised learning strategy and a weighted adjacency matrix estimation layer are proposed to eliminate the resource-expensive data annotation required for fully supervised learning. As a result, the interference of visual information irrelevant to FoG, such as the gait motion of supporting staff involved in clinical assessments, is reduced by identifying the vertices contributing to FoG events, improving FoG detection performance. To further improve performance, the global context of a clinical video is also considered and several fusion strategies with graph predictions are investigated. Experimental results on more than 100 videos collected from 45 patients during clinical assessments demonstrated the promising performance of our proposed method, with an AUC of 0.887. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Local Spectral Similarity Preserving Regularized Robust Sparse Hyperspectral Unmixing.
- Author
-
Li, Jiaojiao, Li, Yunsong, Song, Rui, Mei, Shaohui, and Du, Qian
- Subjects
PIXELS ,CANNING & preserving ,RESEMBLANCE (Philosophy) ,PROCESS optimization - Abstract
Spatial context has been demonstrated to be effective in constraining sparse unmixing (SU) of hyperspectral images. However, existing algorithms employ simple spatial information without maintaining spectral fidelity. Considering that adjacent pixels share not only endmembers with the same variations but also similar fractional abundances, in this paper a local spectral similarity preserving (LSSP) constraint is proposed to preserve spectral similarity within a local area during robust sparse unmixing (RSU). Specifically, four LSSP constraints are constructed using different norms of the pixel-level difference to weight the abundance-level difference in a local area. Moreover, a convex optimization algorithm is proposed to solve the proposed LSSP-constrained RSU (LSSP-RSU). Experimental results on both synthetic and real hyperspectral data demonstrate that the developed algorithms yield better signal-to-reconstruction error (SRE) values. In particular, when the $l_{2}$ norm of the pixel-level difference is used to weight the $l_{1}$ norm of the abundance-level difference, the proposed LSSP-RSU algorithm achieves superior unmixing performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
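The sparse unmixing core that LSSP regularizes can be sketched, before any spatial constraint is added, as an $l_{1}$-penalized least-squares problem solved by iterative soft thresholding (ISTA). The library `E`, the regularization weight, and the two-endmember toy pixel below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def ista_unmix(E, y, lam=0.01, n_iter=500):
    """Sparse abundance estimation: min 0.5*||y - E a||^2 + lam*||a||_1 via ISTA."""
    step = 1.0 / np.linalg.norm(E, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        a = a - step * (E.T @ (E @ a - y))                      # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0)  # soft threshold (l1 prox)
        a = np.maximum(a, 0.0)                                  # abundances are nonnegative
    return a

# Toy demo: a pixel mixed from 2 of the 6 endmembers in a small library.
rng = np.random.default_rng(2)
E = np.abs(rng.standard_normal((100, 6)))   # nonnegative "spectral library"
a_true = np.array([0.6, 0.0, 0.4, 0.0, 0.0, 0.0])
a_hat = ista_unmix(E, E @ a_true)
print(np.round(a_hat, 2))
```

An LSSP-style constraint would add a term coupling the abundance vectors of neighboring pixels, weighted by how spectrally similar those pixels are.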
27. Unsupervised Spatial–Spectral Feature Learning by 3D Convolutional Autoencoder for Hyperspectral Classification.
- Author
-
Mei, Shaohui, Ji, Jingyu, Geng, Yunhao, Zhang, Zhi, Li, Xu, and Du, Qian
- Subjects
- *
ARTIFICIAL neural networks , *THREE-dimensional display systems , *DATA mining , *FEATURE extraction , *CLASSIFICATION algorithms , *LEARNING strategies , *REMOTE sensing - Abstract
Feature learning technologies using convolutional neural networks (CNNs) have shown superior performance over traditional hand-crafted feature extraction algorithms. However, a large number of labeled samples is generally required for a CNN to learn effective features for a classification task, and such samples are hard to obtain for hyperspectral remote sensing images. Therefore, in this paper, an unsupervised spatial–spectral feature learning strategy is proposed for hyperspectral images using a three-dimensional (3D) convolutional autoencoder (3D-CAE). The proposed 3D-CAE consists of 3D or elementwise operations only, such as 3D convolution, 3D pooling, and 3D batch normalization, to maximally explore spatial–spectral structure information for feature extraction. A companion 3D convolutional decoder network is also designed to reconstruct the input patterns of the proposed 3D-CAE, by which all the parameters involved in the network can be trained without labeled training samples. As a result, effective features are learned in an unsupervised mode in which label information of pixels is not required. Experimental results on several benchmark hyperspectral data sets have demonstrated that our proposed 3D-CAE is very effective in extracting spatial–spectral features and outperforms not only traditional unsupervised feature extraction algorithms but also many supervised feature extraction algorithms in classification applications. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Pseudolabel Guided Kernel Learning for Hyperspectral Image Classification.
- Author
-
Yang, Shujun, Hou, Junhui, Jia, Yuheng, Mei, Shaohui, and Du, Qian
- Abstract
In this paper, we propose a new framework for hyperspectral image classification, namely pseudolabel guided kernel learning (PLKL). The proposed framework is capable of fully utilizing unlabeled samples, making it very effective in handling tasks with extremely limited training samples. Specifically, with multiple initial kernels and labeled samples, we first employ support vector machine (SVM) classifiers to predict pseudolabels independently for each unlabeled sample, and consistency voting is applied to the resulting pseudolabels to select a few unlabeled samples and add them to the training set. Then, we refine the kernels to improve their discriminability using the augmented training set and a typical kernel learning method. These phases are repeated until the results stabilize. Furthermore, we enhance PLKL in terms of both computation and memory efficiency by using a bagging-like strategy, improving its practicality for large-scale data sets. In addition, the proposed framework is quite flexible and general; that is, other advanced kernel-based methods can be incorporated to continuously improve performance. Experimental results show that the proposed framework achieves much higher classification accuracy than state-of-the-art methods. In particular, the classification accuracy improves by more than 5% with very few training samples. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. Simultaneous Spatial and Spectral Low-Rank Representation of Hyperspectral Images for Classification.
- Author
-
Mei, Shaohui, Hou, Junhui, Chen, Jie, Chau, Lap-Pui, and Du, Qian
- Subjects
- *
IMAGE analysis , *DETECTORS , *REMOTE sensing , *ATMOSPHERIC aerosols , *SIGNAL denoising - Abstract
Arising from various environmental and atmospheric conditions and sensor interference, spectral variations are inevitable during hyperspectral remote sensing, and they significantly degrade subsequent hyperspectral image analysis. In this paper, we propose simultaneous spatial and spectral low-rank representation (S3LRR), which can effectively suppress within-class spectral variations for classification purposes. The S3LRR recovers an intrinsic component with the same dimension as the original image, in which both spatial and spectral low-rank priors are adopted simultaneously to regularize the intrinsic component and complement each other, together with robust modeling of spectral variations. Compared with existing methods that explore only the spectral low-rank prior, the novel spatial low-rank prior (i.e., a band-wise low-rank prior) takes the spatial structure information of hyperspectral images into account, which has been demonstrated to be very useful. Technically, we formulate S3LRR as a constrained convex optimization problem and solve it using the efficient inexact augmented Lagrangian multiplier method. The resulting intrinsic component is less affected by within-class spectral variations and more discriminative, offering higher classification accuracy. Comprehensive experiments on benchmark data sets demonstrate that the proposed S3LRR improves classification accuracy significantly and outperforms state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Learning Sensor-Specific Spatial-Spectral Features of Hyperspectral Images via Convolutional Neural Networks.
- Author
-
Mei, Shaohui, Ji, Jingyu, Hou, Junhui, Li, Xu, and Du, Qian
- Subjects
- *
ARTIFICIAL neural networks , *HYPERSPECTRAL imaging systems , *IMAGING systems , *GEOGRAPHIC spatial analysis , *SPECTRUM analysis - Abstract
Convolutional neural network (CNN) is well known for its capability of feature learning and has made revolutionary achievements in many applications, such as scene recognition and target detection. In this paper, its capability of feature learning in hyperspectral images is explored by constructing a five-layer CNN for classification (C-CNN). The proposed C-CNN is constructed by including recent advances in deep learning area, such as batch normalization, dropout, and parametric rectified linear unit (PReLU) activation function. In addition, both spatial context and spectral information are elegantly integrated into the C-CNN such that spatial-spectral features are learned for hyperspectral images. A companion feature-learning CNN (FL-CNN) is constructed by extracting fully connected feature layers in this C-CNN. Both supervised and unsupervised modes are designed for the proposed FL-CNN to learn sensor-specific spatial-spectral features. Extensive experimental results on four benchmark data sets from two well-known hyperspectral sensors, namely airborne visible/infrared imaging spectrometer (AVIRIS) and reflective optics system imaging spectrometer (ROSIS) sensors, demonstrate that our proposed C-CNN outperforms the state-of-the-art CNN-based classification methods, and its corresponding FL-CNN is very effective to extract sensor-specific spatial-spectral features for hyperspectral applications under both supervised and unsupervised modes. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
31. Hyperspectral Image Classification by Exploring Low-Rank Property in Spectral or/and Spatial Domain.
- Author
-
Mei, Shaohui, Bi, Qianqian, Ji, Jingyu, Hou, Junhui, and Du, Qian
- Abstract
Within-class spectral variation, which is caused by varied imaging conditions, such as changes in illumination, environmental, atmospheric, and temporal conditions, significantly degrades the performance of hyperspectral image classification. Recent studies have shown that such spectral variation can be alleviated by exploring the low-rank property in the spectral domain, especially based on the low-rank subspace assumption. In this paper, the low-rank subspace assumption is approached by exploring the low-rank property in the local spectral domain. In addition, the low-rank property in the spatial domain is also explored to alleviate spectral variation. As a result, two novel spectral-spatial low-rank (SSLR) strategies are designed to alleviate spectral variation by exploring the low-rank property in both the spectral and spatial domains. Experimental results on two benchmark hyperspectral data sets demonstrate that exploring the low-rank property in the local spectral domain helps to alleviate spectral variation and noticeably improves classification performance on all tested data, while exploring the low-rank property in the spatial domain is more effective for images containing large homogeneous areas. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
32. Integrating spectral and spatial information into deep convolutional Neural Networks for hyperspectral classification.
- Author
-
Mei, Shaohui, Ji, Jingyu, Bi, Qianqian, Hou, Junhui, Du, Qian, and Li, Wei
- Published
- 2016
- Full Text
- View/download PDF
33. How to fully explore the low-rank property for data recovery of hyperspectral images.
- Author
-
Mei, Shaohui, Bi, Qianqian, Ji, Jingyu, Hou, Junhui, and Du, Qian
- Published
- 2016
- Full Text
- View/download PDF
34. Spectral Variation Alleviation by Low-Rank Matrix Approximation for Hyperspectral Image Analysis.
- Author
-
Mei, Shaohui, Bi, Qianqian, Ji, Jingyu, Hou, Junhui, and Du, Qian
- Abstract
Spectral variation is profound in remotely sensed images due to variable imaging conditions. The wide presence of such spectral variation degrades the performance of hyperspectral analysis, such as classification and spectral unmixing. In this letter, $l_{1}$-based low-rank matrix approximation is proposed to alleviate spectral variation for hyperspectral image analysis. Specifically, hyperspectral image data are decomposed into a low-rank matrix and a sparse matrix, under the assumption that intrinsic spectral features are represented by the low-rank matrix and spectral variation is accommodated by the sparse matrix. As a result, the performance of image data analysis can be improved by working on the low-rank matrix. Experiments on benchmark hyperspectral data sets demonstrate that the performance of classification and spectral unmixing can be clearly improved by the proposed approach. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
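The low-rank plus sparse split described above is the robust-PCA model, commonly solved by an inexact augmented Lagrange multiplier (ALM) scheme that alternates singular value thresholding for the low-rank part with entrywise soft thresholding for the sparse part. The sketch below is a generic version of that scheme, not the letter's exact algorithm; the parameter choices (`lam`, the initial `mu`, the growth factor `rho`) follow common defaults and are assumptions.

```python
import numpy as np

def rpca(D, lam=None, rho=1.1, n_iter=200):
    """Split D ≈ L + S (L low-rank, S sparse) via a basic inexact ALM scheme."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)   # common choice of starting penalty weight
    Y = np.zeros_like(D)               # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # L-step: singular value thresholding of (D - S + Y/mu).
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft thresholding of (D - L + Y/mu).
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (D - L - S)          # dual update
        mu *= rho                      # gradually tighten the penalty
    return L, S

# Toy demo: a rank-1 "clean" data matrix corrupted by a few large outliers.
rng = np.random.default_rng(3)
L_true = np.outer(rng.standard_normal(60), rng.standard_normal(20))
S_true = np.zeros((60, 20))
S_true.flat[rng.choice(60 * 20, size=30, replace=False)] = 10.0
L_hat, S_hat = rpca(L_true + S_true)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))  # relative error
```

In the letter's application, the rows of `D` would be pixel spectra, with analysis (classification, unmixing) then performed on the recovered `L`.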
35. Spatial preprocessing for spectral endmember extraction by local linear embedding.
- Author
-
Mei, Shaohui, Du, Qian, He, Mingyi, and Wang, Yihang
- Published
- 2015
- Full Text
- View/download PDF
36. Hyperspectral and multispectral image fusion using CNMF with minimum endmember simplex volume and abundance sparsity constraints.
- Author
-
Zhang, Yifan, Wang, Yakun, Liu, Yang, Zhang, Chuwen, He, Mingyi, and Mei, Shaohui
- Published
- 2015
- Full Text
- View/download PDF
37. Onboard image selection for small-satellite based remote sensing mission.
- Author
-
Wang, Yihang, Mei, Shaohui, Wan, Shuai, Wang, Yi, and Li, Yi
- Published
- 2015
- Full Text
- View/download PDF
38. Resource restricted on-line Video Summarization with Minimum Sparse Reconstruction.
- Author
-
Mei, Shaohui, Wang, Zhiyong, He, Mingyi, and Feng, Dagan
- Published
- 2015
- Full Text
- View/download PDF
39. Equivalent-Sparse Unmixing Through Spatial and Spectral Constrained Endmember Selection From an Image-Derived Spectral Library.
- Author
-
Mei, Shaohui, Du, Qian, and He, Mingyi
- Abstract
Spectral variation, which is inevitably present in hyperspectral data due to nonuniformity and inconsistency of illumination, may result in considerable difficulty in spectral unmixing. In this paper, a field endmember library is constructed to accommodate spectral variation by representing each endmember class with a batch of image-derived spectra. To perform unmixing with such a field endmember library, a novel spatial and spectral endmember selection (SSES) algorithm is designed to search for a spatial and spectral constrained endmember subset per pixel for abundance estimation (AE). The net effect is equivalent to sparse unmixing, since only a few endmembers in the large library have nonzero abundances. Thus, the resulting algorithm is called spatial and spectral constrained sparse unmixing (SSCSU). Experimental results using both synthetic and real hyperspectral images demonstrate that the proposed SSCSU algorithm not only improves the performance of traditional AE algorithms by considering spectral variation, but also outperforms existing sparse unmixing approaches. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
40. Iterative keyframe selection by orthogonal subspace projection.
- Author
-
Mei, Shaohui, Guan, Genliang, Wang, Zhiyong, He, Mingyi, Wan, Shuai, and Feng, David Dagan
- Published
- 2014
- Full Text
- View/download PDF
41. Data structure based discriminant score for feature selection.
- Author
-
Wei, Feng, He, Mingyi, Mei, Shaohui, and Lei, Tao
- Published
- 2014
- Full Text
- View/download PDF
42. L2,0 constrained sparse dictionary selection for video summarization.
- Author
-
Mei, Shaohui, Guan, Genliang, Wang, Zhiyong, He, Mingyi, Hua, Xian-Sheng, and Dagan Feng, David
- Published
- 2014
- Full Text
- View/download PDF
43. Improving hyperspectral image classification accuracy using Iterative SVM with spatial-spectral information.
- Author
-
He, Mingyi, Imran, Farid Muhammad, Belkacem, Baassou, and Mei, Shaohui
- Published
- 2013
- Full Text
- View/download PDF
44. An accurate SVM-based classification approach for hyperspectral image classification.
- Author
-
Baassou, Belkacem, He, Mingyi, and Mei, Shaohui
- Published
- 2013
- Full Text
- View/download PDF
45. Neighborhood preserving Nonnegative Matrix Factorization for spectral mixture analysis.
- Author
-
Mei, Shaohui, He, Mingyi, Shen, Zhiming, and Belkacem, Baassou
- Abstract
Nonnegative Matrix Factorization (NMF) has been successfully employed to address the mixed-pixel problem of hyperspectral remote sensing images. However, minimizing the representation error with NMF is not sufficient for spectral mixture analysis (SMA), since the unmixing results of NMF are not unique. Therefore, in this paper, a neighborhood preserving regularization, which preserves the local structure of the hyperspectral data on a low-dimensional manifold, is proposed to constrain NMF toward a unique solution in SMA. As a result, a Neighborhood Preserving constrained NMF (NP-NMF) algorithm is proposed for SMA of highly mixed hyperspectral data. Finally, experimental results on AVIRIS data demonstrate the effectiveness of our proposed NP-NMF algorithm for SMA applications. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
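The NMF machinery underlying this record, stripped of the paper's neighborhood preserving regularizer, reduces to the classic Lee–Seung multiplicative updates, which keep an endmember-like factor `W` and an abundance-like factor `H` nonnegative throughout. A minimal sketch on synthetic mixed pixels (the toy endmembers and Dirichlet abundances are assumptions):

```python
import numpy as np

def nmf(X, r, n_iter=500, eps=1e-9):
    """Plain NMF via Lee-Seung multiplicative updates: X ≈ W @ H, all factors >= 0."""
    rng = np.random.default_rng(4)
    W = rng.random((X.shape[0], r)) + eps
    H = rng.random((r, X.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # abundance-like factor update
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # endmember-like factor update
    return W, H

# Toy demo: 3 nonnegative "endmember" spectra mixed into 200 pixels.
rng = np.random.default_rng(5)
E = rng.random((40, 3))                      # spectra: bands x endmembers
A = rng.dirichlet(np.ones(3), size=200).T    # abundances: columns sum to 1
X = E @ A
W, H = nmf(X, r=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel_err)
```

Because many factorizations fit `X` equally well, the reconstruction converges while `W` and `H` individually remain non-unique, which is exactly the ambiguity the paper's neighborhood preserving constraint is designed to resolve.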
46. Unsupervised hyperspectral image classification algorithm by integrating spatial-spectral information.
- Author
-
Baassou, Belkacem, He, Mingyi, Mei, Shaohui, and Zhang, Yifan
- Abstract
An algorithm integrating spatial-spectral information for hyperspectral image classification is proposed, which uses spatial pixel association (SPA) by exploiting spectral information divergence (SID), and spectral clustering to reduce the number of regions and improve classification accuracy. Moreover, a class boundary correction method is developed to minimize misclassified pixels at the edge of each class and to solve the problem of merged classes. Experiments with hyperspectral data demonstrate the effectiveness and advantages of the proposed framework over traditional methods in terms of classification accuracy. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
47. Alleviating blocking artifacts via curve-fitting.
- Author
-
Xu, Chenyu, He, Renjie, Mei, Shaohui, and He, Mingyi
- Abstract
Image enhancement in the discrete cosine transform (DCT) domain, which has been widely utilized, is seriously affected by blocking artifacts, so effective methods for reducing such artifacts have received much attention. A novel method for reducing these artifacts is proposed in this paper; its novelty lies in a curve-fitting algorithm that alleviates blocking artifacts, with a least-squares technique adopted to estimate the curve-fitting coefficients. As a result, the blocking artifacts can be significantly alleviated. In addition, the color and contrast of the resulting image are kept the same as in the adjusted image (with blocking artifacts). Experiments on different images demonstrate that the proposed algorithm provides better results than the popular CES algorithm [1] (enhancement of color images by scaling the DCT coefficients) and has lower computational complexity. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
48. Video Summarization with Global and Local Features.
- Author
-
Guan, Genliang, Wang, Zhiyong, Yu, Kaimin, Mei, Shaohui, He, Mingyi, and Feng, Dagan
- Abstract
Video summarization has been crucial for effective and efficient access to video content due to the ever-increasing amount of video data. Most existing keyframe-based summarization approaches represent individual frames with global features, which neglects the local details of visual content. Considering that a video generally depicts a story with a number of scenes in different temporal order and shooting angles, we formulate scene summarization as identifying a set of frames that best covers the key point pool constructed from the scene. Our approach is therefore a two-step process: identifying scenes and selecting representative content for each scene. Global features are utilized to identify scenes through clustering, due to the visual similarity among video frames of the same scene, and local features are used to summarize each scene. We develop a key point based keyframe selection method to identify the representative content of a scene, which allows users to flexibly tune the summarization length. Our preliminary results indicate that the proposed approach is very promising and potentially robust to clustering-based scene identification. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
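The first step of the two-step recipe above (cluster frames into scenes with global features, then pick representative content) can be sketched with a tiny k-means keyframe selector: one frame per cluster, chosen as the frame nearest its cluster center. The farthest-point initialization, the 8-dimensional synthetic frame features, and the scene offsets are illustrative assumptions; the paper additionally summarizes each identified scene with local key points, which this sketch omits.

```python
import numpy as np

def select_keyframes(features, k, n_iter=20):
    """Cluster frame features with k-means (farthest-point init); return one index per cluster."""
    # Farthest-point initialization: deterministic and spreads centers across scenes.
    centers = [features[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[int(d.argmax())])
    centers = np.stack(centers)
    for _ in range(n_iter):
        # Assign each frame to its nearest center, then move centers to cluster means.
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    # Keyframe of a cluster = the member frame closest to its center.
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    keyframes = []
    for c in range(k):
        members = np.flatnonzero(labels == c)
        keyframes.append(int(members[d[members, c].argmin()]))
    return sorted(keyframes)

# Toy demo: 60 "frames" from 3 visually distinct scenes (20 frames each).
rng = np.random.default_rng(6)
features = np.concatenate([off + 0.1 * rng.standard_normal((20, 8))
                           for off in (0.0, 5.0, 10.0)])
keyframes = select_keyframes(features, k=3)
print(keyframes)
```

Increasing `k` here plays the role of the tunable summarization length mentioned in the abstract: more clusters yield more keyframes.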
49. Unsupervised Spectral Mixture Analysis with Hopfield Neural Network for hyperspectral images.
- Author
-
Mei, Shaohui, He, Mingyi, Wang, Zhiyong, and Feng, Dagan
- Abstract
Spectral Mixture Analysis (SMA) has been widely utilized to address the mixed-pixel problem in the quantitative analysis of hyperspectral remote sensing images. Recently, Nonnegative Matrix Factorization (NMF) has been successfully utilized to simultaneously perform endmember extraction (EE) and abundance estimation (AE). In this paper, we formulate the solution of NMF by performing EE and AE iteratively. Based on our previous Hopfield Neural Network (HNN) based AE algorithm, an HNN is also constructed for EE to solve the multiplicative updating problem of NMF for SMA. As a result, SMA is conducted in an unsupervised manner, and our algorithm is able to extract virtual endmembers without assuming the presence of spectrally pure constituents in hyperspectral scenes. We further extend this strategy to solve constrained NMF (cNMF) models for SMA, where extra constraints are imposed to better model the mixed-pixel problem. Experimental results on both synthetic and real hyperspectral images demonstrate the effectiveness of our proposed HNN-based unsupervised SMA algorithms. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
50. Unmixing approach for hyperspectral data resolution enhancement using high resolution multispectral image.
- Author
-
Bendoumi, Mohamed Amine, He, Mingyi, Mei, Shaohui, and Zhang, Yifan
- Published
- 2012
- Full Text
- View/download PDF