11 results for "Zhang, Weidong"
Search Results
2. Color correction and adaptive contrast enhancement for underwater image enhancement
- Authors: Zhang, Weidong; Pan, Xipeng; Xie, Xiwang; Li, Lingqiao; Wang, Zimin; Han, Chu
- Published: 2021
3. Multi-feature embedded learning SVM for cloud detection in remote sensing images.
- Authors: Zhang, Weidong; Jin, Songlin; Zhou, Ling; Xie, Xiwang; Wang, Fangyuan; Jiang, Lili; Zheng, Ying; Qu, Peixin; Li, Guohou; Pan, Xipeng
- Subjects: Remote sensing; Support vector machines; Image transmission; Optical remote sensing
- Abstract
To improve remote sensing image transmission efficiency, we propose a cloud detection method based on a multi-feature embedded learning support vector machine (SVM), addressing the channel bandwidth consumed by cloud-covered regions. Specifically, we first consider the imaging and physical properties of clouds to construct a multi-feature space of cloud and non-cloud samples, comprising five informative features: grayscale, geometry, contrast, correlation, and angular second moment. Subsequently, we cast cloud detection in remote sensing images (CDRSI) as a binary classification problem and construct a classifier using the multi-feature embedded learning SVM. Finally, CDRSI is carried out through image-block operations. Additionally, we build a large-scale real-world Remote Sensing Image Cloud Detection Benchmark (RSICDB) of 1520 images, in which 790 non-cloud images and 430 cloud images serve as training data, and the remaining 150 images, with their corresponding 150 mask results, serve as test samples. Experimental results demonstrate that the proposed method detects clouds with higher accuracy and robustness than the compared methods. [ABSTRACT FROM AUTHOR]
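The texture part of the five-feature construction above can be sketched with a small grey-level co-occurrence matrix (GLCM) computation. This is a hypothetical illustration, not the authors' code: the `glcm_features` helper and its exact definitions are assumptions, and in the paper the resulting block-wise vectors would feed the multi-feature embedded learning SVM.

```python
import numpy as np

def glcm_features(block, levels=8):
    """Per-block features for cloud vs. non-cloud classification: mean
    grayscale plus GLCM contrast, correlation, and angular second moment
    for the horizontal pixel offset (0, 1)."""
    q = np.floor(block.astype(float) / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                      # count co-occurring level pairs
    p = glcm / glcm.sum()                    # normalize to joint probabilities
    ii, jj = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    contrast = ((ii - jj) ** 2 * p).sum()
    asm = (p ** 2).sum()                     # angular second moment
    mu_i, mu_j = (ii * p).sum(), (jj * p).sum()
    sd_i = np.sqrt(((ii - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((jj - mu_j) ** 2 * p).sum())
    corr = ((ii - mu_i) * (jj - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    return np.array([block.mean(), contrast, corr, asm])
```

A bright, homogeneous block (cloud-like) yields a high mean and a high angular second moment, while a dark, textured block yields the opposite; that separation is what the SVM stage would exploit.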
- Published: 2022
4. Spatial-spectral feature extraction of hyperspectral images for wheat seed identification.
- Authors: Jin, Songlin; Zhang, Weidong; Yang, Pengfei; Zheng, Ying; An, Jinliang; Zhang, Ziyang; Qu, Peixin; Pan, Xipeng
- Subjects: Wheat seeds; Principal components analysis; Support vector machines; Image recognition (computer vision)
- Abstract
Hyperspectral imaging can identify wheat seeds quickly, accurately, and nondestructively. However, most existing hyperspectral classification methods use only spectral information and ignore spatial information, resulting in unsatisfactory classification performance. To address these issues, we propose a spatial-spectral feature extraction method for seed identification. Specifically, we first fuse the spatial and spectral features and then perform denoising. Subsequently, principal component analysis is employed to extract features from the spatial-spectral data. Finally, a support vector machine is trained and optimized on the extracted features. Experimental results demonstrate that our method attains the highest classification accuracy among the compared state-of-the-art methods, reaching 97.64% on the whole dataset. In addition, our method achieves better classification performance on small-sample data. [ABSTRACT FROM AUTHOR]
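The PCA stage of the pipeline above can be sketched as follows. The spectra are synthetic, and the nearest-centroid classifier at the end is a hedged stand-in for the paper's SVM stage; it is only meant to show the project-then-classify flow.

```python
import numpy as np

def pca_project(X, k):
    """Center the samples and project onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# Two synthetic "seed varieties": 200-band spectra with distinct reflectance.
variety_a = rng.normal(0.2, 0.05, size=(50, 200))
variety_b = rng.normal(0.8, 0.05, size=(50, 200))
X = np.vstack([variety_a, variety_b])
y = np.array([0] * 50 + [1] * 50)

Z = pca_project(X, 10)                       # 200 bands -> 10 components
# Nearest-centroid stand-in for the SVM classifier.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c0, axis=1) > np.linalg.norm(Z - c1, axis=1)).astype(int)
accuracy = (pred == y).mean()
```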
- Published: 2022
5. A context hierarchical integrated network for medical image segmentation.
- Authors: Xie, Xiwang; Pan, Xipeng; Zhang, Weidong; An, Jubai
- Subjects: Diagnostic imaging; Image segmentation; Object recognition (computer vision); Feature extraction
- Abstract
Owing to low contrast, high similarity, and differing scales among diverse tissues in 2D medical images, it is challenging to accurately segment the regions of interest. To address these issues, we propose a context hierarchical integrated network, named CHI-Net, for medical image segmentation, which accurately segments salient regions from medical images in a purely task-driven manner. The proposed CHI-Net consists of two key modules: a dense dilated convolution (DDC) module and a stacked residual pooling (SRP) module. Specifically, the DDC module captures substantial complementary features hierarchically by combining four cascaded branches of hybrid dilated convolutions, which aids the extraction of features at diverse scales. The SRP module integrates encoder detail features across multiple effective fields of view, aiming to generate more discriminative features. Extensive experimental results on five benchmark datasets with different objects illustrate that the proposed CHI-Net is superior to state-of-the-art object segmentation methods. [ABSTRACT FROM AUTHOR]
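The hybrid dilated convolution idea behind the DDC module can be sketched in plain NumPy. This is a single-channel toy: the dilation rates are chosen for illustration only, and for brevity the branches are summed in parallel rather than cascaded as in the paper.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2D cross-correlation (deep-learning-style convolution)
    with a dilated kernel, for a single-channel image."""
    kh, kw = kernel.shape
    ph, pw = (kh - 1) * dilation // 2, (kw - 1) * dilation // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(kh):
        for j in range(kw):
            # Each kernel tap samples the input `dilation` pixels apart.
            out += kernel[i, j] * xp[i * dilation:i * dilation + h,
                                     j * dilation:j * dilation + w]
    return out

def ddc_branch_sum(x, kernel, rates=(1, 2, 3, 5)):
    """Mix context from several receptive-field sizes by combining
    branches at several dilation rates (parallel sum for brevity)."""
    return sum(dilated_conv2d(x, kernel, r) for r in rates)
```

Increasing the dilation rate enlarges the receptive field without adding parameters, which is why stacking rates such as 1, 2, 3, and 5 captures both fine and broad context.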
- Published: 2022
6. Underwater image enhancement via complementary advantage fusion of global and local contrast.
- Authors: Zhou, Ling; Liu, Qingmin; Fan, Yuqian; Song, Xiaoyu; Pan, Xipeng; Zhang, Weidong
- Subjects: Image intensifiers; Image enhancement (imaging systems); Marketing channels; Histograms; Pixels
- Abstract
Underwater images suffer a range of quality degradations because most wavelengths of light are attenuated, to varying degrees, by absorption while traveling underwater. To cope with these issues, we present a complementary advantage fusion method of global and local contrast, named CAFM. CAFM first compensates for the attenuation of each channel by exploiting the pixel intensity and distribution of each channel, yielding an image without color distortion. Subsequently, we employ a double-histogram optimization method to enhance the global contrast of the preprocessed image, while the mean and variance features of image blocks are utilized to enhance its local contrast. To obtain a high-quality underwater image, we then fuse the two enhanced images, combining their benefits through the complementary advantages between different feature maps. Extensive evaluation on three datasets demonstrates that our CAFM surpasses the compared methods, and images enhanced by CAFM exhibit authentic colors, heightened contrast, and rich texture details. [ABSTRACT FROM AUTHOR]
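The first two stages above can be sketched as below. This is a simplified, hypothetical version: channels are compensated toward the strongest channel's mean, and a plain min-max stretch stands in for the paper's double-histogram optimization.

```python
import numpy as np

def compensate_channels(img):
    """Scale each color channel so its mean matches the strongest channel's
    mean, offsetting wavelength-dependent underwater attenuation."""
    means = img.mean(axis=(0, 1))
    return np.clip(img * (means.max() / (means + 1e-12)), 0, 255)

def global_stretch(img):
    """Min-max contrast stretch to the full [0, 255] range (a stand-in for
    the double-histogram optimization used in the paper)."""
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / (hi - lo + 1e-12)
```

In the paper these global results are further fused with a block-wise (mean/variance) local enhancement; the fusion step is omitted here.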
- Published: 2023
7. CSKNN: Cost-sensitive K-Nearest Neighbor using hyperspectral imaging for identification of wheat varieties.
- Authors: Jin, Songlin; Zhang, Fengfan; Zheng, Ying; Zhou, Ling; Zuo, Xiangang; Zhang, Ziyang; Zhao, Wenyi; Zhang, Weidong; Pan, Xipeng
- Subjects: K-nearest neighbor classification; Fisher discriminant analysis
- Abstract
Hyperspectral imaging techniques are widely used for rapid, efficient, and non-destructive identification of wheat varieties. However, noise in hyperspectral images and the underutilization of spatial information by most methods are two challenging issues. In this paper, we present a new approach, Cost-sensitive K-Nearest Neighbor using hyperspectral imaging (CSKNN), to address them. First, we fuse the 128 bands acquired by the hyperspectral imaging equipment to obtain hyperspectral images of wheat grains and employ a central regionalization strategy to extract the region of interest. We then use a smoothing denoising strategy to remove noise from the hyperspectral images and improve the saliency of the target grains. Furthermore, we consider the characteristics of different bands and use linear discriminant analysis to compress features, reducing intra-class differences and increasing inter-class differences. Finally, we propose a cost-sensitive KNN for training and testing on wheat varieties. Experiments on wheat datasets of different strains and varieties from the same region show that CSKNN achieves high classification accuracies of 98.09% and 97.45%, outperforming state-of-the-art methods. [ABSTRACT FROM AUTHOR]
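The final classification stage can be sketched as a cost-weighted k-NN vote. This is one hypothetical reading of "cost-sensitive" (the paper's exact cost formulation is not reproduced here), and the LDA compression step is omitted.

```python
import numpy as np

def cs_knn_predict(X_train, y_train, x, k=3, cost=None):
    """k-NN in which each neighbor's vote is scaled by the cost of
    misclassifying its class, so high-cost classes pull harder."""
    cost = cost or {}
    dists = np.linalg.norm(X_train - x, axis=1)
    votes = {}
    for i in np.argsort(dists)[:k]:
        c = int(y_train[i])
        votes[c] = votes.get(c, 0.0) + cost.get(c, 1.0)  # default cost 1
    return max(votes, key=votes.get)
```

With uniform costs this reduces to an ordinary majority vote; raising one class's cost lets a minority of its neighbors outvote the rest.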
- Published: 2023
8. Tensor based low rank representation of hyperspectral images for wheat seeds varieties identification.
- Authors: An, Jinliang; Zhang, Chen; Zhou, Ling; Jin, Songlin; Zhang, Ziyang; Zhao, Wenyi; Pan, Xipeng; Zhang, Weidong
- Subjects: Wheat seeds; Wheat; Image representation; Spectral imaging; Feature extraction; Identification; Calculus of tensors; Seeds
- Abstract
Hyperspectral image (HSI) based methods are widely used to identify seed varieties with high accuracy. However, the excessive number of spectral bands in an HSI may contain redundant information and degrade model performance. To address this challenge, we propose a novel feature extraction method called low rank tensor approximation (LRTA). Unlike traditional methods, LRTA extracts joint discriminative information from all wheat seeds in the hyperspectral images in 3-order tensor form, preserving their intrinsic structure. Our model has three key steps: extracting the region of interest from the hyperspectral images and representing the average spectral information in tensor form, using LRTA to extract jointly discriminative information, and feeding this information into a classifier to identify seed varieties. Experiments on our proposed dataset show that the proposed method improves on conventional methods by 4% on average. Our method shows promise for improving the accuracy of seed identification while reducing the dimensionality of HSIs. [ABSTRACT FROM AUTHOR]
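The low rank tensor approximation step can be sketched with a truncated higher-order SVD (HOSVD). This generic projection is a plausible stand-in only, since the abstract does not specify the exact decomposition used.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: rebuild a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def lrta(T, ranks):
    """Truncated HOSVD: project each mode of T onto its top singular
    vectors, yielding a low-multilinear-rank approximation of T."""
    approx = T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        P = U[:, :r] @ U[:, :r].T            # rank-r projector for this mode
        approx = fold(P @ unfold(approx, mode), mode, approx.shape)
    return approx
```

A tensor that is exactly low-rank in every mode passes through unchanged; for real HSI data the projection instead discards the small-variance (presumed redundant) directions in each mode.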
- Published: 2023
9. MCANet: Multi-channel attention network with multi-color space encoder for underwater image classification.
- Authors: Li, Guohou; Wang, Fangyuan; Zhou, Ling; Jin, Songlin; Xie, Xiwang; Ding, Chang; Pan, Xipeng; Zhang, Weidong
- Subjects: Image recognition (computer vision); Color space; Image intensifiers; Attention; Video coding
- Abstract
Underwater images suffer multiple quality degradation issues due to the complex underwater environment. Unfortunately, currently available underwater image enhancement methods do not handle every degradation type, so it is important to study enhancement methods dedicated to specific degradation categories for underwater vision tasks. To tackle these issues, we propose a Multi-channel Attention Network (MCANet) with a multi-color space encoder, which can guide the design of such dedicated underwater image enhancement methods. Specifically, we first use a multi-color space encoding method to fully integrate the advantages of features in different color spaces. Then, we obtain global and local deep features of images across multiple dimensions through a multi-channel attention path aggregation strategy. Finally, we form the MCANet architecture by embedding and stacking multi-channel attention modules to continuously strengthen the perception of image features. Experimental results show that the classification accuracy of MCANet reaches 98.737%. [ABSTRACT FROM AUTHOR]
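The multi-color space encoding step can be sketched by concatenating RGB with another color space. HSV via the standard-library `colorsys` is an illustrative choice here; the paper's actual set of color spaces is not specified in the abstract.

```python
import colorsys

import numpy as np

def multi_color_encode(img):
    """Stack HSV planes onto an RGB image (values in [0, 1]) so later
    layers see complementary color representations of each pixel."""
    h, w, _ = img.shape
    hsv = np.array([colorsys.rgb_to_hsv(*img[i, j])
                    for i in range(h) for j in range(w)]).reshape(h, w, 3)
    return np.concatenate([img, hsv], axis=2)
```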
- Published: 2023
10. Feature selection and cascade dimensionality reduction for self-supervised visual representation learning.
- Authors: Qu, Peixin; Jin, Songlin; Tian, Yongqin; Zhou, Ling; Zheng, Ying; Zhang, Weidong; Xu, Yibo; Pan, Xipeng; Zhao, Wenyi
- Subjects: Feature selection; Visual learning; Secure Sockets Layer (computer network protocol); Deep learning
- Abstract
Self-supervised visual representation learning focuses on capturing comprehensive features by exploiting unlabeled datasets. However, existing contrastive-learning-based SSL frameworks suffer from high computational cost and unsatisfactory performance. To handle these issues, we present a novel single-branch SSL method, called APNet, that incorporates an adaptive feature selection and activation module and a progressive cascade dimensionality reduction module. Specifically, our method first fully exploits the unlabeled datasets and extracts intra- and inter-image information by introducing montage images. In addition, a novel adaptive feature selection and activation module is designed to generate the most comprehensive features. Besides, a progressive cascade dimensionality reduction module is proposed to capture the most representative features from latent vectors through cascaded dimensionality increasing-decreasing operations. Extensive experiments demonstrate the robustness and effectiveness of APNet: it exceeds MoCo-v3 by 3.1% on the ImageNet-100 dataset while consuming only half the computation. Code is available at https://github.com/AI-TYQ/APNet.
• We embed feature selection and dimensionality reduction modules into an SSL framework.
• We design an attention module that selects and activates representative features.
• We introduce a dimensionality reduction module to retain discriminative features.
• We demonstrate the advantages of the above contributions through experiments.
[ABSTRACT FROM AUTHOR]
- Published: 2023
11. MCI-Net: Multi-scale context integrated network for liver CT image segmentation.
- Authors: Xie, Xiwang; Pan, Xipeng; Shao, Feng; Zhang, Weidong; An, Jubai
- Subjects: Computed tomography; Liver; Computer-assisted image analysis (medicine); Neural circuitry
- Abstract
Owing to varied object scales and high similarity with the surrounding organs (e.g., kidney, stomach, and spleen), it is difficult to accurately segment the liver region from abdominal computed tomography images. In this study, we propose a multi-scale context integration network, called MCI-Net, for liver image segmentation. Specifically, we first design a simplified residual module to prevent network degradation. Given the scale variability of objects, we propose a multi-scale context extraction module that combines four cascaded branches of hybrid dilated convolutions to capture broader and deeper features. In addition, we introduce an external attention mechanism based on two external, learnable, shared memory units, which helps perceive the most discriminative information and suppress redundant features. Finally, we provide a boundary correction block to further improve the localization of boundary information. Extensive experiments on two liver CT benchmark datasets qualitatively and quantitatively illustrate that our method improves liver segmentation accuracy and outperforms several state-of-the-art methods. [ABSTRACT FROM AUTHOR]
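The external attention mechanism mentioned above can be sketched with two small shared memory matrices. The double-normalization scheme below follows the common external-attention formulation and is an assumption, not taken from this paper.

```python
import numpy as np

def external_attention(F, Mk, Mv):
    """F: (n, d) flattened pixel features; Mk, Mv: (S, d) learnable memory
    units shared across all inputs. Returns refined (n, d) features."""
    attn = F @ Mk.T                                   # (n, S) similarities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)     # softmax over memory slots
    attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-9)  # double normalization
    return attn @ Mv                                  # read out from value memory
```

Because `Mk` and `Mv` are shared across the whole dataset rather than computed per image, this costs O(n·S) instead of the O(n²) of self-attention.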
- Published: 2022
Discovery Service for Jio Institute Digital Library