7 results for "Lu, Huimin"
Search Results
2. MCE: Medical Cognition Embedded in 3D MRI feature extraction for advancing glioma staging.
- Author
Xue, Han; Lu, Huimin; Wang, Yilong; Li, Niya; Wang, Guizeng
- Subjects
FEATURE extraction; MACHINE learning; DEEP learning; IMAGE analysis; GLIOMAS; MAGNETIC resonance imaging
- Abstract
In recent years, various data-driven algorithms have been applied to the classification and staging of brain glioma in MRI. However, the restricted availability of brain glioma MRI data presents challenges for purely data-driven deep learning algorithms in extracting high-quality features and capturing their complex patterns. Moreover, analysis methods designed for 2D data require the selection of ideal tumor image slices, which does not align with practical clinical scenarios. Our research proposes a novel brain glioma staging model for 3D data, the Medical Cognition Embedded (MCE) model, which embeds knowledge characteristics into data-driven approaches to enhance the quality of feature extraction. The approach includes the following key components: (1) For deep feature extraction, drawing on the imaging characteristics of different MRI sequences, we design two methods at the algorithmic and strategic levels that mimic how medical professionals interpret images during film reading; (2) we conduct extensive Radiomics feature extraction, capturing relevant features such as texture, morphology, and grayscale distribution; (3) by referencing key points in radiological diagnosis, the Radiomics feature experimental results, and the imaging characteristics of various MRI sequences, we manually create diagnostic features (Diag-Features). The efficacy of the proposed methodology is rigorously evaluated on the publicly available BraTS2018 and BraTS2020 datasets. Compared with well-known purely data-driven models, our method achieves higher accuracy, recall, and precision, reaching 96.14%, 93.4%, 97.06% and 97.57%, 92.80%, 95.96% on the two datasets, respectively. [ABSTRACT FROM AUTHOR]
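As a rough illustration of the grayscale-distribution Radiomics features this abstract mentions, the sketch below computes first-order statistics from a single 2D slice. It is a minimal stand-in, not the authors' pipeline; the function name and bin count are illustrative assumptions.

```python
import math

def grayscale_features(image, levels=8):
    """First-order grayscale-distribution features from a 2D slice.

    `image` is a list of rows of integer intensities in [0, 255].
    Returns the mean, variance, and histogram entropy, the kind of
    Radiomics descriptors the abstract refers to.
    """
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    # Quantize intensities into `levels` bins, then Shannon entropy.
    hist = [0] * levels
    for p in pixels:
        hist[min(p * levels // 256, levels - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return {"mean": mean, "variance": variance, "entropy": entropy}
```

In a full Radiomics setup these first-order statistics would sit alongside texture and morphology descriptors computed per tumor region.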
- Published
- 2024
- Full Text
- View/download PDF
3. ViT-Cap: A Novel Vision Transformer-Based Capsule Network Model for Finger Vein Recognition.
- Author
Li, Yupeng; Lu, Huimin; Wang, Yifan; Gao, Ruoran; Zhao, Chengcheng
- Subjects
CAPSULE neural networks; FINGERS; VEINS; ERROR rates; COMPUTER vision
- Abstract
Finger vein recognition has been widely studied due to its advantages, such as high security, convenience, and living-body recognition. At present, the performance of the most advanced finger vein recognition methods depends largely on the quality of the finger vein images. However, when collecting finger vein images, factors such as deviations in finger position and ambient lighting often make the captured images relatively low in quality, which directly degrades recognition performance. In this study, we propose a new model for finger vein recognition that combines the vision transformer architecture with the capsule network (ViT-Cap). The model explores finger vein image information through global and local attention and selectively focuses on important finger vein features. First, we split finger vein images into patches and linearly embed each patch. Second, the resulting vector sequence is fed into a transformer encoder to extract finger vein features. Third, the feature vectors generated by the vision transformer module are fed into the capsule module for further training. We tested the proposed method on four publicly available finger vein databases. Experimental results show that the average recognition accuracy of the proposed model is above 96%, better than the original vision transformer, capsule network, and other advanced finger vein recognition algorithms. Moreover, the equal error rate (EER) of our model achieves state-of-the-art performance, reaching less than 0.3% on the FV-USM dataset, which demonstrates the effectiveness and reliability of the proposed model in finger vein recognition. [ABSTRACT FROM AUTHOR]
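The patch-splitting and linear-embedding steps described for ViT-Cap follow the standard vision transformer recipe. A minimal pure-Python sketch with toy sizes (an illustrative assumption, not the authors' implementation) might look like:

```python
def image_to_patches(image, patch):
    """Split a 2D image (list of rows) into flattened, non-overlapping patches."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            patches.append([image[r + i][c + j]
                            for i in range(patch) for j in range(patch)])
    return patches

def linear_embed(patches, weights):
    """Project every flattened patch with a shared weight matrix.

    Each row of `weights` yields one output dimension (its dot product
    with the patch vector), mimicking the learned linear embedding.
    """
    return [[sum(w * x for w, x in zip(row, p)) for row in weights]
            for p in patches]
```

In ViT-Cap the embedded sequence would then pass through the transformer encoder and on to the capsule module.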
- Published
- 2022
- Full Text
- View/download PDF
4. PAMSNet: A medical image segmentation network based on spatial pyramid and attention mechanism.
- Author
Feng, Yuncong; Zhu, Xiaoyan; Zhang, Xiaoli; Li, Yang; Lu, Huimin
- Subjects
COMPUTER-assisted image analysis (Medicine); DIAGNOSTIC imaging; FEATURE extraction; IMAGE analysis; PYRAMIDS; IMAGE segmentation
- Abstract
Image segmentation of diseased regions can aid clinical diagnosis and treatment in medical image analysis. Due to the complexity of lesion features (e.g., size, location, and morphology) and the high similarity between the background and the target area in medical images, semantic features are difficult to extract completely. To tackle these problems, we propose a novel medical image segmentation network, PAMSNet, based on the spatial pyramid and attention mechanism. Using efficient pyramid attention and channel-spatial attention modules, the proposed method fuses the extracted multi-scale spatial information with the local features extracted by the encoder to supplement image details. In addition, a Spatial Pyramid-Coordinate Attention (SPCA) module is introduced in the bottleneck layer to obtain larger receptive-field information and enhance feature extraction. We conducted qualitative and quantitative evaluations on four public datasets: ISIC2018, Lung segmentation, Kvasir-SEG, and ISLES2022. The DSC segmentation accuracy was 87.86%, 98.18%, 82.43%, and 87.37%, respectively. An ablation study of each part of PAMSNet proves the validity of each component, and comparison with state-of-the-art methods on different indicators proves the superiority of the network.
• PAMSNet extracts detailed spatial information effectively during encoding.
• It emphasizes edge and detail features for better segmentation.
• The model significantly improves performance on lesion segmentation. [ABSTRACT FROM AUTHOR]
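As a toy illustration of the channel-attention idea behind PAMSNet's attention modules, the sketch below gates each channel by its global average activation. It is a simplified squeeze-and-excitation-style stand-in under stated assumptions, not the paper's SPCA module:

```python
import math

def channel_attention(feature_maps):
    """Toy channel attention: gate each channel by a sigmoid of its
    global average activation, then rescale the whole map.

    `feature_maps` is a list of channels; each channel is a 2D map
    (list of rows of floats).
    """
    out = []
    for ch in feature_maps:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = 1.0 / (1.0 + math.exp(-avg))   # squeeze -> sigmoid gate
        out.append([[v * gate for v in row] for row in ch])
    return out
```

The effect is that weakly activated channels are suppressed while strongly activated ones pass nearly unchanged; learned attention modules replace the fixed sigmoid with trainable weights.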
- Published
- 2024
- Full Text
- View/download PDF
5. Aspect-Based Sentiment Analysis of User Reviews in 5G Networks.
- Author
Zhang, Yin; Lu, Huimin; Jiang, Chi; Li, Xin; Tian, Xinliang
- Subjects
DEEP learning; SENTIMENT analysis; 5G networks; MACHINE learning; CONCEPT learning; ALGORITHMS
- Abstract
Aspect-based sentiment analysis can provide consumers with clear and objective sentiment recommendations drawn from massive amounts of data, helping to overcome the ambiguity of subjective human judgments. However, the robustness and accuracy of existing sentiment analysis methods still need improvement. In this article, deep learning and machine learning techniques are combined to construct a sentiment analysis model based on ensemble learning. The proposed model is applied to sentiment classification of user reviews about restaurants, a representative location-based and user-oriented application in 5G networks. Specifically, a multi-aspect labeling model is established, and an ensemble aspect-based model is proposed, following the concept of ensemble learning, to predict the consumer's true consumption feelings and willingness to consume again, and to improve the predictive performance of the machine learning algorithm within a single domain. [ABSTRACT FROM AUTHOR]
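The ensemble-learning idea of combining several base classifiers' sentiment predictions can be sketched as a hard-voting scheme. The keyword rules below are hypothetical stand-ins for the paper's trained models:

```python
from collections import Counter

def ensemble_predict(classifiers, review):
    """Hard-voting ensemble: each base classifier labels the review,
    and the most common label wins."""
    votes = [clf(review) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical keyword rules standing in for the trained base models.
rule_a = lambda r: "positive" if "great" in r else "negative"
rule_b = lambda r: "positive" if "tasty" in r else "negative"
rule_c = lambda r: "negative" if "slow" in r else "positive"
```

A real aspect-based ensemble would vote per aspect (food, service, price) rather than per review, but the aggregation step is the same.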
- Published
- 2021
- Full Text
- View/download PDF
6. Residual Gabor convolutional network and FV-Mix exponential level data augmentation strategy for finger vein recognition.
- Author
Wang, Yifan; Lu, Huimin; Qin, Xiwen; Guo, Jianwei
- Subjects
FINGERS; DATA augmentation; VEINS; GABOR filters; DEEP learning
- Abstract
Using deep learning to improve the performance of finger vein recognition has become part of mainstream research. Currently, the performance of finger vein recognition systems is limited by insufficient finger vein training samples, which leads to insufficient feature learning and weak model generalization. To solve these problems, we first propose a simple and effective finger vein data augmentation strategy named FV-Mix, which applies grayscale normalization and linear mixing to fine finger vein image regions of interest (ROIs). It can accomplish exponential-level data augmentation on the training samples, and the augmented data represent a more complete dataset. Second, a residual Gabor convolutional network (RGCN) is designed for finger vein recognition, which includes a residual Gabor convolutional layer (RGCL) and a dense semantic analysis module (DSAM). The RGCL replaces the shallow convolutional layers in the network, using the characteristics of the Gabor filter to enhance the scale and direction information in shallow pattern features. The enhanced deep features are then further extracted and analyzed by the DSAM, which assists the final model in classifying and recognizing finger vein images. To verify the effectiveness of our work, five publicly available finger vein image datasets are used and a large number of comparison experiments are designed; the proposed FV-Mix strategy, RGCL module, and DSAM module are fully validated. The experimental results show that, on these five datasets, the average recognition accuracy and equal error rate (EER) of the proposed RGCN are 99.22% and 0.188%, respectively, achieving competitive performance compared with current state-of-the-art work.
• Novel residual Gabor convolutional network improves recognition system performance.
• Novel residual Gabor convolutional layer enhances finger vein pattern features.
• Novel dense semantic analysis module assists with model classification.
• Exponential-level augmentation for finger vein image samples. [ABSTRACT FROM AUTHOR]
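A minimal sketch of the normalization and linear-mixing steps that FV-Mix applies to ROIs, assuming min-max grayscale normalization (the published strategy may differ in detail):

```python
def normalize(image):
    """Min-max grayscale normalization of a 2D ROI to [0, 1]."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1          # avoid division by zero on flat ROIs
    return [[(p - lo) / scale for p in row] for row in image]

def fv_mix(roi_a, roi_b, lam=0.5):
    """Pixel-wise convex combination of two same-sized, normalized ROIs."""
    return [[lam * a + (1 - lam) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(roi_a, roi_b)]
```

Mixing each pair drawn from N training ROIs at several mixing coefficients multiplies the sample count far beyond N, which is the sense in which the abstract calls the augmentation exponential-level.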
- Published
- 2023
- Full Text
- View/download PDF
7. Incremental learning for exudate and hemorrhage segmentation on fundus images.
- Author
He, Wanji; Wang, Xin; Wang, Lin; Huang, Yelin; Yang, Zhiwen; Yao, Xuan; Zhao, Xin; Ju, Lie; Wu, Liao; Wu, Lin; Lu, Huimin; Ge, Zongyuan
- Subjects
MACHINE learning; EXUDATES & transudates; DIABETIC retinopathy; KNOWLEDGE transfer; HEMORRHAGE; DEEP learning; IMAGE segmentation
- Abstract
Deep-learning-based segmentation methods have shown great success across many medical image applications. However, their customary training paradigms suffer from a well-known constraint: the requirement of pixel-wise annotations, which is labor-intensive, especially when new classes must be learned incrementally. Contemporary incremental learning focuses on dealing with catastrophic forgetting in image classification and object detection. In contrast, this work aims to promote the performance of the current model in learning new classes with the help of the previous model, in the context of incremental learning for instance segmentation. This enormously benefits the current model when labeled data is limited because of the high labor intensity of manual labeling. In this paper, for the Diabetic Retinopathy (DR) lesion segmentation problem, a novel incremental segmentation paradigm is proposed to distill the knowledge of the previous model into the current model. Notably, we propose several approaches to the class-based alignment of the probability maps of the current and previous models, accounting for the difference between the background classes of the two models. The experimental evaluation on DR lesion segmentation shows the effectiveness of the proposed approaches.
• Proposes a scheme for incremental segmentation.
• Uses knowledge distillation to transfer knowledge from model to model.
• Improves segmentation performance using incremental learning.
• Proposes a probability-map alignment scheme to integrate two different class maps.
• The proposed method generalizes to any new classes. [ABSTRACT FROM AUTHOR]
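A simplified sketch of the probability-map alignment idea: fold the current model's new-class probabilities into its background channel so the two models' per-pixel distributions become comparable, then penalize their divergence. Channel indices and the KL penalty are illustrative assumptions, not the paper's exact losses:

```python
import math

def merge_background(probs, new_class_ids):
    """Fold the current model's new-class probabilities into its background
    channel (index 0) so its per-pixel distribution lines up with the
    previous model's smaller class set."""
    merged = list(probs)
    for i in sorted(new_class_ids, reverse=True):
        merged[0] += merged[i]
        del merged[i]
    return merged

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two per-pixel class distributions; used here as
    the distillation penalty after alignment."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

During incremental training this penalty would be summed over pixels and added to the ordinary segmentation loss, so the current model stays close to the previous one on old classes while learning new ones.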
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library