16 results for "Lu, Huimin"
Search Results
2. MCE: Medical Cognition Embedded in 3D MRI feature extraction for advancing glioma staging.
- Author
Xue, Han, Lu, Huimin, Wang, Yilong, Li, Niya, and Wang, Guizeng
- Subjects
*FEATURE extraction, *MACHINE learning, *DEEP learning, *IMAGE analysis, *GLIOMAS, *MAGNETIC resonance imaging
- Abstract
In recent years, various data-driven algorithms have been applied to the classification and staging of brain glioma in MRI. However, the restricted availability of brain glioma MRI data for purely data-driven deep learning algorithms has made it challenging to extract high-quality features and capture their complex patterns. Moreover, analysis methods designed for 2D data necessitate the selection of ideal tumor image slices, which does not align with practical clinical scenarios. Our research proposes a novel brain glioma staging model for 3D data, the Medical Cognition Embedded (MCE) model. This model embeds knowledge characteristics into data-driven approaches to enhance the quality of feature extraction. Our approach includes the following key components: (1) For deep feature extraction, drawing upon the imaging characteristics of different MRI sequences, we design two methods at the algorithmic and strategic levels to mimic how medical professionals interpret images during film reading; (2) We conduct extensive Radiomics feature extraction, capturing relevant features such as texture, morphology, and grayscale distribution; (3) By referencing key points in radiological diagnosis, Radiomics feature experimental results, and the imaging characteristics of various MRI sequences, we manually create diagnostic features (Diag-Features). The efficacy of the proposed methodology is rigorously evaluated on the publicly available BraTS2018 and BraTS2020 datasets. Compared with the most well-known purely data-driven models, our method achieved higher accuracy, recall, and precision, reaching 96.14%, 93.4%, and 97.06% and 97.57%, 92.80%, and 95.96% on the two datasets, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
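The MCE pipeline above combines deep features with Radiomics features such as grayscale distribution. As a rough, hypothetical sketch of the first-order grayscale statistics a Radiomics pass might compute over a 3D volume (the function name and toy volume are illustrative, not from the paper):

```python
import math

def first_order_features(volume):
    """Simple first-order radiomics-style features (mean, variance,
    intensity entropy) over a flattened 3D volume of voxel values."""
    voxels = [v for plane in volume for row in plane for v in row]
    n = len(voxels)
    mean = sum(voxels) / n
    var = sum((v - mean) ** 2 for v in voxels) / n
    # Histogram-based intensity entropy over discrete gray levels.
    counts = {}
    for v in voxels:
        counts[v] = counts.get(v, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": var, "entropy": entropy}

# Toy 2x2x2 "MRI" volume with two intensity levels.
vol = [[[0, 0], [1, 1]], [[0, 1], [1, 0]]]
feats = first_order_features(vol)
```

Real radiomics toolkits add many more descriptors (texture matrices, shape features), but all start from voxel statistics like these.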
3. FDCNet: filtering deep convolutional network for marine organism classification
- Author
Lu, Huimin, Li, Yujie, Uemura, Tomoki, Ge, Zongyuan, Xu, Xing, He, Li, Serikawa, Seiichi, and Kim, Hyoungseop
- Published
- 2018
- Full Text
- View/download PDF
4. Multiscale Shared Learning for Fault Diagnosis of Rotating Machinery in Transportation Infrastructures.
- Author
Chen, Zhe, Tian, Shiqing, Shi, Xiaotao, and Lu, Huimin
- Abstract
Rotating machinery is ubiquitous, and its failures are a major cause of failures in transportation infrastructures. Most fault-diagnosis methods for rotating machinery are based on vibration-signal analysis because vibrations directly reflect the transient regime of machinery elements. This article proposes a novel multiscale shared-learning network (MSSLN) architecture to extract and classify the fault features inherent to the multiscale factors of vibration signals. The architecture fuses layer-wise activations with multiscale flows to enable the network to fully learn a shared representation that is consistent across multiscale factors. This characteristic helps MSSLN provide more faithful diagnoses than existing single-scale and multiscale methods. Experiments on bearing and gearbox datasets are used to evaluate fault-diagnosis performance for transportation infrastructures. Extensive experimental results and comprehensive analyses demonstrate the superiority of the proposed MSSLN in fault diagnosis for bearings and gearboxes, two foundational elements of transportation infrastructures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
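The MSSLN abstract centers on multiscale factors of vibration signals. One common way to obtain such multiscale views, shown here purely as an assumed illustration (not the paper's exact decomposition), is coarse-graining by non-overlapping average pooling:

```python
def multiscale_views(signal, scales=(1, 2, 4)):
    """Coarse-grain a 1D vibration signal at several scales by
    non-overlapping average pooling, yielding one view per scale."""
    views = {}
    for s in scales:
        views[s] = [sum(signal[i:i + s]) / s
                    for i in range(0, len(signal) - s + 1, s)]
    return views

# Toy vibration samples; scale 1 keeps the raw signal.
sig = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
v = multiscale_views(sig)
```

A multiscale network would feed each view to its own branch and fuse the activations, as the abstract describes.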
5. Global-PBNet: A Novel Point Cloud Registration for Autonomous Driving.
- Author
Zheng, Yuchao, Li, Yujie, Yang, Shuo, and Lu, Huimin
- Abstract
Registration plays a decisive role in multiple intelligent transport systems. Deep-learning-based methods enhance the robustness and effectiveness of the preliminary registration stage, although such algorithms easily fall into local optima when refining the final accuracy. Conversely, traditional optimization-based methods perform more reliably in terms of precision, but their performance still depends on the quality of the initialization. To solve these problems, we propose PBNet, which combines a point cloud network with a global optimization method. The framework uses object feature information to perform high-precision coarse registration and then searches the entire 3D motion space using branch-and-bound and iterative closest point methods. The evaluation results show that PBNet significantly reduces the influence of initial values on registration and is robust against noise and outliers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
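PBNet's refinement stage searches the motion space with branch-and-bound and iterative closest point (ICP) methods. A heavily simplified, translation-only ICP iteration in 2D (an assumption for illustration, not the paper's algorithm) looks like:

```python
def icp_translation_step(src, dst):
    """One simplified ICP iteration (translation only): match each
    source point to its nearest destination point, then shift the
    source cloud by the mean residual of the matches."""
    def nearest(p):
        return min(dst, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    matches = [nearest(p) for p in src]
    dx = sum(q[0] - p[0] for p, q in zip(src, matches)) / len(src)
    dy = sum(q[1] - p[1] for p, q in zip(src, matches)) / len(src)
    return [(p[0] + dx, p[1] + dy) for p in src]

# A source cloud that is the destination shifted down by 1.
src = [(0.0, 0.0), (1.0, 0.0)]
dst = [(0.0, 1.0), (1.0, 1.0)]
moved = icp_translation_step(src, dst)
```

Full ICP also estimates rotation and iterates to convergence; branch-and-bound wraps such steps to escape local optima.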
6. ViT-Cap: A Novel Vision Transformer-Based Capsule Network Model for Finger Vein Recognition.
- Author
Li, Yupeng, Lu, Huimin, Wang, Yifan, Gao, Ruoran, and Zhao, Chengcheng
- Subjects
CAPSULE neural networks, FINGERS, VEINS, ERROR rates, COMPUTER vision
- Abstract
Finger vein recognition has been widely studied due to its advantages, such as high security, convenience, and living-body recognition. At present, the performance of the most advanced finger vein recognition methods largely depends on the quality of finger vein images. However, when collecting finger vein images, factors such as deviations in finger position and ambient lighting often make the quality of the captured images relatively low, which directly affects recognition performance. In this study, we proposed a new model for finger vein recognition that combines the vision transformer architecture with the capsule network (ViT-Cap). The model can explore finger vein image information based on global and local attention and selectively focus on important finger vein feature information. First, we split finger vein images into patches and then linearly embedded each of the patches. Second, the resulting vector sequence was fed into a transformer encoder to extract the finger vein features. Third, the feature vectors generated by the vision transformer module were fed into the capsule module for further training. We tested the proposed method on four publicly available finger vein databases. Experimental results showed that the average recognition accuracy of the algorithm based on the proposed model was above 96%, which was better than the original vision transformer, capsule network, and other advanced finger vein recognition algorithms. Moreover, the equal error rate (EER) of our model achieved state-of-the-art performance, reaching less than 0.3% on the FV-USM dataset, which proves the effectiveness and reliability of the proposed model in finger vein recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
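The first step described, splitting finger vein images into patches before linear embedding, can be sketched as follows (patch extraction only; the learned linear embedding and position encoding are omitted):

```python
def image_to_patches(img, p):
    """Split an HxW grayscale image (list of rows) into
    non-overlapping pxp patches, each flattened row-major into a
    vector, as in the first stage of a vision transformer."""
    h, w = len(img), len(img[0])
    patches = []
    for r in range(0, h, p):
        for c in range(0, w, p):
            patches.append([img[r + i][c + j]
                            for i in range(p) for j in range(p)])
    return patches

# A 4x4 toy image split into four 2x2 patches.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
patches = image_to_patches(img, 2)
```

Each flattened patch would then be multiplied by a learned embedding matrix before entering the transformer encoder.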
7. PAMSNet: A medical image segmentation network based on spatial pyramid and attention mechanism.
- Author
Feng, Yuncong, Zhu, Xiaoyan, Zhang, Xiaoli, Li, Yang, and Lu, Huimin
- Subjects
COMPUTER-assisted image analysis (Medicine), DIAGNOSTIC imaging, FEATURE extraction, IMAGE analysis, PYRAMIDS, IMAGE segmentation
- Abstract
The image segmentation of diseases can help clinical diagnosis and treatment in medical image analysis. Due to the complexity of lesion features (e.g., size, location, and morphology) and the high similarity between the background and the target area in medical images, semantic features are difficult to extract completely. To tackle these problems, we propose PAMSNet, a novel medical image segmentation network based on the spatial pyramid and attention mechanism. By using efficient pyramid attention and channel spatial attention modules, the proposed method fuses the extracted multi-scale spatial information with the local features extracted by the encoder to supplement image details. In addition, the Spatial Pyramid-Coordinate Attention (SPCA) module is introduced in the bottleneck layer to obtain larger receptive-field information and enhance feature extraction. We conducted qualitative and quantitative evaluations on four public datasets: ISIC2018, Lung segmentation, Kvasir-SEG, and ISLES2022. The DSC segmentation accuracy was 87.86%, 98.18%, 82.43%, and 87.37%, respectively. The ablation study of each part of PAMSNet proves the validity of each component, and the comparison with state-of-the-art methods on different indicators proves the superiority of the network. • PAMSNet extracts detailed spatial information effectively during encoding. • It emphasizes edge and detail features for better segmentation. • The model improves performance significantly on lesion segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
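A spatial pyramid, the structure behind PAMSNet's pyramid modules, pools a feature map over grids of several sizes and concatenates the results. A minimal illustrative version (not the paper's exact module):

```python
def spatial_pyramid_pool(fmap, levels=(1, 2)):
    """Max-pool a square feature map over an n x n grid for each
    pyramid level and concatenate the pooled values into one
    multi-scale descriptor vector."""
    size = len(fmap)
    out = []
    for n in levels:
        step = size // n
        for r in range(n):
            for c in range(n):
                cells = [fmap[r * step + i][c * step + j]
                         for i in range(step) for j in range(step)]
                out.append(max(cells))
    return out

# 2x2 toy feature map: level 1 gives the global max, level 2 each cell.
fmap = [[1, 2], [3, 4]]
vec = spatial_pyramid_pool(fmap)
```

The fixed-length output regardless of input size is what lets pyramid modules inject multi-scale context into a bottleneck.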
8. Aspect-Based Sentiment Analysis of User Reviews in 5G Networks.
- Author
Zhang, Yin, Lu, Huimin, Jiang, Chi, Li, Xin, and Tian, Xinliang
- Subjects
*DEEP learning, *SENTIMENT analysis, *5G networks, *MACHINE learning, *CONCEPT learning, *ALGORITHMS
- Abstract
Aspect-based sentiment analysis can provide consumers with clear and objective sentiment recommendations from massive amounts of data and helps overcome the ambiguity and weaknesses of subjective human judgments. However, the robustness and accuracy of existing sentiment analysis methods must still be improved. In this article, deep learning and machine learning techniques are combined to construct a sentiment analysis model based on ensemble learning ideas. Furthermore, the proposed model is applied to sentiment classification of user reviews about restaurants, which are representative location-based and user-oriented applications in 5G networks. Specifically, a multi-aspect labeling model is established, and an ensemble aspect-based model is proposed based on the concept of ensemble learning to predict the consumer's true consumption feelings and willingness to consume again, and to improve machine learning performance based on the developed model. The predictive performance of the algorithm lies within a single domain. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
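Ensemble learning, the core idea of the model above, can be illustrated with a toy majority-vote combiner; the rule-based "classifiers" below are hypothetical stand-ins for the trained base models, not anything from the paper:

```python
from collections import Counter

def ensemble_predict(classifiers, review):
    """Combine several base classifiers by majority vote, the
    simplest form of an ensemble sentiment model."""
    votes = [clf(review) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Toy rule-based stand-ins for trained sentiment models.
def food_model(text):
    return "positive" if "great" in text else "negative"

def taste_model(text):
    return "positive" if "tasty" in text else "negative"

def service_model(text):
    return "negative" if "slow" in text else "positive"

label = ensemble_predict([food_model, taste_model, service_model],
                         "great food but slow service")
```

Aspect-based ensembles refine this by voting per aspect (food, service, ambience) rather than per review.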
9. Construction of a Hierarchical Feature Enhancement Network and Its Application in Fault Recognition.
- Author
Chen, Zhe, Lu, Huimin, Tian, Shiqing, Qiu, Junlin, Kamiya, Tohru, Serikawa, Seiichi, and Xu, Lizhong
- Abstract
The Industrial Internet of Things (IIoT) provides significant support for observing and controlling industrial machinery. In this article, a novel hierarchical feature enhancement network (HFEN) is proposed by combining signal processing and representation learning. The signal processing block extracts features with definite physical significance. Then, the representability of the physical features is improved by connecting stacked denoising autoencoders and squeeze-and-excitation networks. A novel two-stream architecture is designed for HFEN to fuse the two types of features. Consequently, HFEN can extract features that can be analyzed for physical significance and that are also representative in terms of recognizable patterns. The experimental results show that the performance of HFEN is satisfactory in terms of accuracy and efficiency compared to other methods. Finally, this article also aims to demonstrate the potential of a new pairing that fuses model-driven and data-driven strategies for the IIoT. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
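HFEN improves feature representability partly through squeeze-and-excitation networks. The mechanism, sketched minimally with hand-set scalar weights (an assumption for illustration, not the paper's trained block):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(channels, w1, w2):
    """Minimal squeeze-and-excitation: squeeze each channel to its
    mean, excite through a tiny ReLU+sigmoid gate with scalar
    weights, then rescale each channel by its gate value."""
    squeezed = [sum(ch) / len(ch) for ch in channels]            # squeeze
    gates = [sigmoid(w2 * max(0.0, w1 * s)) for s in squeezed]   # excite
    return [[v * g for v in ch] for ch, g in zip(channels, gates)]

# Two channels: one informative, one flat; the gate keeps the first
# near-unchanged and halves the (zero) second.
feats = [[1.0, 3.0], [0.0, 0.0]]
out = squeeze_excite(feats, w1=1.0, w2=100.0)
```

In a real SE block the two weights are small learned fully connected layers over all channels; the recalibration idea is the same.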
10. SCCGAN: Style and Characters Inpainting Based on CGAN.
- Author
Liu, Ruijun, Wang, Xiangshang, Lu, Huimin, Wu, Zhaohui, Fan, Qian, Li, Shanxi, and Jin, Xin
- Subjects
DEEP learning, INPAINTING
- Abstract
With the development of deep learning technology, many deep learning methods have been applied to font recognition and generation. However, few studies focus on the font inpainting problem. This paper is dedicated to repairing damaged fonts in a style-aware way. We propose a font repair method based on CGAN (Conditional Generative Adversarial Nets). The content accuracy and style similarity of the repaired image are used as evaluation indices to assess the accuracy of the restored style font. The font content repaired by the proposed CGAN-based method is similar to the correct content. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
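A CGAN conditions generation on a label. A minimal sketch of that conditioning step, concatenating a noise vector with a one-hot class vector before it enters the generator (illustrative only; the paper's network details are not reproduced here):

```python
def cgan_generator_input(noise, class_id, num_classes):
    """Build a conditional-GAN generator input by concatenating a
    noise vector with a one-hot label, so that generation can be
    conditioned on a desired class (e.g., a font or character)."""
    one_hot = [1.0 if i == class_id else 0.0 for i in range(num_classes)]
    return noise + one_hot

# 2-dim noise conditioned on class 2 of 4.
z = [0.3, -0.7]
g_in = cgan_generator_input(z, class_id=2, num_classes=4)
```

The discriminator receives the same label alongside the image, which is what lets the trained generator repair a glyph in a requested style.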
11. Deep-Sea Organisms Tracking Using Dehazing and Deep Learning.
- Author
Lu, Huimin, Uemura, Tomoki, Wang, Dong, Zhu, Jihua, Huang, Zi, and Kim, Hyoungseop
- Subjects
*AUTOMATIC tracking, *DEEP learning, *REMOTE submersibles, *TURBIDITY, *SQUIDS, *SHARKS, *HAZE
- Abstract
Automatic tracking of deep-sea organisms has rarely been studied because of a lack of training data. However, it is extremely important for underwater robots to recognize and predict the behavior of organisms. In this paper, we first develop a method for underwater real-time recognition and tracking of multiple objects based on "You Only Look Once" (YOLO). This method provides us with a very fast and accurate tracker. First, we remove the haze caused by the turbidity of the water from a captured image. After that, we apply YOLO to recognize and track marine organisms, including shrimp, squid, crab, and shark. The experiments demonstrate that our developed system shows satisfactory performance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
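The pipeline dehazes the captured image before detection. A common single-image dehazing prior, the dark channel, is sketched below as an assumed illustration (the abstract does not name the exact dehazing method used):

```python
def dark_channel(img, patch=3):
    """Dark channel of an RGB image: the minimum over color
    channels followed by a minimum over a local window. Hazy
    regions have a bright dark channel, which dehazing exploits."""
    h, w = len(img), len(img[0])
    min_rgb = [[min(px) for px in row] for row in img]
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(min_rgb[a][b]
                            for a in range(max(0, i - r), min(h, i + r + 1))
                            for b in range(max(0, j - r), min(w, j + r + 1)))
    return out

# 2x2 toy image of (R, G, B) tuples in [0, 1].
img = [[(0.9, 0.8, 0.7), (0.5, 0.6, 0.4)],
       [(0.2, 0.3, 0.1), (0.9, 0.9, 0.9)]]
dc = dark_channel(img)
```

A full dehazer would estimate atmospheric light and transmission from this map, then invert the haze model.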
12. Residual Gabor convolutional network and FV-Mix exponential level data augmentation strategy for finger vein recognition.
- Author
Wang, Yifan, Lu, Huimin, Qin, Xiwen, and Guo, Jianwei
- Subjects
*FINGERS, *DATA augmentation, *VEINS, *GABOR filters, *DEEP learning
- Abstract
Using deep learning to improve the performance of finger vein recognition has become part of mainstream research. Currently, the performance of finger vein recognition systems is limited by insufficient finger vein training samples, which leads to insufficient feature learning and weak model generalization. To solve these problems, first, we propose a simple and effective finger vein data augmentation strategy named FV-Mix, which uses the fine finger vein image region of interest (ROI) for grayscale normalization and linear mixing. It can accomplish exponential-level data augmentation on the training samples, and the augmented data represent a more complete dataset. Second, a residual Gabor convolutional network (RGCN) is designed for finger vein recognition, which includes a residual Gabor convolutional layer (RGCL) and a dense semantic analysis module (DSAM). The RGCL replaces the shallow convolutional layer in the network, using the characteristics of the Gabor filter to enhance the scale and direction information in shallow pattern features. Then, the enhanced deep features are further extracted and analyzed by the DSAM, which assists the final model in classifying and recognizing finger vein images. To verify the effectiveness of our work, five publicly available finger vein image datasets are used, and a large number of comparison experiments are designed. The proposed FV-Mix strategy, RGCL module, and DSAM module were fully validated. The experimental results show that on these five datasets, the average recognition accuracy and equal error rate (EER) of the proposed RGCN are 99.22% and 0.188%, respectively, achieving competitive performance compared with current state-of-the-art work. • Novel residual Gabor convolutional network improves recognition system performance. • Novel residual Gabor convolutional layer enhances finger vein pattern features. • Novel dense semantic analysis module assists with model classification. • Exponential-level augmentation for finger vein image samples. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
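FV-Mix linearly mixes grayscale-normalized ROI images. A mixup-style sketch of the blending step (the exact FV-Mix normalization and pairing scheme are not reproduced here):

```python
def fv_mix(img_a, img_b, lam=0.5):
    """Mixup-style augmentation in the spirit of FV-Mix: blend two
    grayscale-normalized ROI images pixel by pixel with mixing
    coefficient lam in [0, 1]."""
    return [[lam * a + (1.0 - lam) * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two 2x2 normalized "ROIs" blended 25/75.
a = [[0.0, 1.0], [0.5, 0.5]]
b = [[1.0, 0.0], [0.5, 0.5]]
mixed = fv_mix(a, b, lam=0.25)
```

Mixing every pair of training ROIs at multiple coefficients is what yields the exponential growth in distinct augmented samples.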
13. Special issue on cognitive computing for robotic vision.
- Author
Lu, Huimin and Guna, Jože
- Subjects
COGNITIVE robotics, COGNITIVE computing, COMPUTER vision, VISION, ARTIFICIAL intelligence, ROBOTICS, DEEP learning
- Abstract
The development of cognitive computing will keep cross-fertilizing these research areas. Cognitive computing breaks the boundary between two separate fields, neuroscience and computer science. The selected articles have exceptional diversity in terms of cognitive computing and computer vision techniques and applications. [Extracted from the article]
- Published
- 2021
- Full Text
- View/download PDF
14. Correction to: Deep-Sea Organisms Tracking Using Dehazing and Deep Learning.
- Author
Lu, Huimin, Uemura, Tomoki, Wang, Dong, Zhu, Jihua, Huang, Zi, and Kim, Hyoungseop
- Subjects
*DEEP learning
- Abstract
The original version of this article unfortunately contained a mistake in the Affiliation section. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. Incremental learning for exudate and hemorrhage segmentation on fundus images.
- Author
He, Wanji, Wang, Xin, Wang, Lin, Huang, Yelin, Yang, Zhiwen, Yao, Xuan, Zhao, Xin, Ju, Lie, Wu, Liao, Wu, Lin, Lu, Huimin, and Ge, Zongyuan
- Subjects
*MACHINE learning, *EXUDATES & transudates, *DIABETIC retinopathy, *KNOWLEDGE transfer, *HEMORRHAGE, *DEEP learning, *IMAGE segmentation
- Abstract
Deep-learning-based segmentation methods have shown great success across many medical image applications. However, their customary training paradigms suffer from a well-known constraint: the requirement of pixel-wise annotations, which is labor-intensive, especially when new classes must be learned incrementally. Contemporary incremental learning focuses on mitigating catastrophic forgetting in image classification and object detection. In contrast, this work aims to improve the current model's ability to learn new classes with the help of the previous model in the context of incremental learning for instance segmentation. This greatly benefits the current model when labeled data is limited because of the high labor intensity of manual labeling. In this paper, for the Diabetic Retinopathy (DR) lesion segmentation problem, a novel incremental segmentation paradigm is proposed to distill the knowledge of the previous model into the current model. Notably, we propose several approaches for class-based alignment of the probability maps of the current and previous models, accounting for the difference between the background classes of the two models. The experimental evaluation on DR lesion segmentation shows the effectiveness of the proposed approaches. • Proposes a scheme for incremental segmentation. • Uses knowledge distillation to transfer knowledge from model to model. • Improves segmentation performance using incremental learning. • Proposes a probability-map alignment scheme to integrate two different class maps. • The proposed method generalizes to any new classes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
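Distilling the previous model's knowledge into the current one is typically done by penalizing divergence between their probability maps. A minimal illustrative distillation term (a plain pixel-averaged KL divergence, not the paper's class-aligned version):

```python
import math

def distill_loss(prev_probs, curr_probs, eps=1e-12):
    """Pixel-averaged KL divergence KL(prev || curr) between the
    per-pixel class-probability maps of the previous (teacher)
    and current (student) models."""
    total = 0.0
    for p_px, q_px in zip(prev_probs, curr_probs):
        total += sum(p * math.log((p + eps) / (q + eps))
                     for p, q in zip(p_px, q_px))
    return total / len(prev_probs)

# Two pixels, two classes each. Matching the teacher gives zero loss;
# drifting toward uniform predictions is penalized.
prev = [[0.9, 0.1], [0.2, 0.8]]
same = distill_loss(prev, prev)
moved = distill_loss(prev, [[0.5, 0.5], [0.5, 0.5]])
```

Class-incremental variants, like the one the paper proposes, first remap the background class of the old model before comparing the maps.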
16. A multi-attention and depthwise separable convolution network for medical image segmentation.
- Author
Zhou, Yuxiang, Kang, Xin, Ren, Fuji, Lu, Huimin, Nakagawa, Satoshi, and Shan, Xiao
- Subjects
*COMPUTER-assisted image analysis (Medicine), *DEEP learning, *IMAGE segmentation, *DIAGNOSTIC imaging, *FEATURE extraction
- Abstract
Automatic medical image segmentation methods are highly needed to help experts with lesion segmentation. Emerging deep learning technology has profoundly driven the development of medical image segmentation. While U-Net and attention mechanisms are widely utilized in this field, the application of attention, albeit successful in natural scene image segmentation, tends to inflate the number of model parameters and neglects the potential for feature fusion between different convolutional layers. In response to these challenges, we present the Multi-Attention and Depthwise Separable Convolution U-Net (MDSU-Net), designed to enhance feature extraction. The multi-attention aspect of our framework integrates dual attention and attention gates, adeptly capturing rich contextual details and seamlessly fusing features across diverse convolutional layers. Additionally, our encoder integrates a depthwise separable convolution layer, streamlining the model's complexity without sacrificing its efficacy and ensuring versatility across various segmentation tasks. The results demonstrate that our method outperforms the state of the art across three diverse medical image datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
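The parameter savings from a depthwise separable convolution, the complexity-reducing choice in MDSU-Net's encoder, can be checked with a quick weight count (illustrative channel sizes, not the paper's configuration):

```python
def conv_params(c_in, c_out, k):
    """Weight counts (ignoring biases) for a standard kxk
    convolution versus a depthwise separable one, i.e. a depthwise
    kxk convolution followed by a pointwise 1x1 convolution."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable

# Typical encoder stage: 64 -> 128 channels with 3x3 kernels.
std, sep = conv_params(c_in=64, c_out=128, k=3)
```

Here the separable variant needs roughly an eighth of the weights, which is why it streamlines the model without changing the receptive field.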
Discovery Service for Jio Institute Digital Library