9 results for "Han, Xian-Hua"
Search Results
2. IDH Mutation Status Prediction by Modality-Self Attention Network
- Author
- Zhang, Xinran, Iwamoto, Yutaro, Cheng, Jingliang, Bai, Jie, Zhao, Guohua, Han, Xian-Hua, Chen, Yen-Wei, Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Chen, Yen-Wei, editor, and Tanaka, Satoshi, editor
- Published
- 2021
- Full Text
- View/download PDF
3. Detection of Liver Tumor Candidates from CT Images Using Deep Convolutional Neural Networks
- Author
- Todoroki, Yoshihiro, Han, Xian-Hua, Iwamoto, Yutaro, Lin, Lanfen, Hu, Hongjie, Chen, Yen-Wei, Howlett, Robert James, Series editor, Jain, Lakhmi C., Series editor, Chen, Yen-Wei, editor, Tanaka, Satoshi, editor, and Howlett, Robert J., editor
- Published
- 2018
- Full Text
- View/download PDF
4. Hyperspectral image super resolution using deep internal and self-supervised learning.
- Author
- Liu, Zhe and Han, Xian-Hua
- Subjects
- DEEP learning, ARTIFICIAL neural networks, ONLINE education, DATABASES, NETWORK performance, COMPUTER vision
- Abstract
By automatically learning the priors embedded in images, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. Trained on large amounts of previously collected external data, these methods are typically realised under full supervision with ground-truth data. The research paradigm of merging a low-resolution hyperspectral (LR-HS) image with an HR multispectral (MS) or RGB image, commonly called HSI SR, therefore requires collecting corresponding training triplets, HR-MS (RGB), LR-HS and HR-HS images, simultaneously, which is often difficult in practice. Models learned from training datasets collected under controlled conditions may degrade significantly on real images captured in diverse environments. To address these limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. They show that a specific CNN model can be trained at test time, termed deep internal learning (DIL), by preparing training triplets online from the observed LR-HS/HR-MS (or RGB) images and a further down-sampled LR-HS version. However, the number of training triplets that can be extracted solely from transformed versions of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which limits reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by treating the observations as unlabelled training samples.
Specifically, degradation modules inside the network are designed to realise the spatial and spectral down-sampling procedures that transform the generated HR-HS estimate into high-resolution RGB and LR-HS approximations, and the reconstruction errors against the observations are used to measure the network's modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and can flexibly adapt to the settings of each observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, CAVE and Harvard, demonstrating a large performance gain over state-of-the-art methods. [ABSTRACT FROM AUTHOR]
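As a rough illustration of the degradation modules this abstract describes, the following NumPy sketch (not the authors' implementation; the shapes, pooling kernel, and spectral response matrix are all assumptions) shows how an HR-HS estimate can be mapped to LR-HS and RGB approximations and scored against the observations:

```python
import numpy as np

def spatial_downsample(hs, factor):
    """Average-pool each band by `factor` (stands in for blur + stride)."""
    h, w, bands = hs.shape
    out = hs[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, bands)
    return out.mean(axis=(1, 3))

def spectral_project(hs, response):
    """Project `bands` spectral channels down to RGB with a response matrix."""
    return hs @ response  # (H, W, bands) x (bands, 3) -> (H, W, 3)

hr_hs = np.random.rand(32, 32, 31)                    # toy HR-HS cube, 31 bands
srf = np.random.rand(31, 3); srf /= srf.sum(axis=0)   # assumed spectral response
lr_hs = spatial_downsample(hr_hs, 4)                  # observed LR-HS
hr_rgb = spectral_project(hr_hs, srf)                 # observed HR-RGB

# Reconstruction loss for some HR-HS estimate `x`: degrade it both ways and
# compare against the two observations.
x = np.random.rand(32, 32, 31)
loss = (np.abs(spatial_downsample(x, 4) - lr_hs).mean()
        + np.abs(spectral_project(x, srf) - hr_rgb).mean())
```

The same two operators also generate the online training triplets for DIL: further down-sampling the observed LR-HS yields pseudo-pairs on which a model can be fitted at test time.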
- Published
- 2024
- Full Text
- View/download PDF
5. A Boundary-Enhanced Liver Segmentation Network for Multi-Phase CT Images with Unsupervised Domain Adaptation.
- Author
- Ananda, Swathi, Jain, Rahul Kumar, Li, Yinhao, Iwamoto, Yutaro, Han, Xian-Hua, Kanasaki, Shuzo, Hu, Hongjie, and Chen, Yen-Wei
- Subjects
- COMPUTED tomography, IMAGE segmentation, LIVER tumors, LABOR time, DIAGNOSIS, PHYSIOLOGICAL adaptation
- Abstract
Multi-phase computed tomography (CT) images have gained significant popularity in the diagnosis of hepatic disease. There are several challenges in the liver segmentation of multi-phase CT images. (1) Annotation: due to the distinct contrast enhancements observed in different phases (i.e., each phase is considered a different domain), annotating all phase images in multi-phase CT images for liver or tumor segmentation is a task that consumes substantial time and labor resources. (2) Poor contrast: some phase images may have poor contrast, making it difficult to distinguish the liver boundary. In this paper, we propose a boundary-enhanced liver segmentation network for multi-phase CT images with unsupervised domain adaptation. The first contribution is that we propose DD-UDA, a dual discriminator-based unsupervised domain adaptation, for liver segmentation on multi-phase images without multi-phase annotations, effectively tackling the annotation problem. To improve accuracy by reducing distribution differences between the source and target domains, we perform domain adaptation at two levels by employing two discriminators, one at the feature level and the other at the output level. The second contribution is that we introduce an additional boundary-enhanced decoder to the encoder–decoder backbone segmentation network to effectively recognize the boundary region, thereby addressing the problem of poor contrast. In our study, we employ the public LiTS dataset as the source domain and our private MPCT-FLLs dataset as the target domain. The experimental findings validate the efficacy of our proposed methods, producing substantially improved results when tested on each phase of the multi-phase CT image even without the multi-phase annotations. 
As evaluated on the MPCT-FLLs dataset, the existing baseline (UDA) method achieved IoU scores of 0.785, 0.796, and 0.772 for the PV, ART, and NC phases, respectively, while our proposed approach exhibited superior performance, surpassing both the baseline and other state-of-the-art methods. Notably, our method achieved remarkable IoU scores of 0.823, 0.811, and 0.800 for the PV, ART, and NC phases, respectively, emphasizing its effectiveness in achieving accurate image segmentation. [ABSTRACT FROM AUTHOR]
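The dual-discriminator objective described in this abstract can be viewed as a weighted sum of a supervised segmentation loss on the source domain and two adversarial terms on the target domain, one at the feature level and one at the output level. A minimal NumPy sketch (not the authors' code; the weights and the random stand-ins for network outputs are illustrative assumptions):

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on probabilities."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

# Source branch: supervised segmentation loss (source annotations exist).
src_pred = np.random.rand(4, 64, 64)                  # stand-in for softmax output
src_mask = (np.random.rand(4, 64, 64) > 0.5).astype(float)
seg_loss = bce(src_pred, src_mask)

# Target branch: the two discriminators are asked to score target features /
# outputs as "source-like" (label 1), pushing the segmenter to align domains.
feat_disc_on_target = np.random.rand(4)               # feature-level discriminator
out_disc_on_target = np.random.rand(4)                # output-level discriminator
adv_feat = bce(feat_disc_on_target, np.ones(4))
adv_out = bce(out_disc_on_target, np.ones(4))

lambda_feat, lambda_out = 0.001, 0.01                 # illustrative weights
total = seg_loss + lambda_feat * adv_feat + lambda_out * adv_out
```

In the paper's setting the source would be LiTS (annotated) and the target each unannotated phase of the MPCT-FLLs data; the boundary-enhanced decoder is a separate architectural addition not shown here.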
- Published
- 2023
- Full Text
- View/download PDF
6. Zero-Shot Blind Learning for Single-Image Super-Resolution.
- Author
- Yamawaki, Kazuhiro and Han, Xian-Hua
- Subjects
- DEEP learning, CONVOLUTIONAL neural networks, ERROR functions, APPROXIMATION error, BLOCK designs, ARTIFICIAL neural networks
- Abstract
Deep convolutional neural networks (DCNNs) have delivered significant performance gains for single-image super-resolution (SISR) in the past few years. Most existing methods are implemented in a fully supervised way using large-scale training samples and learn SR models restricted to specific data, so adapting these models to real low-resolution (LR) images captured under uncontrolled imaging conditions usually leads to poor SR results. This study proposes a zero-shot blind SR framework that leverages the power of deep learning without prior training on predefined image samples. Two quantities are unknown: the underlying target high-resolution (HR) image and the degradation operation of the imaging procedure hidden in the observed LR image. With this in mind, the authors employ two deep networks to model the priors of the target HR image and its corresponding degradation kernel, respectively, and design a degradation block to realise the observation procedure of the LR image. By formulating the loss function as the approximation error of the observed LR image, they establish a completely blind, end-to-end zero-shot learning framework that simultaneously predicts the target HR image and the degradation kernel without any external data. In particular, a multi-scale encoder–decoder subnet serves as the image-prior network, a simple fully connected subnet serves as the kernel-prior network, and a specific depthwise convolutional block implements the degradation procedure. Extensive experiments on several benchmark datasets demonstrate the superiority and strong generalization of the method over both state-of-the-art supervised and unsupervised SR methods. [ABSTRACT FROM AUTHOR]
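The loss this abstract formulates, the approximation error of the observed LR image under an estimated kernel and scale, can be sketched as follows. This is a toy NumPy version: in the real method `hr_est` and `k_est` come from the image-prior and kernel-prior subnets, whereas here they are random placeholders.

```python
import numpy as np

def blur2d(img, kernel):
    """'Same' 2-D convolution, standing in for the depthwise degradation block."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def degrade(hr, kernel, scale):
    """Blur with the estimated kernel, then subsample by `scale`."""
    return blur2d(hr, kernel)[::scale, ::scale]

lr = np.random.rand(16, 16)                    # the only observation
hr_est = np.random.rand(32, 32)                # placeholder for the image subnet output
k_est = np.ones((5, 5)); k_est /= k_est.sum()  # placeholder for the kernel subnet output

# The zero-shot objective: how well does the degraded HR estimate explain
# the observed LR image?
loss = np.mean((degrade(hr_est, k_est, 2) - lr) ** 2)
```

Minimising this error jointly over the image and kernel networks is what makes the framework blind: no external paired data or known kernel is required.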
- Published
- 2023
- Full Text
- View/download PDF
7. VesselNet: A deep convolutional neural network with multi pathways for robust hepatic vessel segmentation.
- Author
- Kitrungrotsakul, Titinunt, Han, Xian-Hua, Iwamoto, Yutaro, Lin, Lanfen, Foruzan, Amir Hossein, Xiong, Wei, and Chen, Yen-Wei
- Subjects
- ARTIFICIAL neural networks, ANATOMICAL planes, DEEP learning, IMAGING systems, DIAGNOSTIC imaging
- Abstract
Extraction or segmentation of organ vessels is an important task for surgical planning and computer-aided diagnosis. It is challenging due to the extremely small size of the vessel structure, low SNR, and varying contrast in medical image data. We propose an automatic and robust vessel segmentation approach that uses a multi-pathway deep learning network. The proposed method trains a deep network for binary classification on training patches extracted on three planes (sagittal, coronal, and transverse) centered on the focused voxel, and is thus expected to provide more reliable recognition by exploring the 3D structure. Furthermore, because device values vary widely across medical data, we transform the raw medical image into a probability map that serves as input to the network. We then extract vessels with the proposed network, which is robust and sufficiently general to handle images with different contrast obtained by various imaging systems. The network provides a vessel probability map for voxels in the target medical data, which is post-processed to generate the final segmentation result. To validate the effectiveness and efficiency of the proposed method, we conducted experiments with 20 data volumes (from public datasets) with different contrast levels and different device value ranges. The results demonstrate impressive performance in comparison with state-of-the-art methods. We propose the first 3D liver vessel segmentation network using deep learning; the multi-pathway design improves segmentation results, and the probability-map input is robust against intensity changes in clinical data. [ABSTRACT FROM AUTHOR]
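The tri-planar patch extraction described in this abstract, three orthogonal 2-D patches centred on a focused voxel, can be sketched as follows (a toy NumPy version with an assumed patch size; not the authors' code):

```python
import numpy as np

def triplanar_patches(volume, center, size=16):
    """Extract sagittal, coronal, and transverse patches centred on `center`.

    `volume` is indexed (z, y, x); each returned patch is `size` x `size`.
    """
    z, y, x = center
    r = size // 2
    transverse = volume[z, y - r:y + r, x - r:x + r]   # axial slice plane
    coronal = volume[z - r:z + r, y, x - r:x + r]
    sagittal = volume[z - r:z + r, y - r:y + r, x]
    return transverse, coronal, sagittal

vol = np.random.rand(64, 64, 64)           # toy volume (e.g. a probability map)
t, c, s = triplanar_patches(vol, (32, 32, 32))
```

In the paper each of the three patches feeds a separate network pathway, and the per-voxel vessel probability is predicted from their combined features.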
- Published
- 2019
- Full Text
- View/download PDF
8. Enhanced deep bottleneck transformer model for skin lesion classification.
- Author
- Nakai, Katsuhiro, Chen, Yen-Wei, and Han, Xian-Hua
- Subjects
- DEEP learning, AUTOMATIC classification, MELANOMA, SKIN imaging, SKIN cancer, CLASSIFICATION
- Abstract
Skin cancer is the most common cancer worldwide, and its malignant melanoma form may lead to a life expectancy of less than five years. With early-stage detection and recognition, even the deadliest melanoma can be treated, greatly increasing the patient's survival rate. Dermoscopy imaging can now capture high-resolution magnified images of the affected skin region for automatic lesion classification, and deep learning networks have shown great potential for accurately recognizing different types of skin lesions. This study exploits a novel deep model to enhance skin lesion recognition performance. Despite remarkable progress, existing deep-network-based methods naively transfer architectures designed for generic image classification to skin lesion classification, leaving considerable room for improvement. This study presents an enhanced deep bottleneck transformer model that incorporates self-attention to model the global correlation of features extracted by conventional deep models, boosting skin lesion classification performance. Specifically, the authors build an enhanced transformer module with a dual position encoding scheme that integrates the encoded position vector into both the key and query vectors for balanced learning. By replacing the bottleneck spatial convolutions of the late-stage blocks in baseline deep networks with this module, they construct a novel deep skin lesion classification model. Extensive experiments on two benchmark skin lesion datasets, ISIC2017 and HAM10000, verify the recognition performance of different deep models.
With the proposed method, accuracy, sensitivity and specificity on the ISIC2017 dataset reach 92.1%, 90.1% and 91.9%, respectively, showing a good balance between sensitivity and specificity, while accuracy and precision on the HAM10000 dataset are 95.84% and 96.1%. Results on both datasets demonstrate that the proposed model achieves superior performance over the baseline models as well as state-of-the-art methods. These results should inspire further research on applying transformer-based blocks in real scenarios without large-scale datasets. • An enhanced bottleneck transformer model exploiting both local and global interaction is proposed. • A dual position encoding block is implemented to construct an enhanced transformer module. • The enhanced transformer module is incorporated into benchmark CNN models. [ABSTRACT FROM AUTHOR]
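The dual position encoding idea, adding the position code to both the key and the query before the attention dot product, can be sketched for a single head as follows (a NumPy illustration with identity projections and random stand-ins; not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_pos_attention(x, pos):
    """Self-attention where the position code is added to BOTH the query
    and the key before the dot product (single head, identity projections)."""
    q = x + pos
    k = x + pos
    v = x
    scores = q @ k.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ v

tokens = np.random.rand(8, 4)   # 8 spatial tokens of dimension 4
pos = np.random.rand(8, 4)      # stand-in for a learned position encoding
out = dual_pos_attention(tokens, pos)
```

A standard relative-attention variant applies the position code only on the key side; the "dual" scheme makes query and key symmetric, which the abstract credits with more balanced learning.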
- Published
- 2022
- Full Text
- View/download PDF
9. Blind Image Super Resolution Using Deep Unsupervised Learning.
- Author
- Yamawaki, Kazuhiro, Sun, Yongqing, and Han, Xian-Hua
- Subjects
- HIGH resolution imaging, DEEP learning, PRIOR learning
- Abstract
The goal of single image super resolution (SISR) is to recover a high-resolution (HR) image from a low-resolution (LR) image. Deep learning based methods have recently made remarkable gains in both effectiveness and efficiency for SISR. Most existing methods must be trained on large-scale synthetic paired data in a fully supervised manner: given available HR natural images, the corresponding LR images are usually synthesized with a simple fixed degradation operation, such as bicubic down-sampling. Deep models learned from such training data therefore generalize poorly to real scenarios with unknown and complicated degradation operations. This study exploits a novel blind image super-resolution framework using a deep unsupervised learning network. The proposed method simultaneously predicts the underlying HR image and its specific degradation operation from the observed LR image alone, without any prior knowledge. Experimental results on three benchmark datasets validate that the proposed method achieves promising performance under unknown degradation models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library