8 results for "Zhang, Jiong"
Search Results
2. Automatic choroid layer segmentation in OCT images via context efficient adaptive network.
- Author
- Yan, Qifeng, Gu, Yuanyuan, Zhao, Jinyu, Wu, Wenjun, Ma, Yuhui, Liu, Jiang, Zhang, Jiong, and Zhao, Yitian
- Subjects
- IMAGE segmentation, OPTICAL coherence tomography, MACULAR degeneration, CHOROID, SCLERA, IMAGE analysis
- Abstract
Optical Coherence Tomography (OCT) is a non-invasive and rapidly developing technique for imaging the human retina and choroid. Many ocular diseases, such as pathological myopia and Age-related Macular Degeneration (AMD), are related to morphological changes of the choroid. Consequently, automatic choroid segmentation is an important step in the examination and diagnosis of choroid-related diseases. However, challenges remain, such as the inseparability of the histograms of the choroid and sclera boundaries and the inconsistency of the choroid layer's texture and intensity. To address these challenges, we propose a Context Efficient Adaptive network (CEA-Net) that includes an Efficient Channel Attention (ECA) module, a novel Adaptive Morphological Refinement (AMR) block, and a new Choroidal Convex Boundary (CCB) regularization loss. The AMR block is designed to avoid segmenting discrete subtle objects in the choroid, and the CCB loss refines the segmented choroidal boundaries. The proposed method is applied to two OCT datasets acquired from two different manufacturers to evaluate its effectiveness. The results show that the AMR block and CCB loss enable the deep network to obtain more accurate choroid segmentations. In addition, for the first time in the field of medical image analysis, we construct a dedicated OCT choroid layer segmentation dataset (OCHID), consisting of 640 OCT images with choroidal boundary annotations. This dataset has been released for public use to assist community researchers working on related topics. [ABSTRACT FROM AUTHOR]
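The Efficient Channel Attention (ECA) module mentioned in this abstract is a published lightweight attention design: global average pooling over the spatial dimensions, a 1-D convolution across channels, and a sigmoid gate. A minimal numpy sketch of the forward pass (with an illustrative fixed averaging kernel standing in for the learned 1-D convolution weights) might look like:

```python
import numpy as np

def eca(x, k=3):
    """Efficient Channel Attention forward pass on a (C, H, W) feature map."""
    c = x.shape[0]
    # Squeeze: global average pooling gives one descriptor per channel.
    y = x.mean(axis=(1, 2))
    # 1-D convolution across channels with 'same' padding; a real ECA
    # layer learns this kernel -- a fixed averaging kernel is used here
    # purely for illustration.
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    w = np.ones(k) / k
    z = np.array([np.dot(yp[i:i + k], w) for i in range(c)])
    # Excite: sigmoid gate broadcast back over the spatial dimensions.
    gate = 1.0 / (1.0 + np.exp(-z))
    return x * gate[:, None, None]

feat = np.random.rand(8, 16, 16)
print(eca(feat).shape)  # (8, 16, 16)
```

Because the gate lies strictly between 0 and 1, the module rescales channels without changing the feature map's shape.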
- Published
- 2023
- Full Text
- View/download PDF
3. Retinal Structure Detection in OCTA Image via Voting-Based Multitask Learning.
- Author
- Hao, Jinkui, Shen, Ting, Zhu, Xueli, Liu, Yonghuai, Behera, Ardhendu, Zhang, Dan, Chen, Bang, Liu, Jiang, Zhang, Jiong, and Zhao, Yitian
- Subjects
- FUNDUS oculi, OPTICAL coherence tomography, RETINAL blood vessels, COLOR photography, HOUGH transforms, SOURCE code
- Abstract
Automated detection of retinal structures, such as retinal vessels (RV), the foveal avascular zone (FAZ), and retinal vascular junctions (RVJ), is of great importance for understanding diseases of the eye and for clinical decision-making. In this paper, we propose a novel Voting-based Adaptive Feature Fusion multi-task network (VAFF-Net) for the joint segmentation, detection, and classification of RV, FAZ, and RVJ in optical coherence tomography angiography (OCTA). A task-specific voting gate module is proposed to adaptively extract and fuse features for specific tasks at two levels: features at different spatial positions from a single encoder, and features from multiple encoders. In particular, since the complexity of the microvasculature in OCTA images makes the simultaneous precise localization and classification of retinal vascular junctions into bifurcations and crossings a challenging task, we specifically design a task head that combines heatmap regression with grid classification. We take advantage of three different en face angiograms from various retinal layers, rather than following existing methods that use only a single en face image. We carry out extensive experiments on three OCTA datasets acquired with different imaging devices, and the results demonstrate that the proposed method performs better overall than both state-of-the-art single-purpose methods and existing multi-task learning solutions. We also demonstrate that our multi-task learning method generalizes to other imaging modalities, such as color fundus photography, and may potentially be used as a general multi-task learning tool. Finally, we construct three datasets for multiple structure detection; part of these datasets, together with the source code and an evaluation benchmark, has been released for public access. [ABSTRACT FROM AUTHOR]
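The abstract does not spell out how the combined heatmap-regression/grid-classification head is decoded. One plausible decoding scheme, sketched below with an entirely hypothetical `decode_junctions` helper (the cell size and two-class grid-logit layout are assumptions, not the paper's design), takes local maxima of the junction heatmap and reads the bifurcation/crossing label from the grid cell containing each peak:

```python
import numpy as np

def decode_junctions(heatmap, grid_logits, thresh=0.5, cell=8):
    """Decode junction positions and classes from a (H, W) heatmap plus
    per-cell logits of shape (H // cell, W // cell, 2) -- a hypothetical
    layout for a heatmap-regression + grid-classification head."""
    ys, xs = np.where(heatmap > thresh)
    results = []
    for y, x in zip(ys, xs):
        # Keep only local maxima within a 3x3 neighbourhood.
        y0, x0 = max(0, y - 1), max(0, x - 1)
        if heatmap[y, x] < heatmap[y0:y + 2, x0:x + 2].max():
            continue
        cls = int(np.argmax(grid_logits[y // cell, x // cell]))
        label = "bifurcation" if cls == 0 else "crossing"
        results.append((int(y), int(x), label))
    return results

hm = np.zeros((16, 16)); hm[5, 9] = 0.9
gl = np.zeros((2, 2, 2)); gl[0, 1] = [0.1, 0.9]
print(decode_junctions(hm, gl))  # [(5, 9, 'crossing')]
```

The heatmap localizes junctions precisely while the coarse grid carries the class decision, which is one way to reconcile localization accuracy with classification robustness.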
- Published
- 2022
- Full Text
- View/download PDF
4. DeepGrading: Deep Learning Grading of Corneal Nerve Tortuosity.
- Author
- Mou, Lei, Qi, Hong, Liu, Yonghuai, Zheng, Yalin, Matthew, Peter, Su, Pan, Liu, Jiang, Zhang, Jiong, and Zhao, Yitian
- Subjects
- DEEP learning, TORTUOSITY, CORNEA, NERVE fibers, NERVES, FEATURE extraction
- Abstract
Accurate estimation and quantification of corneal nerve fiber tortuosity in corneal confocal microscopy (CCM) is of great importance for disease understanding and clinical decision-making. However, grading corneal nerve tortuosity remains a great challenge due to the lack of agreement on the definition and quantification of tortuosity. In this paper, we propose a fully automated deep learning method that performs image-level tortuosity grading of corneal nerves, using both CCM images and segmented corneal nerves to improve grading accuracy while following interpretability principles. The proposed method consists of two stages: 1) A feature extraction backbone pre-trained on ImageNet is fine-tuned with a novel bilinear attention (BA) module to predict regions of interest (ROIs) and a coarse grading of the image. The BA module enhances the network's ability to model long-range dependencies and the global context of nerve fibers by capturing second-order statistics of high-level features. 2) An auxiliary tortuosity grading network (AuxNet) produces an additional grading over the identified ROIs, and the coarse and auxiliary gradings are fused for a more accurate final result. The experimental results show that our method surpasses existing methods in tortuosity grading, achieving an overall accuracy of 85.64% in four-level classification. We also validate it on a clinical dataset, where statistical analysis demonstrates a significant difference in tortuosity levels between the healthy control and diabetes groups. We have released a dataset of 1500 CCM images with manual annotations of four tortuosity levels for public access. The code is available at: https://github.com/iMED-Lab/TortuosityGrading. [ABSTRACT FROM AUTHOR]
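Second-order statistics of high-level features, as exploited by the BA module, are typically captured with bilinear (outer-product) pooling. A minimal numpy sketch of that operation, including the signed square-root and L2 normalisation commonly applied to bilinear features (the exact normalisation in the paper may differ), might be:

```python
import numpy as np

def bilinear_pool(f):
    """Second-order (bilinear) pooling of a (C, H, W) feature map.

    Returns the C x C matrix of spatially averaged pairwise channel
    products -- the second-order statistics a bilinear attention
    module builds on.
    """
    c, h, w = f.shape
    flat = f.reshape(c, h * w)                 # (C, N)
    g = flat @ flat.T / (h * w)                # Gram-style matrix, (C, C)
    # Signed square-root and L2 normalisation, common for bilinear features.
    g = np.sign(g) * np.sqrt(np.abs(g))
    return g / (np.linalg.norm(g) + 1e-12)

f = np.random.rand(4, 8, 8)
print(bilinear_pool(f).shape)  # (4, 4)
```

Every entry of the pooled matrix correlates a pair of channels over all spatial positions, which is how such a module sees global context rather than any single location.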
- Published
- 2022
- Full Text
- View/download PDF
5. Multi-Scale Pathological Fluid Segmentation in OCT With a Novel Curvature Loss in Convolutional Neural Network.
- Author
- Xing, Gang, Chen, Li, Wang, Hualin, Zhang, Jiong, Sun, Dongke, Xu, Feng, Lei, Jianqin, and Xu, Xiayu
- Subjects
- CONVOLUTIONAL neural networks, MACULA lutea, MACULAR degeneration, OPTICAL coherence tomography, MACULAR edema, CURVATURE
- Abstract
The segmentation of pathological fluid lesions in optical coherence tomography (OCT), including intraretinal fluid, subretinal fluid, and pigment epithelial detachment, is of great importance for the diagnosis and treatment of eye diseases such as neovascular age-related macular degeneration and diabetic macular edema. Although significant progress has been achieved with the rapid development of fully convolutional networks (FCNs) in recent years, some important issues remain unsolved. First, pathological fluid lesions in OCT vary greatly in location, size, and shape, which complicates the design of the FCN architecture. Second, fluid lesions should be continuous regions without internal holes, but current architectures lack the capability to preserve this shape prior. In this study, we introduce an FCN architecture for the simultaneous segmentation of three types of pathological fluid lesions in OCT. First, attention gate and spatial pyramid pooling modules are employed to improve the network's ability to extract multi-scale objects. Then, we introduce a novel curvature regularization term in the loss function to incorporate shape prior information. The proposed method was extensively evaluated on public and clinical datasets, with significantly improved performance compared with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
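The abstract does not give the paper's exact curvature term, but a curvature penalty on a soft segmentation map is commonly computed as the divergence of the normalised gradient field of the predicted probabilities; penalising its magnitude discourages jagged boundaries and small holes. A numpy sketch under that assumption:

```python
import numpy as np

def curvature_loss(p, eps=1e-8):
    """Mean absolute curvature of the level sets of a soft mask p (H, W).

    kappa = div( grad p / |grad p| ); penalising |kappa| encourages
    smooth, hole-free lesion boundaries -- one common form of the
    shape prior described above.
    """
    py, px = np.gradient(p)
    norm = np.sqrt(px**2 + py**2) + eps
    nx, ny = px / norm, py / norm
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    return np.abs(kappa).mean()

flat = np.full((32, 32), 0.5)
print(curvature_loss(flat))  # 0.0
```

A flat prediction has zero curvature everywhere, while noisy, fragmented masks are penalised heavily, so the term can be added to a cross-entropy or Dice loss with a small weight.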
- Published
- 2022
- Full Text
- View/download PDF
6. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model.
- Author
- Ma, Yuhui, Hao, Huaying, Xie, Jianyang, Fu, Huazhu, Zhang, Jiong, Yang, Jianlong, Wang, Zhen, Liu, Jiang, Zheng, Yalin, and Zhao, Yitian
- Subjects
- RETINAL blood vessels, OPTICAL coherence tomography, DEEP learning, IMAGE analysis, RETINAL imaging
- Abstract
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary-level resolution. However, despite its significance for understanding many vision-related diseases, automated segmentation of retinal vessels in OCTA has been under-studied due to challenges such as low capillary visibility and high vessel complexity. In addition, there is no publicly available OCTA dataset with manually graded vessels for training and validating segmentation algorithms. To address these issues, for the first time in the field of retinal image analysis, we construct a dedicated Retinal OCTA SEgmentation dataset (ROSE), which consists of 229 OCTA images with vessel annotations at either centerline or pixel level. This dataset, together with the source code, has been released for public access to assist researchers in the community in undertaking research on related topics. Second, we introduce a novel split-based coarse-to-fine vessel segmentation network for OCTA images (OCTA-Net), which can detect thick and thin vessels separately. In OCTA-Net, a split-based coarse segmentation module first produces a preliminary confidence map of vessels, and a split-based refined segmentation module then optimizes the shape/contour of the retinal microvasculature. We perform a thorough evaluation of state-of-the-art vessel segmentation models and our OCTA-Net on the constructed ROSE dataset. The experimental results demonstrate that our OCTA-Net yields better vessel segmentation performance in OCTA than both traditional and other deep learning methods. In addition, we provide a fractal dimension analysis of the segmented microvasculature, and the statistical analysis demonstrates significant differences between the healthy control and Alzheimer's Disease groups. This supports the view that analysis of the retinal microvasculature may offer a new way to study various neurodegenerative diseases. [ABSTRACT FROM AUTHOR]
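Fractal dimension analyses of retinal vasculature are typically done by box counting on the binary vessel mask: count the boxes that contain vessel pixels at a series of scales, then fit the slope of log N(s) against log(1/s). A self-contained numpy sketch (not necessarily the estimator used in the paper):

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a square binary mask by box
    counting: fit the slope of log N(s) versus log(1 / s) over dyadic
    box sizes s."""
    mask = np.asarray(mask, dtype=bool)
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        # Number of s x s boxes containing at least one foreground pixel.
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():
                    c += 1
        sizes.append(s)
        counts.append(c)
        s //= 2
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return slope

filled = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(filled), 2))  # 2.0
```

A filled square recovers dimension 2 and a single line recovers 1; a healthy vascular tree typically falls in between, which is what makes the measure sensitive to capillary dropout.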
- Published
- 2021
- Full Text
- View/download PDF
7. Robust Retinal Vessel Segmentation via Locally Adaptive Derivative Frames in Orientation Scores.
- Author
- Zhang, Jiong, Dashtbozorg, Behdad, Bekkers, Erik, Pluim, Josien P. W., Duits, Remco, and ter Haar Romeny, Bart M.
- Subjects
- RETINAL blood vessels, IMAGE segmentation, LIE groups, WAVELETS (Mathematics), ARTERIAL puncture
- Abstract
This paper presents a robust and fully automatic filter-based approach for retinal vessel segmentation. We propose new filters based on 3D rotating frames in so-called orientation scores, which are functions on the Lie-group domain of positions and orientations $\mathbb{R}^2 \rtimes S^1$. By means of a wavelet-type transform, a 2D image is lifted to a 3D orientation score, where elongated structures are disentangled into their corresponding orientation planes. In the lifted domain $\mathbb{R}^2 \rtimes S^1$, vessels are enhanced by multi-scale second-order Gaussian derivatives perpendicular to the line structures. More precisely, we use a left-invariant rotating derivative (LID) frame and a locally adaptive derivative (LAD) frame. The LAD adapts to the local line structures and is found by eigensystem analysis of the left-invariant Hessian matrix (computed with the LID). After multi-scale filtering via the LID or LAD in the orientation score domain, the results are projected back to the 2D image plane, yielding the enhanced vessels. A binary segmentation is then obtained by thresholding. The proposed methods are validated on six retinal image datasets of different image types, on which competitive segmentation performance is achieved. In particular, the proposed algorithm applying the LAD filter in orientation scores (LAD-OS) outperforms most state-of-the-art methods. The LAD-OS is capable of dealing with typically difficult cases such as crossings, the central arterial reflex, closely parallel vessels, and tiny vessels. The high computational speed of the proposed methods allows processing of large datasets in a screening setting. [ABSTRACT FROM PUBLISHER]
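The lifting step can be illustrated with rotated anisotropic filters: one filtered plane per sampled orientation, stacked into a 3D volume U(x, y, theta). The sketch below uses simple Gaussian line kernels as a crude stand-in for the paper's cake-wavelet transform, so it mimics the structure of an orientation score rather than reproducing the exact method:

```python
import numpy as np

def oriented_kernel(theta, size=9, sigma=2.0):
    """Anisotropic line-like kernel rotated to angle theta -- a crude
    stand-in for a cake wavelet, elongated along the line direction."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # along the line
    v = -x * np.sin(theta) + y * np.cos(theta)   # across the line
    k = np.exp(-(u**2 / (8 * sigma**2) + v**2 / (0.5 * sigma**2)))
    return k / k.sum()

def lift_to_orientation_score(img, n_orientations=8):
    """Lift a 2D image to a stack U(theta, y, x): one correlation with a
    rotated kernel per sampled orientation in [0, pi)."""
    h, w = img.shape
    score = np.empty((n_orientations, h, w))
    for i in range(n_orientations):
        k = oriented_kernel(np.pi * i / n_orientations)
        r = k.shape[0] // 2
        pad = np.pad(img, r)
        out = np.empty((h, w))
        for y in range(h):   # naive 'same'-size correlation
            for x in range(w):
                out[y, x] = (pad[y:y + 2 * r + 1, x:x + 2 * r + 1] * k).sum()
        score[i] = out
    return score

img = np.zeros((16, 16)); img[8, :] = 1.0        # horizontal line
u = lift_to_orientation_score(img)
print(u.shape)  # (8, 16, 16)
```

On a horizontal line the theta = 0 plane responds most strongly, which is exactly the disentangling of elongated structures into orientation planes that the abstract describes.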
- Published
- 2016
- Full Text
- View/download PDF
8. Anomaly segmentation in retinal images with Poisson-blending data augmentation.
- Author
- Wang, Hualin, Zhou, Yuhong, Zhang, Jiong, Lei, Jianqin, Sun, Dongke, Xu, Feng, and Xu, Xiayu
- Subjects
- DATA augmentation, RETINAL imaging, IMAGE segmentation, CONVOLUTIONAL neural networks, DIABETIC retinopathy
- Abstract
• We propose a novel Poisson-blending data augmentation to generate large-scale task-specific training data.
• We propose a CNN architecture for the simultaneous segmentation of four types of DR lesions.
• The method was extensively validated through ablation and comparison studies on two public datasets.
• The results indicate that the proposed method outperforms state-of-the-art methods.
Diabetic retinopathy (DR) is one of the most important complications of diabetes. Accurate segmentation of DR lesions is of great importance for the early diagnosis of DR. However, the simultaneous segmentation of multiple types of DR lesions is technically challenging because of 1) the lack of pixel-level annotations and 2) the large diversity between different types of DR lesions. In this study, we first propose a novel Poisson-blending data augmentation (PBDA) algorithm to generate synthetic images, which can easily be used to expand the existing training data for lesion segmentation. We perform extensive experiments to identify the important attributes of the PBDA algorithm, showing that position constraints are of great importance and that the synthesis density of one lesion type has a joint influence on the segmentation of the other types. Second, we propose a convolutional neural network architecture, named DSR-U-Net++ (i.e., DC-SC residual U-Net++), for the simultaneous segmentation of multi-type DR lesions. Ablation studies showed that the mean area under the precision-recall curve (AUPR) for all four lesion types increased by >5% with PBDA. The proposed DSR-U-Net++ with PBDA outperformed state-of-the-art methods by 1.7%-9.9% on the Indian Diabetic Retinopathy Image Dataset (IDRiD) and by 67.3% on the e-ophtha dataset with respect to mean AUPR. The developed method would be an efficient tool to generate large-scale task-specific training data for other medical anomaly segmentation tasks. [ABSTRACT FROM AUTHOR]
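Poisson (gradient-domain) blending, the core operation behind PBDA, pastes a source patch into a target image by solving a discrete Poisson equation with the target as the boundary condition and the source's Laplacian as the guidance field. A minimal single-channel Jacobi-iteration sketch (an illustration of the technique, not the paper's implementation, which would also enforce the position constraints discussed above):

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=500):
    """Gradient-domain blending of src into dst where mask is True.

    Solves the discrete Poisson equation by Jacobi iteration: inside the
    mask, each pixel is repeatedly replaced by the average of its four
    neighbours plus the source Laplacian; pixels outside the mask stay
    fixed and act as the Dirichlet boundary condition.
    """
    out = dst.astype(float).copy()
    lap = np.zeros_like(out)
    # Discrete 4-neighbour Laplacian of the source (interior pixels only).
    lap[1:-1, 1:-1] = (src[1:-1, 1:-1] * 4
                       - src[:-2, 1:-1] - src[2:, 1:-1]
                       - src[1:-1, :-2] - src[1:-1, 2:])
    inner = mask.copy()
    inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = False
    for _ in range(iters):
        nb = (out[:-2, 1:-1] + out[2:, 1:-1]
              + out[1:-1, :-2] + out[1:-1, 2:])
        upd = (nb + lap[1:-1, 1:-1]) / 4.0
        out[1:-1, 1:-1] = np.where(inner[1:-1, 1:-1], upd,
                                   out[1:-1, 1:-1])
    return out

rng = np.random.default_rng(0)
img = rng.random((12, 12))
m = np.zeros((12, 12), dtype=bool)
m[3:9, 3:9] = True
print(np.allclose(poisson_blend(img, img, m, iters=50), img))  # True
```

Blending an image into itself is a fixed point of the iteration, and a gradient-free (constant) source leaves a flat target untouched; for real augmentation, the lesion patch's gradients are carried into the new background so the seam disappears.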
- Published
- 2022
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library