23 results for "Wufan Chen"
Search Results
2. Hierarchical-order multimodal interaction fusion network for grading gliomas
- Author
-
Yu Zhang, Wufan Chen, Kangfu Han, and Man He
- Subjects
Modality (human–computer interaction), Radiological and Ultrasound Technology, Receiver operating characteristic, Brain Neoplasms, Computer science, Deep learning, Glioma, Image segmentation, Machine learning, Magnetic Resonance Imaging, Cross-validation, Multimodal interaction, Multimodal learning, ROC Curve, Feature (computer vision), Humans, Radiology, Nuclear Medicine and imaging, Neural Networks, Computer, Artificial intelligence, Neoplasm Grading - Abstract
Significance. Gliomas are the most common type of primary brain tumor and occur at different grades. Accurate grading of a glioma is therefore critical for clinical treatment planning and prognostic assessment with multiple-modality magnetic resonance imaging (MRI). Objective and Approach. In this study, we developed a noninvasive deep-learning method based on multimodal MRI for grading gliomas, focusing on effective multimodal fusion that leverages collaborative and diverse high-order statistical information. Specifically, a novel high-order multimodal interaction module was designed to promote interactive learning of multimodal knowledge for more efficient fusion. For more powerful feature expression and feature-correlation learning, a high-order attention mechanism was embedded in the interaction module to model complex, high-order statistical information and enhance the classification capability of the network. Moreover, we applied increasing orders at different levels to hierarchically recalibrate each modality stream through diverse-order attention statistics, thus encouraging all-sided attention knowledge with fewer parameters. Main results. To evaluate the effectiveness of the proposed scheme, extensive experiments were conducted on The Cancer Imaging Archive (TCIA) and the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) datasets with five-fold cross-validation. The proposed method achieves high prediction performance, with area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity values of 95.2%, 94.28%, 95.24%, and 92.00% on the BraTS2017 dataset and 93.50%, 92.86%, 97.14%, and 90.48% on the TCIA dataset, respectively.
- Published
- 2021
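The high-order attention idea in the abstract above can be illustrated with a minimal sketch: recalibrating a modality's feature channels using second-order (covariance) statistics. This is a toy stand-in under stated assumptions, not the paper's network; the shapes, the sigmoid gating, and the function name are all illustrative.

```python
import numpy as np

def second_order_attention(feat):
    """Gate channels of a (C, N) feature map using second-order
    (covariance) statistics -- a toy stand-in for high-order attention;
    shapes and gating are illustrative."""
    C, N = feat.shape
    centered = feat - feat.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / N                     # (C, C) channel covariance
    gates = 1.0 / (1.0 + np.exp(-cov.mean(axis=1)))    # sigmoid channel gates
    return feat * gates[:, None]

rng = np.random.default_rng(0)
fused = second_order_attention(rng.standard_normal((8, 32)))
print(fused.shape)  # (8, 32)
```

Higher orders would replace the covariance with higher-order moment statistics; the recalibration pattern stays the same.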
3. Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction
- Author
-
Huafeng Wang, Yu Zhang, Wei Yang, Qianjin Feng, Minghui Zhang, Yueliang Liu, Wufan Chen, and Shiting Fang
- Subjects
Lung Neoplasms, Mean squared error, Computer science, Movement, Low-rank approximation, Image processing, Linear interpolation, Matrix (mathematics), Image Processing, Computer-Assisted, Humans, Radiology, Nuclear Medicine and imaging, Four-Dimensional Computed Tomography, Radionuclide Imaging, Radiation treatment planning, Image resolution, Simulation, Shrinkage, Radiological and Ultrasound Technology, Superresolution, Singular value, Disease Progression, Tomography, Algorithm, Algorithms, Interpolation - Abstract
Lung 4D computed tomography (4D-CT), a time-resolved CT data acquisition, plays an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize radiation-related health risk, so resolution enhancement along the superior-inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on patch-based low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed using a patch-searching strategy. Thereafter, singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The recovered high-resolution patches are finally assembled to form the entire image. This method is extensively evaluated on two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%-33.4% and the edge width by 11.4%-24.3% relative to linear interpolation, back projection (BP), and Zhang et al.'s algorithm. In all experiments, the proposed method outperforms various interpolation methods as well as BP and Zhang et al.'s method, indicating the effectiveness and competitiveness of the proposed algorithm.
- Published
- 2017
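The singular value shrinkage step at the core of the abstract above can be sketched in a few lines: soft-threshold the singular values of a stacked patch matrix to pull it toward low rank. The 16×16 example and threshold are illustrative, not the paper's patch construction.

```python
import numpy as np

def singular_value_shrinkage(patch_matrix, tau):
    """Soft-threshold the singular values of a stacked patch matrix --
    the low-rank recovery step (threshold tau is illustrative)."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# a rank-1 matrix plus small noise is pushed back toward rank 1
rng = np.random.default_rng(1)
u, v = rng.standard_normal(16), rng.standard_normal(16)
M = np.outer(u, v) + 0.01 * rng.standard_normal((16, 16))
M_hat = singular_value_shrinkage(M, tau=0.5)
print(np.linalg.matrix_rank(M_hat))
```

In the paper this operator is applied per patch group, with the shrunk patches then assembled back into the full 4D-CT volume.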
4. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization
- Author
-
Dong Zeng, Zhaoying Bian, Hua Zhang, Wufan Chen, Qianjin Feng, and Jianhua Ma
- Subjects
Image quality, Physics::Medical Physics, Streak, Regularization (mathematics), Motion, Organ Motion, Motion estimation, Image Processing, Computer-Assisted, Humans, Coherence (signal processing), Radiology, Nuclear Medicine and imaging, Computer vision, Four-Dimensional Computed Tomography, Mathematics, Motion compensation, Radiological and Ultrasound Technology, Phantoms, Imaging, Cone-Beam Computed Tomography, Total variation denoising, Artificial intelligence, Artifacts, Algorithms - Abstract
Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The main challenge in 4D-CBCT reconstruction, however, is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts when conventional analytical algorithms are used. To address this problem, we propose a motion-compensated total variation regularization approach that fully exploits the temporal coherence of spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence on which the regularization function is imposed. The regularization used in this work combines 3D spatial total variation minimization with 1D temporal total variation minimization. We then construct a cost function for the reconstruction pass and minimize it using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that introducing additional temporal correlation along the phase direction improves 4D-CBCT image quality.
- Published
- 2017
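The combined 3D-spatial plus 1D-temporal total variation regularizer described above can be sketched directly: sum absolute finite differences along the three spatial axes and the phase axis of a (T, Z, Y, X) volume. The anisotropic L1 form and the weighting are illustrative simplifications.

```python
import numpy as np

def spatiotemporal_tv(vol4d, lam_t=1.0):
    """3D spatial TV plus 1D temporal TV for a (T, Z, Y, X) 4D-CBCT
    volume -- a sketch of the combined regularizer (anisotropic L1
    version for simplicity; lam_t weights the temporal term)."""
    dz = np.abs(np.diff(vol4d, axis=1)).sum()
    dy = np.abs(np.diff(vol4d, axis=2)).sum()
    dx = np.abs(np.diff(vol4d, axis=3)).sum()
    dt = np.abs(np.diff(vol4d, axis=0)).sum()
    return dz + dy + dx + lam_t * dt

static = np.ones((4, 3, 3, 3))     # a perfectly motion-compensated sequence
print(spatiotemporal_tv(static))   # 0.0 -- no spatial or temporal variation
```

After ME/MC aligns the phases, the temporal differences of the pseudo-static sequence become small, which is exactly why this penalty rewards good motion compensation.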
5. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data
- Author
-
Yu Zhang, Tingting Wang, Wei Yang, Qianjin Feng, Lei Cao, and Wufan Chen
- Subjects
Radiological and Ultrasound Technology, Computer science, Computed tomography, Image Enhancement, Superresolution, Radiation therapy, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Artificial intelligence, Four-Dimensional Computed Tomography, Artifacts, Lung cancer, Projection (set theory), Lung, Image resolution, Algorithms - Abstract
Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial factor in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) using an adaptive patch-based projection onto convex sets (POCS) approach which, in contrast with the global POCS SR algorithm, recovers fine details with fewer artifacts. The main contribution of this patch-based approach is that interfering local structures from other phases can be rejected by employing an adaptive similar-patch selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method.
- Published
- 2015
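The core POCS operation referenced above is a projection onto a data-consistency set: simulate the low-resolution observation from the current high-resolution estimate and back-project the residual. The 1D signal and box-average degradation model below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def pocs_data_consistency(hr, lr_obs, factor=2):
    """Project an HR estimate onto the set of signals consistent with a
    low-resolution observation, assuming a simple box-average
    degradation model (the model and 1D setting are illustrative)."""
    sim = hr.reshape(-1, factor).mean(axis=1)   # simulate LR from current HR
    resid = lr_obs - sim
    return hr + np.repeat(resid, factor)        # exact projection for box averaging

lr = np.array([1.0, 3.0])
hr0 = np.zeros(4)
hr1 = pocs_data_consistency(hr0, lr)
print(hr1)  # [1. 1. 3. 3.]
```

A full POCS SR loop alternates this projection with other constraint sets; the adaptive patch selection in the paper decides which phases may contribute observations to each patch.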
6. Anatomy-guided brain PET imaging incorporating a joint prior model
- Author
-
Qianjin Feng, Wufan Chen, Lijun Lu, Arman Rahmim, and Jianhua Ma
- Subjects
Radiological and Ultrasound Technology, Computer science, Brain, Reconstruction algorithm, PET imaging, Iterative reconstruction, Magnetic Resonance Imaging, Multimodal Imaging, Article, Fluorodeoxyglucose F18, Positron-Emission Tomography, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Artificial intelligence, Noise (video), Tomography, Radiopharmaceuticals, 18F-FDG PET, Algorithms - Abstract
We propose a maximum a posteriori (MAP) framework for incorporating information from co-registered anatomical images into PET image reconstruction through a novel anato-functional joint prior. The shape of the hyperbolic potential function is determined by voxel intensity differences within the anatomical image, while the penalization is computed from voxel intensity differences in the reconstructed PET images. Using realistic simulated (18)F-FDG PET scan data, we optimized the performance of the proposed MAP reconstruction with the joint prior (JP-MAP) and compared it with conventional 3D MLEM and 3D MAP reconstructions. The proposed JP-MAP reconstruction algorithm resulted in quantitatively enhanced reconstructed images, as demonstrated in an extensive FDG PET simulation study. The proposed method was also tested on a 20 min florbetapir patient study performed on the high-resolution research tomograph, where it outperformed conventional methods in visual as well as quantitative accuracy assessment (in terms of regional noise versus activity value performance). The JP-MAP method was also compared with another MR-guided MAP reconstruction method utilizing the Bowsher prior, and was seen to yield some quantitative enhancements, especially in the case of MR-PET mis-registrations, and a definitive improvement in computational performance.
- Published
- 2015
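The anato-functional joint prior described above can be sketched as a hyperbolic potential on PET neighbor differences whose strength is modulated by the corresponding anatomical differences. The Gaussian weighting and 1D neighborhood below are illustrative assumptions, not the paper's exact functional form.

```python
import numpy as np

def joint_prior_penalty(pet, anat, delta=1.0, sigma=1.0):
    """Anato-functional penalty sketch: hyperbolic potential of PET
    neighbor differences, down-weighted wherever the anatomical image
    shows an edge (weighting scheme is illustrative)."""
    d_pet = np.diff(pet)
    d_anat = np.diff(anat)
    w = np.exp(-(d_anat / sigma) ** 2)   # anatomical edge -> weak smoothing
    return np.sum(w * (np.sqrt(1.0 + (d_pet / delta) ** 2) - 1.0))

pet = np.array([1.0, 1.0, 5.0])
anat_flat = np.array([0.0, 0.0, 0.0])    # no anatomical edge at the PET jump
anat_edge = np.array([0.0, 0.0, 10.0])   # anatomical edge aligned with the jump
print(joint_prior_penalty(pet, anat_flat) > joint_prior_penalty(pet, anat_edge))  # True
```

An aligned anatomical edge lets the PET jump through almost unpenalized, which is the mechanism that transfers anatomical boundaries into the PET reconstruction.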
7. The effect of stress rate on ratchetting behavior of rolled AZ31B magnesium alloy at 393 K and room temperature
- Author
-
Li Meng, Alexandre Tanguy, Wufan Chen, Miaolin Feng, and Simon Hallais
- Subjects
Biomaterials ,Materials science ,Polymers and Plastics ,Metals and Alloys ,Stress rate ,Plasticity ,Magnesium alloy ,Composite material ,Surfaces, Coatings and Films ,Electronic, Optical and Magnetic Materials - Published
- 2019
8. 3.5D dynamic PET image reconstruction incorporating kinetics-based clusters
- Author
-
Jing Tang, Nicolas A. Karakatsanis, Lijun Lu, Arman Rahmim, and Wufan Chen
- Subjects
Radiological and Ultrasound Technology, Computer science, Pattern recognition, Iterative reconstruction, Models, Biological, Article, Kinetics, Imaging, Three-Dimensional, Positron emission tomography, Voxel, Positron-Emission Tomography, Maximum a posteriori estimation, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Artificial intelligence, Tomography, Cluster analysis, Parametric statistics - Abstract
Standard 3D dynamic positron emission tomography (PET) imaging consists of independent image reconstructions of individual frames, followed by application of an appropriate kinetic model to the time-activity curves at the voxel or region-of-interest (ROI) level. The emerging field of 4D PET reconstruction, by contrast, seeks to move beyond this scheme and incorporate information from multiple frames within the image reconstruction task. Here we propose a novel reconstruction framework aiming to enhance the quantitative accuracy of parametric images via priors based on voxel kinetics, generated by clustering preliminary reconstructed dynamic images to define neighborhoods of voxels with similar kinetics. This is then followed by straightforward maximum a posteriori (MAP) 3D PET reconstruction applied to individual frames; as such, the method is labeled '3.5D' image reconstruction. Cluster-based priors further enhance quantitative performance in dynamic PET imaging because (a) there are typically more voxels in clusters than in conventional local neighborhoods, and (b) neighboring voxels with distinct kinetics are less likely to be clustered together. Using realistic simulated (11)C-raclopride dynamic PET data, the quantitative performance of the proposed method was investigated. Parametric distribution-volume (DV) and DV ratio (DVR) images were estimated from dynamic image reconstructions using (a) maximum-likelihood expectation maximization (MLEM) and MAP reconstructions with (b) the quadratic prior (QP-MAP), (c) the Green prior (GP-MAP) and (d, e) two proposed cluster-based priors (CP-U-MAP and CP-W-MAP), followed by graphical modeling, and were qualitatively and quantitatively compared for 11 ROIs.
Overall, the proposed dynamic PET reconstruction methodology resulted in substantial visual as well as quantitative accuracy improvements (in terms of noise versus bias performance) for parametric DV and DVR images. The method was also tested on a 90 min (11)C-raclopride patient study performed on the high-resolution research tomograph, where it outperformed the conventional method in visual as well as quantitative accuracy (in terms of noise versus regional DVR value performance).
- Published
- 2012
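The kinetics-based clustering step above can be sketched with a tiny k-means over voxel time-activity curves (TACs). This is a minimal stand-in under stated assumptions: the seeding, k=2, and the synthetic TACs are all illustrative.

```python
import numpy as np

def cluster_tacs(tacs, k=2, iters=10):
    """Tiny k-means over voxel time-activity curves (rows of `tacs`),
    a stand-in for the kinetics-based clustering that defines the prior
    neighborhoods (seeded with the first and last rows for determinism;
    k=2 only in this sketch)."""
    centers = tacs[[0, len(tacs) - 1]].astype(float)
    labels = np.zeros(len(tacs), dtype=int)
    for _ in range(iters):
        d = ((tacs[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = tacs[labels == j].mean(axis=0)
    return labels

slow = np.tile([1.0, 2.0, 3.0], (5, 1))   # slowly rising TACs
fast = np.tile([5.0, 3.0, 1.0], (5, 1))   # fast-washout TACs
labels = cluster_tacs(np.vstack([slow, fast]))
print(labels)  # [0 0 0 0 0 1 1 1 1 1]
```

Each cluster then serves as the MAP prior's neighborhood, replacing the usual small spatial neighborhood with a (typically larger) kinetically homogeneous one.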
9. Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means
- Author
-
Guanyu Yang, Yinsheng Li, Zhou Yang, Yongcheng Zhu, Limin Luo, Yang Chen, Christine Toumoulin, Yining Hu, and Wufan Chen
- Subjects
Scale (ratio), Computer science, Radiography, Streak, Normal tissue, Computed tomography, Image processing, Radiation Dosage, Stationary wavelet transform (SWT), Imaging phantom, Image Processing, Computer-Assisted, Humans, Radiology, Nuclear Medicine and imaging, Computer vision, Low-dose CT (LDCT), Artifact (error), Radiological and Ultrasound Technology, Phantoms, Imaging, Noise (signal processing), Artifact Suppressed Large-scale Nonlocal Means (AS-LNLM), Radiation exposure, Nonlinear Dynamics, Radiography, Thoracic, Life Sciences/Bioengineering, Tomography, Artificial intelligence, Artifacts, Tomography, X-Ray Computed, Streak artifacts - Abstract
The x-ray exposure of patients has become a major concern in computed tomography (CT), and minimizing radiation exposure has been one of the major efforts in the CT field. Because of the abundance of high-attenuation tissue in the human chest, thoracic low-dose CT (LDCT) images acquired under low-dose scan protocols tend to be severely degraded by excessive mottled noise and non-stationary streak artifacts. Their removal is a challenging task because streak artifacts with directional prominence are often hard to discriminate from the attenuation information of normal tissues. This paper describes a two-step processing scheme called 'artifact suppressed large-scale nonlocal means' for suppressing both noise and artifacts in thoracic LDCT images. Specific scale and direction properties were exploited to discriminate the noise and artifacts from image structures. A parallel implementation speeds up the whole processing by more than 100 times. Phantom and patient CT images were both acquired for evaluation purposes. Comparative qualitative and quantitative analyses were performed, allowing conclusions on the efficacy of our method in improving thoracic LDCT data.
- Published
- 2012
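The nonlocal means idea underlying the scheme above can be sketched in 1D: replace each sample by a weighted average of samples whose surrounding patches look similar. This toy filter ignores the paper's artifact-specific scale/direction handling; the patch size, search window, and smoothing parameter are illustrative.

```python
import numpy as np

def nlm_1d(signal, patch=1, search=3, h=0.5):
    """Minimal 1D nonlocal means: each sample becomes a weighted average
    of samples with similar surrounding patches (a toy version of the
    large-scale NLM idea; parameters illustrative)."""
    n = len(signal)
    out = np.empty(n)
    pad = np.pad(signal, patch, mode='edge')
    for i in range(n):
        pi = pad[i:i + 2 * patch + 1]
        ws, vals = [], []
        for j in range(max(0, i - search), min(n, i + search + 1)):
            pj = pad[j:j + 2 * patch + 1]
            ws.append(np.exp(-np.sum((pi - pj) ** 2) / h ** 2))
            vals.append(signal[j])
        out[i] = np.dot(ws, vals) / np.sum(ws)
    return out

noisy = np.array([1.0, 1.02, 0.98, 1.01, 0.99, 1.0])
den = nlm_1d(noisy)
print(np.std(den) < np.std(noisy))  # True: variance is reduced
```

The "large-scale" variant in the paper enlarges the search window well beyond a local neighborhood, which is why parallelization matters for practical runtimes.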
10. Promote quantitative ischemia imaging via myocardial perfusion CT iterative reconstruction with tensor total generalized variation regularization
- Author
-
Zhang Zhang, Dong Zeng, Jing Huang, Zhaoying Bian, Wufan Chen, Sui Li, Ji He, Hao Zhang, Bo Chen, Jiahui Lin, Chengwei Gu, Jianhua Ma, Shanzhou Niu, and Dazhe Zhao
- Subjects
Swine, Computer science, Myocardial Ischemia, Ischemia, Streak, Image processing, Computed tomography, Iterative reconstruction, Imaging phantom, Image Processing, Computer-Assisted, Animals, Humans, Radiology, Nuclear Medicine and imaging, Radiological and Ultrasound Technology, Phantoms, Imaging, Radiation dose, Myocardial Perfusion Imaging, Pattern recognition, Tomography, Artificial intelligence, Tomography, X-Ray Computed, Perfusion, Algorithms - Abstract
Myocardial perfusion computed tomography (MPCT) imaging is commonly used to detect myocardial ischemia quantitatively. A limitation of MPCT is that an additional radiation dose is required compared to unenhanced CT because of its repeated dynamic data acquisition. Meanwhile, noise and streak artifacts in low-dose cases are the main factors that degrade the accuracy of quantifying myocardial ischemia and hamper the diagnostic utility of filtered-backprojection-reconstructed MPCT images. Moreover, MPCT images are composed of a series of 2D/3D images, which can naturally be regarded as a third- or fourth-order tensor, and the MPCT images are globally correlated along time and sparse across space. To quantify ischemia with higher fidelity from low-dose MPCT acquisitions, we propose a robust statistical iterative MPCT image reconstruction algorithm that incorporates tensor total generalized variation (TTGV) regularization into a penalized weighted least-squares framework. Specifically, the TTGV regularization fuses the spatial correlation of the myocardial structure and the temporal continuity of the contrast agent uptake during perfusion. An efficient iterative strategy is then developed for the objective function optimization. Comprehensive evaluations have been conducted on a digital XCAT phantom and a preclinical porcine dataset regarding the accuracy of the reconstructed MPCT images, the quantitative differentiation of ischemia, and the algorithm's robustness and efficiency.
- Published
- 2018
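The penalized weighted least-squares framework named above has the generic form (1/2)(y - Ax)ᵀW(y - Ax) + λR(x). The sketch below uses a plain 1D total variation penalty as a stand-in for the TTGV term; the identity system matrix, weights, and λ are all illustrative.

```python
import numpy as np

def pwls_objective(x, y, A, w, lam, reg):
    """Penalized weighted least-squares objective
    0.5*(y - Ax)^T W (y - Ax) + lam*R(x), with a pluggable penalty R
    standing in for the TTGV term (all values illustrative)."""
    r = y - A @ x
    return 0.5 * r @ (w * r) + lam * reg(x)

tv = lambda x: np.abs(np.diff(x)).sum()   # simple 1D TV stand-in for TTGV

A = np.eye(3)                   # identity "system matrix" for the toy
y = np.array([1.0, 5.0, 1.0])   # noisy measurement with a spike
w = np.ones(3)                  # statistical weights
smooth = np.array([1.0, 1.0, 1.0])

# the smooth candidate beats the exact-data-fit candidate once the
# penalty is weighted in
print(pwls_objective(smooth, y, A, w, 3.0, tv),
      pwls_objective(y, y, A, w, 3.0, tv))  # 8.0 24.0
```

The real algorithm replaces `tv` with the tensor TGV of the 4D perfusion sequence and `w` with the measurement-dependent variances, then minimizes iteratively.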
11. A kinematic hardening constitutive model for the uniaxial cyclic stress–strain response of magnesium sheet alloys at room temperature
- Author
-
Wufan Chen, Miaolin Feng, Zhitao He, and Fenghua Wang
- Subjects
010302 applied physics ,Cyclic stress ,Materials science ,Polymers and Plastics ,Magnesium ,Constitutive equation ,Metallurgy ,Metals and Alloys ,chemistry.chemical_element ,02 engineering and technology ,Strain hardening exponent ,021001 nanoscience & nanotechnology ,01 natural sciences ,Surfaces, Coatings and Films ,Electronic, Optical and Magnetic Materials ,Biomaterials ,chemistry ,0103 physical sciences ,Hardening (metallurgy) ,Kinematic hardening ,Strain response ,0210 nano-technology - Published
- 2017
12. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning
- Author
-
Yongjie Li, Dezhong Yao, Jonathan Yao, and Wufan Chen
- Subjects
Male, Mathematical optimization, Optimization problem, Meta-optimization, Population, Models, Biological, Robustness (computer science), Conjugate gradient method, Humans, Computer Simulation, Radiology, Nuclear Medicine and imaging, Multi-swarm optimization, Radiometry, Metaheuristic, Mathematics, Radiological and Ultrasound Technology, Radiotherapy Planning, Computer-Assisted, Prostatic Neoplasms, Particle swarm optimization, Dose-Response Relationship, Radiation, Radiotherapy Dosage, Treatment Outcome, Body Burden, Radiotherapy, Conformal, Algorithm, Algorithms, Relative Biological Effectiveness - Abstract
Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Despite many efforts, it remains unsatisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of beam angle optimization. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique, first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are carried out iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function, and a population of these individuals is evolved through generations by cooperation and competition among the individuals. Optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, preliminary performance comparisons indicate that, as a whole, the PSO-based algorithm outperforms, or at least competes with, the genetic algorithm (GA)-based algorithm in computation time and robustness. In conclusion, the introduced PSO algorithm is a promising solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
- Published
- 2005
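The PSO loop described above (personal bests, a global best, and velocity updates mixing inertia with attraction toward both) can be sketched on a toy beam-angle objective. The quadratic "plan quality" below stands in for the physical dose objective, and all PSO coefficients are illustrative defaults, not the paper's tuning.

```python
import numpy as np

def pso(objective, dim, n=20, iters=200, lo=0.0, hi=360.0, seed=0):
    """Bare-bones particle swarm optimization over angle vectors in
    [lo, hi]; inertia 0.7 and acceleration 1.5 are common defaults."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# toy "plan quality": optimum at beam angles (90, 270); NOT a dose model
quality = lambda a: (a[0] - 90.0) ** 2 + (a[1] - 270.0) ** 2
best, fbest = pso(quality, dim=2)
print(best, fbest)
```

In BASPSO, evaluating `objective` for one particle itself involves a full CG intensity-map optimization, which is where the bulk of the computation lives.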
13. Thoracic low-dose CT image processing using an artifact suppressed large-scale nonlocal means.
- Author
-
Yang Chen, Zhou Yang, Yining Hu, Guanyu Yang, Yongcheng Zhu, Yinsheng Li, Limin Luo, Wufan Chen, and Toumoulin, Christine
- Subjects
CHEST X rays, TOMOGRAPHY, IMAGE processing, MEDICAL artifacts, THERAPEUTIC use of x-rays, RADIATION exposure, RADIATION doses - Abstract
The x-ray exposure of patients has become a major concern in computed tomography (CT), and minimizing radiation exposure has been one of the major efforts in the CT field. Because of the abundance of high-attenuation tissue in the human chest, thoracic low-dose CT (LDCT) images acquired under low-dose scan protocols tend to be severely degraded by excessive mottled noise and non-stationary streak artifacts. Their removal is a challenging task because streak artifacts with directional prominence are often hard to discriminate from the attenuation information of normal tissues. This paper describes a two-step processing scheme called 'artifact suppressed large-scale nonlocal means' for suppressing both noise and artifacts in thoracic LDCT images. Specific scale and direction properties were exploited to discriminate the noise and artifacts from image structures. A parallel implementation speeds up the whole processing by more than 100 times. Phantom and patient CT images were both acquired for evaluation purposes. Comparative qualitative and quantitative analyses were performed, allowing conclusions on the efficacy of our method in improving thoracic LDCT data.
- Published
- 2012
- Full Text
- View/download PDF
14. Learning image context for segmentation of the prostate in CT-guided radiotherapy.
- Author
-
Wei Li, Shu Liao, Qianjin Feng, Wufan Chen, and Dinggang Shen
- Subjects
PROSTATE cancer treatment, CANCER tomography, RADIOTHERAPY, IMAGE reconstruction, TUMOR classification, ELECTROTHERAPEUTICS - Abstract
Accurate segmentation of the prostate is key to the success of external beam radiotherapy of prostate cancer. However, accurate segmentation of the prostate in computed tomography (CT) images remains challenging, mainly due to three factors: (1) low image contrast between the prostate and its surrounding tissues, (2) unpredictable prostate motion across different treatment days, and (3) large variations in the intensities and shapes of the bladder and rectum around the prostate. In this paper, an online-learning, patient-specific classification method based on location-adaptive image context is presented to deal with these challenging issues and achieve precise segmentation of the prostate in CT images. Specifically, two sets of location-adaptive classifiers are placed along the two coordinate directions of the planning image space of a patient and trained with the planning image and the previously segmented treatment images of the same patient to jointly perform prostate segmentation for a new treatment image of the same patient. In particular, each location-adaptive classifier, which itself consists of a set of sequential subclassifiers, is recursively trained with both static image appearance features and iteratively updated image context features (extracted at different scales and orientations) for better identification of each prostate region. The proposed learning-based prostate segmentation method has been extensively evaluated on 161 images of 11 patients, each with more than nine daily three-dimensional treatment CT images. Our method achieves a mean Dice value of 0.908 and a mean ± SD average surface distance of 1.40 ± 0.57 mm. Its performance is also compared with three other prostate segmentation methods, with the proposed method achieving the best segmentation accuracy among all methods under comparison.
- Published
- 2012
- Full Text
- View/download PDF
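The iteratively updated context features in the abstract above follow the auto-context pattern: each pass re-scores a voxel using fixed appearance evidence plus features computed from the previous pass's probability map. The sketch below replaces the trained subclassifiers with a simple blend; the blend weight, neighborhood-mean context, and 1D setting are all illustrative assumptions.

```python
import numpy as np

def auto_context_pass(appearance, prob, alpha=0.5):
    """One auto-context-style iteration: blend fixed appearance evidence
    with a context feature (the neighborhood mean of the previous
    probability map). The real method trains location-adaptive
    classifiers instead; this blend is illustrative."""
    padded = np.pad(prob, 1, mode='edge')
    context = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return alpha * appearance + (1.0 - alpha) * context

appearance = np.array([0.9, 0.2, 0.8, 0.85, 0.1])  # voxel 1 looks ambiguous
prob = appearance.copy()
for _ in range(3):
    prob = auto_context_pass(appearance, prob)
print(prob[1] > appearance[1])  # True: confident neighbors pull it up
```

This captures why context features help: a voxel with ambiguous appearance inherits evidence from confidently labeled neighbors over successive passes.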
15. Promote quantitative ischemia imaging via myocardial perfusion CT iterative reconstruction with tensor total generalized variation regularization.
- Author
-
Chengwei Gu, Dong Zeng, Jiahui Lin, Sui Li, Ji He, Hao Zhang, Zhaoying Bian, Shanzhou Niu, Zhang Zhang, Jing Huang, Bo Chen, Dazhe Zhao, Wufan Chen, and Jianhua Ma
- Subjects
ISCHEMIA diagnosis, COMPUTED tomography, MYOCARDIAL perfusion imaging - Abstract
Myocardial perfusion computed tomography (MPCT) imaging is commonly used to detect myocardial ischemia quantitatively. A limitation of MPCT is that an additional radiation dose is required compared to unenhanced CT because of its repeated dynamic data acquisition. Meanwhile, noise and streak artifacts in low-dose cases are the main factors that degrade the accuracy of quantifying myocardial ischemia and hamper the diagnostic utility of filtered-backprojection-reconstructed MPCT images. Moreover, MPCT images are composed of a series of 2D/3D images, which can naturally be regarded as a third- or fourth-order tensor, and the MPCT images are globally correlated along time and sparse across space. To quantify ischemia with higher fidelity from low-dose MPCT acquisitions, we propose a robust statistical iterative MPCT image reconstruction algorithm that incorporates tensor total generalized variation (TTGV) regularization into a penalized weighted least-squares framework. Specifically, the TTGV regularization fuses the spatial correlation of the myocardial structure and the temporal continuity of the contrast agent uptake during perfusion. An efficient iterative strategy is then developed for the objective function optimization. Comprehensive evaluations have been conducted on a digital XCAT phantom and a preclinical porcine dataset regarding the accuracy of the reconstructed MPCT images, the quantitative differentiation of ischemia, and the algorithm's robustness and efficiency.
- Published
- 2018
- Full Text
- View/download PDF
16. A kinematic hardening constitutive model for the uniaxial cyclic stress–strain response of magnesium sheet alloys at room temperature.
- Author
-
Zhitao He, Wufan Chen, Fenghua Wang, and Miaolin Feng
- Published
- 2017
- Full Text
- View/download PDF
17. Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction.
- Author
-
Shiting Fang, Huafeng Wang, Yueliang Liu, Minghui Zhang, Wei Yang, Qianjin Feng, Wufan Chen, and Yu Zhang
- Subjects
HIGH resolution imaging, COMPUTED tomography, LUNG disease diagnosis - Abstract
Lung 4D computed tomography (4D-CT), a time-resolved CT data acquisition, plays an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize radiation-related health risk, so resolution enhancement along the superior–inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on patch-based low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed using a patch-searching strategy. Thereafter, singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The recovered high-resolution patches are finally assembled to form the entire image. This method is extensively evaluated on two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%–33.4% and the edge width by 11.4%–24.3% relative to linear interpolation, back projection (BP), and Zhang et al.'s algorithm. In all experiments, the proposed method outperforms various interpolation methods as well as BP and Zhang et al.'s method, indicating the effectiveness and competitiveness of the proposed algorithm.
- Published
- 2017
- Full Text
- View/download PDF
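The core recovery step in the abstract above is singular value shrinkage applied to a matrix of grouped similar patches. A minimal sketch of that step follows; it is an illustration of the technique, not the paper's implementation, and the patch size, similarity grouping, and threshold `tau` are arbitrary choices here.

```python
import numpy as np

def sv_shrink(patch_matrix, tau):
    """Soft-threshold the singular values of a matrix whose columns are
    vectorized similar patches, returning a low-rank estimate."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # singular value shrinkage
    return (U * s_shrunk) @ Vt

# Demo: a stack of 8 similar 16-pixel patches is (ideally) rank-1;
# shrinkage suppresses the noise-dominated singular values.
rng = np.random.default_rng(0)
base = np.outer(rng.standard_normal(16), np.ones(8))   # identical patches
noisy = base + 0.2 * rng.standard_normal(base.shape)
denoised = sv_shrink(noisy, tau=1.5)
```

In the full method each denoised patch stack would additionally be constrained by the image degradation model before the patches are assembled back into the volume.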
18. Iterative reconstruction for dual energy CT with an average image-induced nonlocal means regularization.
- Author
- Houjin Zhang, Dong Zeng, Jiahui Lin, Hao Zhang, Zhaoying Bian, Jing Huang, Yuanyuan Gao, Shanli Zhang, Hua Zhang, Qianjin Feng, Zhengrong Liang, Wufan Chen, and Jianhua Ma
- Subjects
DUAL energy CT (Tomography), REAR-screen projection, MEDICAL imaging systems
- Abstract
Reducing radiation dose in dual energy computed tomography (DECT) is highly desirable, but it may lead to excessive noise in filtered backprojection (FBP) reconstructed DECT images, which inevitably increases diagnostic uncertainty. To obtain clinically acceptable DECT images from low-mAs acquisitions, in this work we develop a novel scheme based on the measured DECT data. In this scheme, inspired by the success of edge-preserving non-local means (NLM) filtering in CT imaging and the intrinsic characteristics underlying DECT images, i.e. global correlation and non-local similarity, an averaged-image-induced NLM-based (aviNLM) regularization is incorporated into the penalized weighted least-squares (PWLS) framework. Specifically, the presented NLM-based regularization is designed on the average of the acquired DECT images, which takes the image similarity between the two energies into consideration. In addition, the weighted least-squares term takes into account the DECT data-dependent variance. For simplicity, the presented scheme is termed 'PWLS-aviNLM'. The performance of the presented PWLS-aviNLM algorithm was validated and evaluated on digital phantom, physical phantom and patient data. Extensive experiments confirmed that the presented PWLS-aviNLM algorithm quantitatively outperforms the FBP, PWLS-TV and PWLS-NLM algorithms. More importantly, it delivers the best qualitative results, with the finest details and the fewest noise-induced artifacts, owing to the aviNLM regularization learned from the DECT images. This study demonstrates the feasibility and efficacy of the presented PWLS-aviNLM algorithm for improving DECT reconstruction and the resulting material decomposition. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
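The distinctive ingredient above is that the NLM weights are computed once from the average of the two energy images and then shared by both channels. A minimal 1D sketch of such guide-image NLM weights follows; the search window, patch radius, and bandwidth `h` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def nlm_weights(guide, i, search=5, half=1, h=0.1):
    """Non-local means weights for pixel i of a 1D signal, computed from a
    guide image (in the aviNLM idea, the average of the two DECT energy
    images, so both channels share one set of weights)."""
    n = len(guide)
    center = guide[i - half:i + half + 1]
    idx, w = [], []
    for j in range(max(i - search, half), min(i + search, n - 1 - half) + 1):
        d2 = np.mean((center - guide[j - half:j + half + 1]) ** 2)
        idx.append(j)
        w.append(np.exp(-d2 / h ** 2))     # patch-similarity weight
    w = np.asarray(w)
    return np.asarray(idx), w / w.sum()

# Demo: near a sharp edge, weight mass stays on the same side of the edge,
# which is what makes the resulting quadratic penalty edge-preserving.
guide = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1., 1.])
idx, w = nlm_weights(guide, i=2)
```

A regularizer built from these weights would penalize sum_ij w_ij (x_i - x_j)^2 inside the PWLS objective.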
19. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization.
- Author
- Hua Zhang, Jianhua Ma, Zhaoying Bian, Dong Zeng, Qianjin Feng, and Wufan Chen
- Subjects
CONE beam computed tomography, FOUR-dimensional imaging, MATHEMATICAL regularization
- Abstract
Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction, however, is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper we propose a motion-compensated total variation regularization approach that fully exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for the reconstruction and minimize it using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that introducing additional temporal correlation along the phase direction improves 4D-CBCT image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
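The regularizer described above, 3D spatial TV plus 1D temporal TV on the motion-compensated sequence, can be sketched directly. This is an illustration of the objective only (the ME/MC step and the variable-splitting solver are omitted), and the balance knob `lam_t` is an assumption.

```python
import numpy as np

def spatiotemporal_tv(vol4d, lam_t=1.0):
    """Anisotropic 3D spatial TV plus 1D temporal TV of a 4D volume with
    axes (phase, z, y, x). After motion compensation the sequence is
    pseudo-static, so the temporal term becomes small."""
    tv_spatial = sum(np.abs(np.diff(vol4d, axis=a)).sum() for a in (1, 2, 3))
    tv_temporal = np.abs(np.diff(vol4d, axis=0)).sum()
    return tv_spatial + lam_t * tv_temporal

# Demo: a perfectly compensated sequence repeats one frame, so its
# temporal TV vanishes; residual inter-phase motion raises the objective.
frame = np.zeros((4, 4, 4))
frame[1:3, 1:3, 1:3] = 1.0
static = np.stack([frame] * 3)                                    # compensated
moving = np.stack([np.roll(frame, p, axis=1) for p in range(3)])  # uncompensated
```

This is why applying the penalty after ME/MC is effective: temporal differences then measure noise rather than anatomy sliding between phases.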
20. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study.
- Author
- Dong Zeng, Changfei Gong, Zhaoying Bian, Jing Huang, Xinyu Zhang, Hua Zhang, Lijun Lu, Shanzhou Niu, Zhang Zhang, Zhengrong Liang, Qianjin Feng, Wufan Chen, and Jianhua Ma
- Subjects
CORONARY disease, MYOCARDIAL reperfusion, COMPUTED tomography, ENHANCED magnetoresistance, RADIATION, MAGNETIC resonance imaging
- Abstract
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization derive from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with other existing deconvolution algorithms in the digital phantom studies, and similar gains are obtained in the porcine data experiment. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
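The "adaptive-weighted" part of AwTTV means finite differences are down-weighted where the local gradient is large, so genuine edges are penalized less than noise. A 2D sketch of one spatial slice of that idea follows; the exponential weight form and the scale `delta` are assumptions for illustration, not the paper's exact weighting.

```python
import numpy as np

def awtv(img, delta=0.1):
    """Adaptive-weighted anisotropic total variation of a 2D frame:
    each absolute difference is multiplied by exp(-|diff|/delta),
    shrinking the penalty across strong edges."""
    dx = np.diff(img, axis=0)
    dy = np.diff(img, axis=1)
    wx = np.exp(-np.abs(dx) / delta)     # near 0 across strong edges
    wy = np.exp(-np.abs(dy) / delta)
    return (wx * np.abs(dx)).sum() + (wy * np.abs(dy)).sum()

# Demo: a clean step edge is penalized far less than under plain TV.
step = np.zeros((6, 6))
step[:, 3:] = 1.0
plain_tv = np.abs(np.diff(step, axis=0)).sum() + np.abs(np.diff(step, axis=1)).sum()
```

In the full tensor version, the same weighting would extend across the temporal axis of the sequential MPCT frames.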
21. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data.
- Author
- Tingting Wang, Lei Cao, Wei Yang, Qianjin Feng, Wufan Chen, and Yu Zhang
- Subjects
FOUR-dimensional imaging, COMPUTED tomography, CANCER radiotherapy research, OPTICAL resolution, BIG data, SIMULATION methods & models
- Abstract
Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) using an adaptive patch-based projection onto convex sets (POCS) approach which, in contrast to the global POCS SR algorithm, recovers fine details with fewer artifacts. The main contribution of this patch-based approach is that interfering local structures from other phases can be rejected by employing an adaptive similar-patch selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
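POCS super-resolution alternates projections onto constraint sets, the central one being data consistency: the degraded high-resolution estimate must reproduce the low-resolution observation. A minimal 1D sketch of that single projection follows, assuming a simple block-averaging degradation model (the paper's patch-adaptive selection and full constraint sets are not shown).

```python
import numpy as np

def pocs_project(hr, lr, factor=2):
    """Project a high-resolution 1D estimate onto the data-consistency set
    { x : block-averaging x by `factor` equals the LR observation }.
    Distributing the residual evenly within each block is the
    least-squares projection for an averaging degradation operator."""
    blocks = hr.astype(float).reshape(-1, factor).copy()
    residual = lr - blocks.mean(axis=1)   # violation of the LR constraint
    blocks += residual[:, None]           # minimal per-block correction
    return blocks.ravel()

hr0 = np.arange(8.0)                     # initial HR guess
lr = np.array([1.0, 2.0, 3.0, 4.0])      # observed LR values
hr1 = pocs_project(hr0, lr)
```

Iterating such projections (interleaved with, e.g., smoothness or patch-similarity constraints) is what drives a POCS SR scheme toward a feasible high-resolution image.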
22. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.
- Author
- Jinhong Huang, Li Guo, Qianjin Feng, Wufan Chen, and Yanqiu Feng
- Subjects
IMAGE reconstruction, MAGNETIC resonance imaging, MEDICAL imaging systems, MATHEMATICAL optimization, SIMULATION methods & models
- Abstract
Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
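The speed-up claimed above comes from orthogonality making both alternating updates closed-form: sparse coding reduces to thresholding D.T @ X, and the dictionary update is an orthogonal Procrustes problem solved by one SVD. The following is a sketch of that idea, not the paper's exact algorithm; problem sizes and the sparsity level `k` are arbitrary.

```python
import numpy as np

def hard_threshold(Z, k):
    """Keep the k largest-magnitude entries in each column of Z."""
    out = np.zeros_like(Z)
    top = np.argsort(-np.abs(Z), axis=0)[:k]
    cols = np.arange(Z.shape[1])
    out[top, cols] = Z[top, cols]
    return out

def orth_dict_step(X, D, k):
    """One alternation permitted by an orthogonal dictionary:
    exact k-sparse coding by hard-thresholding D.T @ X, then the
    orthogonal Procrustes dictionary update from the SVD of X @ A.T."""
    A = hard_threshold(D.T @ X, k)       # closed-form sparse coding
    U, _, Vt = np.linalg.svd(X @ A.T)
    return U @ Vt, A                     # nearest orthogonal matrix

# Demo: data synthesized as k-sparse codes under a hidden orthogonal
# dictionary; one step from the identity already reduces the residual.
rng = np.random.default_rng(1)
n, m, k = 8, 64, 2
D_true, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = D_true @ hard_threshold(rng.standard_normal((n, m)), k)
D0 = np.eye(n)
D1, A = orth_dict_step(X, D0, k)
```

In the full reconstruction, these updates would alternate with a k-space data-fidelity step, and the sparsity level would grow over the iterations as the abstract describes.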
23. Anatomy-guided brain PET imaging incorporating a joint prior model.
- Author
- Lijun Lu, Jianhua Ma, Qianjin Feng, Wufan Chen, and Arman Rahmim
- Subjects
BRAIN imaging, POSITRON emission tomography, BRAIN anatomy, IMAGE reconstruction, HIGH resolution imaging, A posteriori error analysis
- Abstract
We propose a maximum a posteriori (MAP) framework for incorporating information from co-registered anatomical images into PET image reconstruction through a novel anato-functional joint prior. The shape of the utilized hyperbolic potential function is determined by the voxel intensity differences within the anatomical image, while the penalization is computed based on voxel intensity differences in the reconstructed PET images. Using realistic simulated 18F-FDG PET scan data, we optimized the performance of the proposed MAP reconstruction with the joint prior (JP-MAP) and compared its performance with conventional 3D MLEM and 3D MAP reconstructions. The proposed JP-MAP reconstruction algorithm resulted in quantitatively enhanced reconstructed images, as demonstrated in an extensive FDG PET simulation study. The proposed method was also tested on a 20 min florbetapir patient study performed on the high-resolution research tomograph. It was shown to outperform conventional methods in visual as well as quantitative accuracy assessment (in terms of regional noise versus activity value performance). The JP-MAP method was also compared with another MR-guided MAP reconstruction method utilizing the Bowsher prior, and was seen to yield some quantitative enhancements, especially in the case of MR-PET mis-registrations, and a definitive improvement in computational performance. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
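The coupling described above, a hyperbolic potential on PET differences whose shape is set by the anatomical differences, can be rendered as a short 1D sketch. The specific functional form below (scale parameter growing with the anatomical difference, so PET edges that coincide with anatomical edges are penalized less) is a hypothetical illustration of this class of prior, not the paper's exact formula; `delta0` and `sigma` are invented knobs.

```python
import numpy as np

def joint_prior(pet, anat, delta0=1.0, sigma=0.5):
    """Sketch of an anato-functional joint prior on a 1D signal: a
    hyperbolic potential of neighbouring PET differences whose scale
    parameter is widened wherever the co-registered anatomical image
    also has an edge, reducing the penalty there."""
    d_pet = np.diff(pet)
    d_anat = np.diff(anat)
    delta = delta0 * (1.0 + np.abs(d_anat) / sigma)   # anatomy sets the scale
    return np.sum(np.sqrt(1.0 + (d_pet / delta) ** 2) - 1.0)
```

Inside a MAP objective, this term would be added (with a weight) to the PET log-likelihood, favoring smooth activity except across boundaries supported by the anatomy.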