14 results for "Jingyan Xu"
Search Results
2. Adaptive smoothing algorithms for MBIR in CT applications
- Author
- Frédéric Noo and Jingyan Xu
- Subjects
- Rate of convergence, Computer science, Convergence (routing), Convex optimization, Regular polygon, Iterative reconstruction, Extension (predicate logic), Type (model theory), Algorithm, Dual (category theory)
- Abstract
Many model-based image reconstruction (MBIR) methods for x-ray CT are formulated as convex minimization problems. If the objective function is nonsmooth, primal-dual algorithms are applicable, with the drawback of an increased memory cost due to the dual variables. Some algorithms recently developed for large-scale nonsmooth convex programs use adaptive smoothing techniques and are of the primal type; that is, they achieve convergence without introducing dual variables, hence without the increased memory. We discuss one such algorithm with an O(1/k) convergence rate, where k is the iteration number. We then present an extension of it that handles strongly convex objective functions. This new algorithm has the optimal convergence rate of O(1/k^2) for its problem class. Our preliminary numerical studies demonstrate competitive performance with respect to an alternative method.
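As an illustration of the adaptive-smoothing idea (a generic sketch, not the authors' algorithm), a nonsmooth term can be replaced by a Huber-smoothed approximation whose smoothing parameter shrinks roughly as 1/k, so only primal variables are kept in memory. The 1D denoising problem, step-size rule, and parameter names below are assumptions.

```python
import numpy as np

def huber_grad(t, mu):
    # Gradient of the Huber smoothing of |t| with parameter mu.
    return np.clip(t / mu, -1.0, 1.0)

def adaptive_smoothing_tv(y, lam=0.5, n_iter=200, mu0=1.0):
    """Primal-only sketch for min_x 0.5*||x - y||^2 + lam*||Dx||_1,
    with D the 1D finite-difference operator. The nonsmooth TV term is
    smoothed with mu_k = mu0/k, which decreases every iteration, so no
    dual variables are introduced (the memory point made in the abstract)."""
    x = y.copy()
    for k in range(1, n_iter + 1):
        mu = mu0 / k                          # adaptive smoothing parameter
        d = np.diff(x)                        # D x
        h = huber_grad(d, mu)                 # derivative of smoothed |.|
        g_tv = np.zeros_like(x)
        g_tv[:-1] -= h                        # accumulate D^T h
        g_tv[1:] += h
        grad = (x - y) + lam * g_tv           # gradient of smoothed objective
        step = 1.0 / (1.0 + 4.0 * lam / mu)   # 1/L for the smoothed objective
        x -= step * grad
    return x

# Example: denoise a piecewise-constant signal.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 50)
recon = adaptive_smoothing_tv(truth + 0.1 * rng.standard_normal(truth.size))
```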
- Published
- 2019
3. Model-based image reconstruction with a hybrid regularizer
- Author
- Frédéric Noo and Jingyan Xu
- Subjects
- Piecewise linear function, Data acquisition, Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Piecewise, A priori and a posteriori, Iterative reconstruction, Inverse problem, Constant (mathematics), Algorithm, Image (mathematics)
- Abstract
Model-based image reconstruction often includes regularizers to incorporate a priori image information and stabilize the ill-posed inverse problem. Popular edge-preserving regularizers penalize the first-order differences of image intensity values. In this work, we propose a hybrid regularizer that additionally penalizes the gradient of an auxiliary variable embedded in the half-quadratic reformulation of some popular edge-preserving functions. Because the auxiliary variable contains the gradient information, the hybrid regularizer penalizes both the first-order and the second-order image intensity differences, and hence encourages both piecewise-constant and piecewise-linear image intensity values. Our experiments, using combined physical data acquisition and computer simulations, demonstrate the effectiveness of the hybrid regularizer in reducing the staircasing artifact of the TV penalty and producing smooth intensity variations.
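A minimal sketch of the kind of hybrid penalty the abstract describes (illustrative only, not the authors' exact functional): an auxiliary variable b from an additive half-quadratic reformulation tracks the image gradient, and differences of b are penalized in addition, which favors piecewise-linear as well as piecewise-constant regions. The penalty form and parameter names are assumptions.

```python
import numpy as np

def hybrid_penalty(x, b, alpha=1.0, gamma=0.05, beta=1.0):
    """Illustrative hybrid regularizer for a 1D image x with auxiliary
    variable b (same length as np.diff(x)). Term 1 couples b to the
    first-order differences of x (half-quadratic form), term 2 keeps b
    sparse (edge preserving), and term 3 penalizes differences of b,
    i.e. second-order intensity variation."""
    dx = np.diff(x)
    term1 = 0.5 * alpha * np.sum((dx - b) ** 2)
    term2 = gamma * np.sum(np.abs(b))
    term3 = 0.5 * beta * np.sum(np.diff(b) ** 2)
    return term1 + term2 + term3
```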
- Published
- 2018
4. 3D-guided CT reconstruction using time-of-flight camera
- Author
- Benjamin M. W. Tsui, Katsuyuki Taguchi, Mahmoud Ismail, Emad M. Boctor, and Jingyan Xu
- Subjects
- Time-of-flight camera, Radon transform, Computer science, business.industry, Image quality, Coordinate system, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, X-ray detector, Iterative reconstruction, Linear interpolation, Imaging phantom, Camera auto-calibration, Pinhole camera model, Computer vision, Artificial intelligence, business, Projection (set theory), ComputingMethodologies_COMPUTERGRAPHICS, Camera resectioning
- Abstract
We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinate system. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to x-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest is segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate a 'trust region' for the projection data. The generated 'trust region' is used to augment the projection data: it serves to estimate the truncated, unmeasured projections using linear interpolation. Finally, the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed with the proposed method compared to reconstruction without truncation correction. This work shows that the proposed 3D-guided CT image reconstruction using a TOF camera is a feasible solution to the projection data truncation problem.
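A simplified sketch of the completion step described above (assumed interface, not the authors' implementation): for one view, the measured but truncated projection row is extended by linear interpolation so that it falls to zero at the body-support boundary given by the Radon transform of the TOF-camera contour (the 'trust region').

```python
import numpy as np

def extend_truncated_row(p, s, support_lo, support_hi, n_pad=32):
    """p: measured line integrals at detector positions s (truncated at the
    detector edges). support_lo / support_hi: object support limits for this
    view, from the contour's Radon transform. Edge values are ramped
    linearly down to zero at the support boundary."""
    left_s = left_p = right_s = right_p = np.empty(0)
    if support_lo < s[0]:                        # truncation on the low side
        left_s = np.linspace(support_lo, s[0], n_pad, endpoint=False)
        left_p = np.interp(left_s, [support_lo, s[0]], [0.0, p[0]])
    if support_hi > s[-1]:                       # truncation on the high side
        right_s = np.linspace(s[-1], support_hi, n_pad + 1)[1:]
        right_p = np.interp(right_s, [s[-1], support_hi], [p[-1], 0.0])
    return (np.concatenate([left_s, s, right_s]),
            np.concatenate([left_p, p, right_p]))
```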
- Published
- 2011
5. Iterative volume of interest image reconstruction in helical cone beam X-Ray CT using a stored system matrix approach
- Author
- Benjamin M. W. Tsui and Jingyan Xu
- Subjects
- business.industry, Image quality, Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Iterative reconstruction, Projection (linear algebra), law.invention, Image (mathematics), Matrix (mathematics), Projector, law, Computer data storage, Computer vision, Artificial intelligence, business, Image restoration, Interpolation
- Abstract
We present an efficient scheme for implementing the forward and backward projectors in helical cone-beam x-ray CT reconstruction using a pre-calculated and stored system matrix approach. Because of the symmetry of a helical source trajectory, it is sufficient to calculate and store the system matrix entries for one image slice only, for all source positions illuminating it. The system matrix entries for other image slices are copies of those stored values. In implementing an iterative image reconstruction method, the internal 3D image volume can be based on a non-Cartesian grid so that no system matrix interpolation is needed for the repeated forward and backward projection calculations. Using the proposed scheme, the memory requirement for reconstructing the full field of view of clinical scanners is manageable on current computing platforms. The same storage principle can be generalized and applied to iterative volume-of-interest (VOI) image reconstruction for helical cone-beam CT. We demonstrate, with both computer simulations and clinical patient data, the speed and image quality of VOI image reconstruction using the proposed stored system matrix approach. We believe the proposed method may help bring iterative reconstruction into clinical practice.
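A sketch of the index lookup implied by the helical symmetry (an illustration under an idealized periodic-helix assumption; the names and sampling choices below are mine, not the authors' data layout): the system-matrix block of view v acting on slice z is the stored block of a shifted view acting on slice 0, combined with an in-slice azimuthal index shift that is a pure permutation on a rotationally symmetric (non-Cartesian) voxel grid, so no interpolation is needed.

```python
def stored_entry_lookup(z, v, n_views, views_per_slice, n_phi, slices_per_turn):
    """Return which stored (slice-0) view and which azimuthal index shift
    reproduce the system-matrix block of view v acting on image slice z.
    Assumes the helix is sampled so that both shifts below are integers
    (views_per_slice source positions per slice advance, n_phi azimuthal
    voxels per ring, slices_per_turn slices per gantry rotation)."""
    v0 = (v - z * views_per_slice) % n_views              # stored view to reuse
    phi_shift = (z * (n_phi // slices_per_turn)) % n_phi  # in-slice index rotation
    return v0, phi_shift

# Example with assumed sampling: 984 views/turn, 64 slices/turn, 512 azimuthal voxels.
print(stored_entry_lookup(z=10, v=200, n_views=984,
                          views_per_slice=984 // 64, n_phi=512, slices_per_turn=64))
```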
- Published
- 2011
6. Design and development of MR-compatible SPECT systems for simultaneous SPECT-MR imaging of small animals
- Author
- Abdel Monem M. El-Sharkawy, James Hugg, William A. Edelstein, Bradley E. Patt, Douglas J. Wagenaar, Si Chen, Dirk Meier, Benjamin M. W. Tsui, and Jingyan Xu
- Subjects
- Physics, medicine.diagnostic_test, Pixel, business.industry, Detector, Collimator, Iterative reconstruction, Single-photon emission computed tomography, law.invention, Data acquisition, law, medicine, Computer vision, Artificial intelligence, business, Image restoration, Biomedical engineering, Radiofrequency coil
- Abstract
We describe the continuing design and development of MR-compatible SPECT systems for simultaneous SPECT-MR imaging of small animals. A first-generation prototype SPECT system was designed and constructed to fit inside an MRI system with a gradient bore inner diameter of 12 cm. It consists of 3 angularly offset rings of 8 detectors (1" x 1", 16 x 16 pixel MR-compatible solid-state CZT). A matching 24-pinhole collimator sleeve, made of a tungsten compound, provides projections from a common FOV of ~25 mm. A birdcage RF coil for MRI data acquisition surrounds the collimator. The SPECT system was tested inside a clinical 3T MRI system. Minimal interference was observed on the simultaneously acquired SPECT and MR images. We developed a sparse-view image reconstruction method based on accurate modeling of the point response function (PRF) of each of the 24 pinholes to provide artifact-free SPECT images. The stationary SPECT system provides relatively low resolution of 3-5 mm but high geometric efficiency of 0.5-1.2% for fast dynamic acquisition, demonstrated in a SPECT renal kinetics study using Tc-99m DTPA. Based on these results, a second-generation prototype MR-compatible SPECT system with an outer diameter of 20 cm that fits inside a mid-sized preclinical MRI system is being developed. It consists of 5 rings of 19 CZT detectors. The larger ring diameter allows the use of optimized multi-pinhole collimator designs offering, for example, system resolution up to ~1 mm, higher geometric efficiency, or lower system resolution without collimator rotation. The anticipated performance of the new system is supported by simulation data.
- Published
- 2011
7. SPECT data acquisition and image reconstruction in a stationary small animal SPECT/MRI system
- Author
- Douglas J. Wagenaar, Jianhua Yu, Si Chen, Dirk Meier, Jingyan Xu, Bradley E. Patt, and Benjamin M. W. Tsui
- Subjects
- medicine.diagnostic_test, Computer science, Aperture, business.industry, Collimator, Magnetic resonance imaging, Iterative reconstruction, Single-photon emission computed tomography, Imaging phantom, law.invention, law, medicine, Computer vision, Pinhole (optics), Artificial intelligence, Projection (set theory), business, Image resolution, Image restoration
- Abstract
The goal of the study was to investigate data acquisition strategies and image reconstruction methods for a stationary SPECT insert that can operate inside an MRI scanner with a 12 cm bore diameter for simultaneous SPECT/MRI imaging of small animals. The SPECT insert consists of 3 octagonal rings of 8 MR-compatible CZT detectors per ring surrounding a multi-pinhole (MPH) collimator sleeve. Each pinhole is constructed to project the field-of-view (FOV) onto one CZT detector. All 24 pinholes are focused on a cylindrical FOV of 25 mm in diameter and 34 mm in length. The data acquisition strategies we evaluated were optional collimator rotations to improve tomographic sampling; the image reconstruction methods were iterative ML-EM with and without compensation for the geometric response function (GRF) of the MPH collimator. For this purpose, we developed an analytic simulator that calculates the system matrix with the GRF models of the MPH collimator. The simulator was used to generate projection data of a digital rod phantom with pinhole aperture sizes of 1 mm and 2 mm and with different collimator rotation patterns. Iterative ML-EM reconstruction with and without GRF compensation was used to reconstruct the projection data from the central ring of 8 detectors only, and from all 24 detectors. Our results indicated that, without GRF compensation and at the default design of 24 projection views, the reconstructed images had significant artifacts. Accurate GRF compensation substantially improved the reconstructed image resolution and reduced image artifacts. With accurate GRF compensation, useful reconstructed images can be obtained using 24 projection views only. This last finding potentially enables dynamic SPECT (and/or MRI) studies in small animals, one of many possible application areas of the SPECT/MRI system. Further research efforts are warranted, including experimentally measuring the system matrix for improved geometrical accuracy, incorporating the co-registered MRI image in SPECT reconstruction, and exploring potential applications of the simultaneous small-animal SPECT/MRI system, including dynamic SPECT studies.
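For reference, the ML-EM update mentioned above has the standard multiplicative form sketched below (a generic sketch, not the authors' code); GRF compensation enters only through the entries of the system matrix A, which here is an assumed dense array.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Standard ML-EM. A: system matrix (n_bins x n_voxels) whose entries
    include the multi-pinhole geometric response when GRF compensation is
    desired; y: measured projection counts."""
    x = np.ones(A.shape[1])                    # uniform initial estimate
    sens = np.maximum(A.sum(axis=0), 1e-12)    # sensitivity image A^T 1
    for _ in range(n_iter):
        yhat = np.maximum(A @ x, 1e-12)        # forward projection
        x *= (A.T @ (y / yhat)) / sens         # multiplicative EM update
    return x
```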
- Published
- 2010
8. Enhanced discrimination of calcified and soft arterial plaques using computed tomography with a multi-energy-window photon counting x-ray detector
- Author
- Eric C. Frey, Douglas J. Wagenaar, Xiaolan Wang, Jingyan Xu, Bradley E. Patt, and Katsuyuki Taguchi
- Subjects
- Physics, medicine.medical_specialty, Energy window, Photon, medicine.diagnostic_test, business.industry, Quantitative Biology::Tissues and Organs, Detector, X-ray detector, Computed tomography, Photon counting, Signal-to-noise ratio, Optics, medicine, Medical physics, business, Energy (signal processing)
- Abstract
This work aims at discriminating between soft and calcified coronary artery plaques using microCT with a multi-energy-window photon counting x-ray detector (PCXD). We have previously investigated a solid-state x-ray detector that can count individual photons in different energy windows. The data from these energy windows may be treated as multiple simultaneous x-ray acquisitions within non-overlapping energy windows that can provide additional information about tissue differences. In this work, we simulated a photon counting detector with five energy windows. We investigated two approaches for using the energy information provided by this detector. First, we applied energy weighting to the reconstructions from the different energy windows to improve the signal-to-noise ratio between calcified and soft plaques; this resulted in a significant improvement in the signal-to-noise ratio. Second, we applied the basis material decomposition method to discriminate coronary artery plaques based on their calcium content. The results were compared with those obtained using dual-kVp material decomposition. We observed significantly improved contrast-to-noise ratios for the PCXD-based approaches.
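A sketch of a generic energy-weighting step (the specific weighting used in the paper is not stated in the abstract, so this is an assumption): weighting each window image by its plaque contrast divided by its noise variance is the matched-filter combination that maximizes the SNR of the combined image when the windows are independent.

```python
import numpy as np

def energy_weighted_image(window_images, contrast, variance):
    """window_images: list of reconstructions, one per energy window.
    contrast[i]: expected calcified-vs-soft signal difference in window i.
    variance[i]: noise variance of window i. Returns the SNR-optimal
    linear combination under an independence assumption."""
    w = np.asarray(contrast, float) / np.asarray(variance, float)
    w /= w.sum()                                   # normalize the weights
    return sum(wi * img for wi, img in zip(w, window_images))
```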
- Published
- 2009
9. A dual formulation of a penalized maximum likelihood x-ray CT reconstruction problem
- Author
- Benjamin M. W. Tsui, Jingyan Xu, Katsuyuki Taguchi, and Grant T. Gullberg
- Subjects
- Hessian matrix, Mathematical optimization, Optimization problem, Computer science, Maximum likelihood, Diagonal, Duality (optimization), Reconstruction algorithm, Iterative reconstruction, symbols.namesake, Matrix (mathematics), symbols, Projection (set theory), Gradient descent, Algorithm
- Abstract
This work studies the dual formulation of a penalized maximum likelihood reconstruction problem in x-ray CT. The primal objective function is a Poisson log-likelihood combined with a weighted cross-entropy penalty term. The dual formulation of the primal optimization problem is derived and the optimization procedure outlined. The dual formulation better exploits the structure of the problem, which translates to faster convergence of iterative reconstruction algorithms. A gradient descent algorithm is implemented for solving the dual problem, and its performance is compared with the filtered back-projection algorithm and with the primal formulation optimized using surrogate functions. The 3D XCAT phantom and an analytical x-ray CT simulator are used to generate noise-free and noisy CT projection data sets with monochromatic and polychromatic x-ray spectra. The reconstructed images from the dual formulation delineate the internal structures at early iterations better than the primal formulation using surrogate functions; however, the body contour is slower to converge in the dual than in the primal formulation. The dual formulation demonstrates a better noise-resolution tradeoff near the internal organs than the primal formulation. Since the surrogate functions in general can provide a diagonal approximation of the Hessian matrix of the objective function, further convergence speed-up may be achieved by deriving the surrogate function of the dual objective function.
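The abstract does not write out the objective; a hedged reconstruction of the primal problem it describes, with notation assumed here, is the negative Poisson log-likelihood for monochromatic transmission data (up to constants) plus a weighted cross-entropy term:

```latex
% Assumed notation: y_i measured counts, b_i blank-scan counts,
% [Ax]_i the line integral of ray i, w_j and r_j penalty weights / reference image.
\min_{x \ge 0}\; \Phi(x) =
  \sum_i \Big( b_i \, e^{-[Ax]_i} + y_i \, [Ax]_i \Big)
  + \beta \sum_j w_j \, x_j \ln\frac{x_j}{r_j}
```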
- Published
- 2009
10. Electronic noise compensation in iterative x-ray CT reconstruction
- Author
- Benjamin M. W. Tsui and Jingyan Xu
- Subjects
- symbols.namesake, Mathematical optimization, Compound Poisson distribution, Compound Poisson process, Expectation–maximization algorithm, symbols, Applied mathematics, Iterative reconstruction, Likelihood function, Poisson distribution, Conditional expectation, Random variable, Mathematics
- Abstract
Electronic noise compensation can be important for low-dose x-ray CT applications where severe photon starvation occurs. For clinical x-ray CT systems utilizing energy-integrating detectors, it has been shown that the detected x-ray intensity is compound Poisson distributed, instead of the Poisson distribution that is often studied in the literature. We model the electronic-noise-contaminated signal Z as the sum of a compound Poisson distributed random variable (r.v.) Y and a Gaussian distributed electronic noise N with known mean and variance. We formulate the iterative x-ray CT reconstruction problem with electronic noise compensation as a maximum-likelihood reconstruction problem. However, the likelihood function of Z is not analytically tractable; instead of working with it directly, we formulate the problem in the expectation-maximization (EM) framework and iteratively maximize the conditional expectation of the complete-data log-likelihood of Y. We further demonstrate that the conditional expectation of the surrogate function of the complete-data log-likelihood is a legitimate surrogate for the incomplete-data problem. Under certain linearity conditions on the surrogate function, a reconstruction algorithm with electronic noise compensation can be obtained by a simple modification of one previously derived without electronic noise considerations: the change incurred is to replace the unavailable, uncontaminated measurement Y by its conditional expectation E(Y|Z). The calculation of E(Y|Z) depends on the model of Y, N, and Z. We propose two methods for calculating this conditional expectation when Y follows a special compound Poisson distribution, the exponential dispersion (ED) model. Our methods can be regarded as an extension of similar approaches using the Poisson model to the compound Poisson model.
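The sketch below computes the replacement quantity E(Y|Z) for the simpler plain-Poisson signal plus Gaussian noise model, i.e. the "similar approaches using the Poisson model" that the abstract says it extends; the paper's compound-Poisson (exponential dispersion) case is more involved and is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, poisson

def cond_exp_poisson(z, lam, noise_mean=0.0, noise_std=2.0):
    """E(Y | Z = z) for Z = Y + N, Y ~ Poisson(lam), N ~ Normal(mean, std).
    The posterior weight of each candidate y is P(Y=y) * phi(z - y - mean)."""
    y = np.arange(int(lam + 10 * np.sqrt(lam) + 50) + 1)   # truncated support
    w = poisson.pmf(y, lam) * norm.pdf(z - y - noise_mean, scale=noise_std)
    if w.sum() == 0.0:
        return max(z - noise_mean, 0.0)                    # fallback for extreme z
    return float((y * w).sum() / w.sum())

# Example: a heavily attenuated ray where electronic noise matters.
print(cond_exp_poisson(z=3.2, lam=4.0))
```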
- Published
- 2008
11. Microcomputed tomography with a photon-counting x-ray detector
- Author
- Benjamin M. W. Tsui, I. Ninive, T. Orskaug, Douglas J. Wagenaar, Jingyan Xu, M. Kapusta, Bradley E. Patt, Eric C. Frey, and Katsuyuki Taguchi
- Subjects
- Physics, Scanner, Optics, Pixel, Physics::Instrumentation and Detectors, business.industry, Detector, X-ray detector, Image noise, business, Noise (electronics), Thresholding, Photon counting
- Abstract
In this work we used a novel CdTe photon counting x-ray detector capable of very high count rates to perform x-ray micro-computed tomography (microCT). The detector had 2 rows of 384 square pixels, each 1 mm in size. Charge signals from individual photons were integrated with a shaping time of ~60 ns and processed by an ASIC located in close proximity to the pixels. The ASIC had 5 energy thresholds with associated independent counters for each pixel. Owing to the thresholding, it is possible to eliminate dark-current contributions to image noise, and by subtracting counter outputs from adjacent thresholds it is possible to obtain the number of x-ray photon counts in 5 adjacent energy windows. The detector is capable of readout times faster than 5 ms. A prototype bench-top specimen μCT scanner was assembled, with tube-to-object and tube-to-detector distances of 11 cm and 82 cm, respectively. We used a conventional x-ray source to produce 80 kVp x-ray beams with tube currents up to 400 μA, resulting in count rates on the order of 600 kcps per pixel at the detector. Both phantoms and a dead mouse were imaged using acquisition times of 1.8 s per view at 1° steps around the object. The count rate loss (CRL) characteristics of the detector were measured by varying the tube current and corrected for using a paralyzable model. Images were reconstructed using analytical fan-beam reconstruction. The reconstructed images showed good contrast and noise characteristics, and those obtained from different energy windows demonstrated energy-dependent contrast, thus potentially allowing for material decomposition.
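The paralyzable count-rate-loss correction mentioned above amounts to inverting m = n * exp(-n * tau) for the true rate n; a short sketch follows (the dead time tau used below is an assumed value, not one reported in the paper).

```python
import numpy as np
from scipy.special import lambertw

def correct_paralyzable(measured_rate, tau):
    """Invert the paralyzable dead-time model m = n * exp(-n * tau).
    Valid on the low-rate branch (n * tau < 1, i.e. m * tau < 1/e)."""
    m = np.asarray(measured_rate, dtype=float)
    return (-lambertw(-m * tau).real) / tau

# Example: 600 kcps measured per pixel, assuming tau = 100 ns.
print(correct_paralyzable(600e3, 100e-9))   # ~6.4e5 true counts per second
```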
- Published
- 2007
12. A Poisson likelihood iterative reconstruction algorithm for material decomposition in CT
- Author
- Katsuyuki Taguchi, Jingyan Xu, Benjamin M. W. Tsui, and Eric C. Frey
- Subjects
- Noise, Attenuation, Basis function, Domain decomposition methods, Iterative reconstruction, Linear combination, Projection (set theory), Algorithm, Energy (signal processing), Mathematics
- Abstract
Emerging photon-counting detectors with energy discrimination ability for x-ray CT bin detected events according to the energy of the incoming photons, so multiple output channels with different energy thresholds can be obtained in one irradiation. The energy dependence of attenuation coefficients can be described by a linear combination of basis functions, e.g., Compton scatter and the photoelectric effect; their individual contributions can be differentiated using the multiple energy channels, and hence material characterization is made possible. The conventional analytic approach is a two-step process: first, decompose in the projection domain to obtain the sinograms corresponding to the coefficients of the basis functions; then apply FBP to obtain the individual material components. This two-step process may suffer in image quality and quantitative accuracy due to the lower counts in the separate energy channels and the approximation errors propagated from the projection-domain decomposition to the image domain. In this work we modeled the energy dependence of linear attenuation coefficients in our problem formulation and applied the optimization transfer principle to derive a Poisson-likelihood based algorithm for material decomposition from multiple energy channels. Our algorithm reconstructs the coefficients of the basis functions directly; therefore, the separate non-linear estimation step in the projection domain used in conventional approaches is avoided. We performed simulations to study the accuracy and noise properties of our method. We synthesized the linear attenuation coefficients at a reference energy and compared them with standard attenuation values provided by NIST. We also synthesized the attenuation maps at different effective energy bin centers corresponding to the different energy channels and compared the synthesized images with reconstructions from standard fan-beam FBP methods. Preliminary simulations showed that our reconstructed images have much better noise properties.
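A hedged reconstruction of the forward model underlying such an algorithm (the abstract gives no formulas, so the notation below is assumed): the attenuation is expanded in a photoelectric/Compton basis and the counts in each energy channel are Poisson with a spectrally weighted mean.

```latex
% Assumed notation: c_1, c_2 basis-coefficient images, f_PE(E) ~ E^{-3} the
% photoelectric basis, f_KN(E) the Klein-Nishina function, S_{ib}(E) the
% effective spectrum of energy channel b along ray i.
\mu(\mathbf{r},E) = c_1(\mathbf{r})\, f_{\mathrm{PE}}(E) + c_2(\mathbf{r})\, f_{\mathrm{KN}}(E),
\qquad
\bar{y}_{ib} = \int S_{ib}(E)\,
  \exp\!\Big(-\!\int_{L_i} \mu(\mathbf{r},E)\,\mathrm{d}\ell\Big)\,\mathrm{d}E,
\qquad
y_{ib} \sim \mathrm{Poisson}(\bar{y}_{ib})
```

The algorithm described above maximizes the corresponding Poisson likelihood directly over the coefficient images c_1 and c_2, rather than estimating basis sinograms first.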
- Published
- 2007
13. Image-domain material decomposition using photon-counting CT
- Author
- Eric C. Frey, Katsuyuki Taguchi, Mengxi Zhang, Benjamin M. W. Tsui, W. Paul Segars, and Jingyan Xu
- Subjects
- Physics, Photon, Optics, business.industry, Attenuation, Detector, Compton scattering, X-ray detector, business, Bin, Imaging phantom, Photon counting
- Abstract
Novel CdTe photon counting x-ray detectors (PCXDs) have been developed for very high count rates [1-4] suitable for x-ray micro computed tomography (μCT) scanners; such a detector counts photons within each of J energy bins. In this study, we investigate the use of the data in these energy bins for material decomposition using an image-domain approach. In this method, one image is reconstructed from the projection data of each energy bin; thus we have J images, one per energy bin, each associated with attenuation coefficients over a narrow energy width. We assume that the spread of energies in each bin is small, so the attenuation can be modeled using an effective energy for each bin. This approximation allows us to linearize the problem and thus simplify the inversion procedure. We then fit the J attenuation coefficients at each location x with the energy-attenuation function [5] and obtain either (1) photoelectric and Compton scattering components or (2) 2 or 3 basis-material components. We used computer simulations to evaluate this approach, generating projection data with three types of acquisition schemes: (A) five monochromatic energies; (B) five energy bins with a PCXD and an 80 kVp polychromatic x-ray spectrum; and (C) two kVp settings with an intensity-integrating detector. Total attenuation coefficients of reconstructed images and calculated effective atomic numbers were compared with data published by the National Institute of Standards and Technology (NIST). We developed a new materially defined "SmileyZ" phantom to evaluate the accuracy of the material decomposition methods. Preliminary results showed that a 3-basis-material decomposition (bone, water, and iodine) using the PCXD with 5 energy bins was the most promising approach for material decomposition.
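Under the narrow-bin/effective-energy approximation described above, the per-voxel fit is a small linear least-squares problem; a minimal sketch with assumed, illustrative numbers (not values from the paper):

```python
import numpy as np

def decompose_voxel(mu_meas, mu_basis):
    """mu_meas: length-J vector of reconstructed attenuation values at the
    J effective bin energies. mu_basis: J x M matrix whose columns are the
    basis materials' attenuation (e.g. water, bone, iodine) at the same
    energies. Returns the M basis coefficients for this voxel."""
    coeffs, *_ = np.linalg.lstsq(mu_basis, mu_meas, rcond=None)
    return coeffs

# Illustrative example: J = 5 bins, 2 basis materials, an 80/20 mixture.
mu_basis = np.array([[0.35, 0.60], [0.30, 0.45], [0.26, 0.36],
                     [0.23, 0.30], [0.21, 0.26]])        # 1/cm, assumed
mu_meas = mu_basis @ np.array([0.8, 0.2])
print(decompose_voxel(mu_meas, mu_basis))                # -> [0.8, 0.2]
```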
- Published
- 2007
14. Investigation of the use of photon counting x-ray detectors with energy discrimination capability for material decomposition in micro-computed tomography
- Author
- Yong Du, Xiaolan Wang, Eric C. Frey, Jingyan Xu, Benjamin M. W. Tsui, and Katsuyuki Taguchi
- Subjects
- Physics, Accuracy and precision, Photon, Optics, Pixel, business.industry, Detector, Monte Carlo method, X-ray detector, business, Energy (signal processing), Photon counting
- Abstract
Recently developed solid-state detectors combined with high-speed ASICs that allow individual pixel pulse processing may prove useful as detectors for small-animal micro-computed tomography. One appealing feature of these photon-counting x-ray detectors (PCXDs) is their ability to discriminate between photons with different energies and count them in a small number (2-5) of energy windows. The data in these energy windows may be thought of as arising from multiple simultaneous x-ray beams with individual energy spectra, and could thus potentially be used to perform material composition analysis. The goal of this paper was to investigate the potential advantages of PCXDs with multiple-energy-window counting capability as compared to traditional integrating detectors combined with image acquisition using x-ray beams at 2 different kVps. For the PCXDs, we investigated 3 potential sources of crosstalk: scatter in the object and detector, limited energy resolution, and pulse pileup. Using Monte Carlo simulations, we showed that scatter in the object and detector results in relatively little crosstalk between the data in the energy windows. To study the effects of energy resolution and pulse pileup, we performed simulations evaluating the accuracy and precision of basis decomposition using a detector with 2 or 5 energy windows and a single kVp compared to dual-kVp acquisitions with an integrating detector. We found that, for noisy data, the precision of estimating the thicknesses of two basis materials over a range of material compositions was better for the single-kVp, multiple-energy-window acquisitions than for the dual-kVp acquisitions with an integrating detector. The advantage of the multi-window acquisition was somewhat reduced when the energy resolution was degraded to 10 keV and when pulse pileup was included, but the standard deviations of the estimated thicknesses remained better by more than a factor of 2.
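For intuition, the sketch below shows a simplified basis-thickness estimate from multi-window counts along a single ray, using a log-linear least-squares fit under a narrow-window (effective-energy) assumption; the paper evaluates a proper statistical estimator, and the interface below is an assumption.

```python
import numpy as np

def estimate_thicknesses(counts, blank_counts, mu_bins):
    """counts / blank_counts: length-K attenuated and unattenuated counts
    per energy window. mu_bins: K x 2 matrix of the two basis materials'
    attenuation coefficients at each window's effective energy. Solves
    -log(counts/blank) = mu_bins @ t for the thickness vector t."""
    logs = -np.log(np.asarray(counts, float) / np.asarray(blank_counts, float))
    t, *_ = np.linalg.lstsq(mu_bins, logs, rcond=None)
    return t
```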
- Published
- 2007