Search Results (672 results)
2. A Novel Low Rank Smooth Flat-Field Correction Algorithm for Hyperspectral Microscopy Imaging.
- Author
Wang, Yukun, Gu, Yanfeng, and Li, Xiaomei
- Subjects
MICROSCOPY, ALGORITHMS, ADAPTIVE optics, VIGNETTING
- Abstract
A flat-field correction method for multiply measured hyperspectral microscopy imaging is proposed in this paper. As the most crucial preprocessing step in quantitative microscopic analysis, flat-field correction removes the uneven illumination caused by vignetting in microscopic images and guarantees the precision of the spatial and spectral information in hyperspectral microscopic imaging. In order to carry out flat-field correction and extract the uneven illumination simultaneously across groups of hyperspectral microscopic data containing hundreds of bands, two properties of vignetting are exploited: i) a low-rank property, reflected in the small amount of information that vignetting contains; and ii) local smoothness, observed as a gradual change in the brightness of the vignetting, which is typically equivalent to sparseness in the spatial frequency domain. Combining these two properties, a novel Low Rank Smooth Flat-field Correction (LRSFC) model, modified from common orthogonal basis extraction, is proposed, and the resulting optimization problem is solved with the alternating direction method of multipliers (ADMM), yielding a unique flat-field term with low-rank and smooth properties. Qualitative and quantitative experimental assessments indicate that LRSFC does not add extra cell texture to the extracted flat-field term, and that its performance is superior to other state-of-the-art flat-field correction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
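The LRSFC optimization above is nontrivial, but the correction it ultimately feeds is just a per-band division by the extracted flat-field term. A minimal Python sketch of that final step on a synthetic vignette (the function name, the mean-preserving rescaling convention, and the toy data are assumptions, not from the paper):

```python
import numpy as np

def apply_flat_field(cube, flat, eps=1e-8):
    """Divide out a flat-field estimate, then rescale so each band keeps
    its mean brightness."""
    corrected = cube / np.maximum(flat, eps)
    return corrected * flat.mean(axis=(-2, -1), keepdims=True)

# toy 4-band hyperspectral cube with a synthetic radial vignette
rng = np.random.default_rng(0)
h = w = 64
y, x = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.4 * ((y - h / 2) ** 2 + (x - w / 2) ** 2) / (h / 2) ** 2
cube = rng.uniform(0.5, 1.0, size=(4, h, w)) * vignette
corrected = apply_flat_field(cube, vignette[None])
```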
3. An Analytical Algorithm for Tensor Tomography From Projections Acquired About Three Axes.
- Author
Tao, Weijie, Rohmer, Damien, Gullberg, Grant T., Seo, Youngho, and Huang, Qiu
- Subjects
TENSOR fields, TOMOGRAPHY, DIFFUSION magnetic resonance imaging, ALGORITHMS, TISSUES, TENSOR products, HEART
- Abstract
Tensor fields are useful for modeling the structure of biological tissues. The challenge in measuring tensor fields involves acquiring sufficient scalar measurements that are physically achievable and reconstructing tensors from as few projections as possible for efficient applications in medical imaging. In this paper, we present a filtered back-projection algorithm for the reconstruction of a symmetric second-rank tensor field from directional X-ray projections about three axes. The tensor field is decomposed into a solenoidal and an irrotational component, each with three unknowns. Using the Fourier projection theorem, a filtered back-projection algorithm is derived to reconstruct the solenoidal and irrotational components from projections acquired around three axes. A simple illustrative phantom consisting of two spherical shells and a 3D digital cardiac diffusion image obtained from diffusion tensor MRI of an excised human heart are used to simulate directional X-ray projections. The simulations validate the mathematical derivations and demonstrate reasonable noise properties of the algorithm. The decomposition of the tensor field into solenoidal and irrotational components provides insight into the development of algorithms for reconstructing tensor fields with sufficient samples, in terms of the type of directional projections and the orbits necessary for acquiring the projections of the tensor field. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
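The derivation above leans on the Fourier projection theorem (central slice theorem). A quick numerical check of the scalar version of that identity, which the tensor-field derivation generalizes componentwise (toy phantom only; nothing here reproduces the paper's tensor algorithm):

```python
import numpy as np

# a square phantom; projecting along y and taking the 1-D FFT gives
# exactly the ky = 0 row of the 2-D spectrum
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
projection = img.sum(axis=0)               # parallel projection onto the x-axis
assert np.allclose(np.fft.fft(projection), np.fft.fft2(img)[0, :])
```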
4. Automated 3-D Neuron Tracing With Precise Branch Erasing and Confidence Controlled Back Tracking.
- Author
Liu, Siqi, Zhang, Donghao, Song, Yang, Peng, Hanchuan, and Cai, Weidong
- Subjects
NEURONS, MORPHOLOGY, MICROSCOPY, IMAGE processing, ALGORITHMS
- Abstract
The automatic reconstruction of single neurons from microscopic images is essential to enable large-scale data-driven investigations in neuron morphology research. However, few previous methods were able to generate satisfactory results automatically from 3-D microscopic images without human intervention. In this paper, we developed a new algorithm for automatic 3-D neuron reconstruction. The main idea of the proposed algorithm is to iteratively track backward from the potential neuronal termini to the soma centre. An online confidence score is computed to decide if a tracing iteration should be stopped and discarded from the final reconstruction. The performance improvements compared with previous methods come mainly from a more accurate estimation of the traced area and the confidence-controlled back-tracking algorithm. The proposed algorithm supports large-scale batch processing by requiring only one user-specified parameter for background segmentation. We benchmarked the proposed algorithm on images obtained from both the DIADEM challenge and the BigNeuron challenge, where it achieved state-of-the-art results. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. A Generalized Structured Low-Rank Matrix Completion Algorithm for MR Image Recovery.
- Author
Hu, Yue, Liu, Xiaohan, and Jacob, Mathews
- Subjects
PIECEWISE constant approximation, LOW-rank matrices, MATHEMATICAL regularization, ORTHOGONAL matching pursuit, MAGNETIC resonance imaging, ALGORITHMS, HANKEL matrices
- Abstract
The recent theory of mapping an image into a structured low-rank Toeplitz or Hankel matrix has become an effective approach to image restoration. In this paper, we introduce a generalized structured low-rank algorithm to recover images from their undersampled Fourier coefficients using infimal convolution regularizations. The image is modeled as the superposition of a piecewise constant component and a piecewise linear component. The Fourier coefficients of each component satisfy an annihilation relation, which results in a structured Toeplitz matrix. We exploit the low-rank property of the matrices to formulate a combined regularized optimization problem. In order to solve the problem efficiently and to avoid the high memory demand of the large-scale Toeplitz matrices, we introduce a fast and memory-efficient algorithm based on the half-circulant approximation of the Toeplitz matrix. We demonstrate our algorithm in the context of single- and multi-channel MR image recovery. Numerical experiments indicate that the proposed algorithm provides improved recovery performance over state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
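A one-dimensional toy illustration of the annihilation/low-rank property the abstract describes: the finite differences of a piecewise constant signal are sparse, so a structured matrix built from their Fourier coefficients is rank-deficient (the sizes and the choice of a Hankel rather than Toeplitz layout are arbitrary here):

```python
import numpy as np
from scipy.linalg import hankel

# a piecewise constant signal with 4 jumps: its finite differences are
# 4-sparse, so their Fourier coefficients are a sum of 4 exponentials and
# any Hankel matrix built from them has rank at most 4
N = 64
x = np.zeros(N)
x[10:30] = 1.0
x[40:55] = -0.5
d_hat = np.fft.fft(x - np.roll(x, 1))        # annihilation-relation data
H = hankel(d_hat[:32], d_hat[31:])           # 32 x 33 structured matrix
print(np.linalg.matrix_rank(H, tol=1e-6))    # -> 4
```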
6. Hip Landmark Detection With Dependency Mining in Ultrasound Image.
- Author
Xu, Jingyuan, Xie, Hongtao, Liu, Chuanbin, Yang, Fang, Zhang, Sicheng, Chen, Xun, and Zhang, Yongdong
- Subjects
ULTRASONIC imaging, ALGORITHMS, INFANT diseases, DIAGNOSIS
- Abstract
Developmental dysplasia of the hip (DDH) is a common and serious disease in infants. Hip landmark detection plays a critical role in diagnosing the development of the neonatal hip in ultrasound images. However, local confusion and regional weakening make this task challenging. To address these challenges, we explore the stable hip structure and the distinguishable local features to provide dependencies for hip landmark detection. In this paper, we propose a novel architecture named Dependency Mining ResNet (DM-ResNet), which performs end-to-end dependency mining for more accurate and much faster hip landmark detection. First, we convert landmark detection into heatmap estimation by ResNet to build a strong baseline architecture for fast and accurate detection. Second, a dependency mining module is explored to mine the dependencies and leverage both local and global information to reduce the local confusion and strengthen the weakened region. Third, we propose a simple but effective local voting algorithm (LVA) that seeks a trade-off between long-range and short-range dependencies in the hip ultrasound image. In addition, a dataset with 2000 annotated hip ultrasound images is constructed in our work; it is the first public hip ultrasound dataset for open research. Experimental results show that our method achieves excellent precision in hip landmark detection (average point error of 0.719 mm and successful detection rate within 1 mm of 79.9%). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
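DM-ResNet itself is not spelled out in this listing, but the heatmap-based detection it builds on is easy to sketch: predict one heatmap per landmark and decode its argmax. A generic Python illustration (the pixel spacing and the Gaussian toy heatmap are assumptions, not the paper's values):

```python
import numpy as np

def decode_landmark(heatmap, spacing_mm):
    """Read a landmark off a predicted heatmap as its argmax location."""
    iy, ix = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return iy * spacing_mm, ix * spacing_mm

# toy heatmap: Gaussian blob centred at pixel (40, 25)
y, x = np.mgrid[0:64, 0:64]
hm = np.exp(-((y - 40) ** 2 + (x - 25) ** 2) / (2 * 3.0 ** 2))
print(decode_landmark(hm, spacing_mm=0.1))   # -> (4.0, 2.5)
```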
7. Learned Low-Rank Priors in Dynamic MR Imaging.
- Author
Ke, Ziwen, Huang, Wenqi, Cui, Zhuo-Xu, Cheng, Jing, Jia, Sen, Wang, Haifeng, Liu, Xin, Zheng, Hairong, Ying, Leslie, Zhu, Yanjie, and Liang, Dong
- Subjects
MAGNETIC resonance imaging, THRESHOLDING algorithms, FLOWGRAPHS, DEEP learning, ALGORITHMS
- Abstract
Deep learning methods have achieved attractive performance in dynamic MR cine imaging. However, most of these methods are driven only by the sparse prior of MR images, while the important low-rank (LR) prior of dynamic MR cine images is not explored, which may limit further improvements in dynamic MR reconstruction. In this paper, a learned singular value thresholding (Learned-SVT) operator is proposed to exploit low-rank priors in dynamic MR imaging and obtain improved reconstruction results. In particular, we put forward a model-based unrolled sparse and low-rank network for dynamic MR imaging, dubbed SLR-Net. SLR-Net is defined over a deep network flow graph, which is unrolled from the iterative procedure of the iterative shrinkage-thresholding algorithm (ISTA) for optimizing a sparse and LR-based dynamic MRI model. Experimental results in a single-coil scenario show that the proposed SLR-Net can improve upon state-of-the-art compressed sensing (CS) methods and sparsity-driven deep learning-based methods, with strong robustness to different undersampling patterns, both qualitatively and quantitatively. Moreover, SLR-Net has been extended to a multi-coil scenario and achieved excellent reconstruction results compared with a sparsity-driven multi-coil deep learning-based method under high acceleration. Prospective reconstruction results on an open real-time dataset further demonstrate the capability and flexibility of the proposed method in real-time scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
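The core operator the abstract names, singular value thresholding, is compact enough to write out. A plain (non-learned) SVT sketch; SLR-Net's contribution is learning the threshold inside an unrolled network rather than fixing it as below:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

# toy usage: denoise a noisy rank-2 Casorati-style matrix
rng = np.random.default_rng(0)
L = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 30))
X = L + 0.1 * rng.standard_normal((100, 30))
L_hat = svt(X, tau=1.0)
```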
8. Efficient Enhancement of Stereo Endoscopic Images Based on Joint Wavelet Decomposition and Binocular Combination.
- Author
Sdiri, Bilel, Kaaniche, Mounir, Cheikh, Faouzi Alaya, Beghdadi, Azeddine, and Elle, Ole Jakob
- Subjects
IMAGE processing, ALGORITHMS, WAVELET transforms, DIGITAL image processing, SURGICAL complications
- Abstract
The success of minimally invasive interventions and the remarkable technological and medical progress have made endoscopic image enhancement a very active research field. Due to the intrinsic characteristics of the endoscopic domain and the surgical exercise, stereo endoscopic images may suffer from different degradations that affect their quality. Therefore, in order to provide surgeons with better visual feedback and improve the outcomes of possible subsequent processing steps, such as 3-D organ reconstruction/registration, it is worthwhile to improve stereo endoscopic image quality. To this end, we propose, in this paper, two joint enhancement methods that operate in the wavelet transform domain. More precisely, by resorting to a joint wavelet decomposition, the wavelet subbands of the right and left views are simultaneously processed to exploit the binocular vision properties. While the first proposed technique combines only the approximation subbands of both views, the second method combines all the wavelet subbands, yielding an inter-view processing fully adapted to the local features of the stereo endoscopic images. Experimental results, carried out on various stereo endoscopic datasets, have demonstrated the efficiency of the proposed enhancement methods in terms of perceived visual image quality. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
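A much-simplified sketch of the first method's idea, assuming PyWavelets is available: decompose both views, share a combined approximation subband, and reconstruct. The paper's actual binocular combination rule is more elaborate than the plain average used here:

```python
import numpy as np
import pywt  # PyWavelets

def combine_approximations(left, right, wavelet="db2"):
    """One-level joint decomposition: both views share a combined
    approximation subband; detail subbands stay per-view."""
    cA_l, det_l = pywt.dwt2(left, wavelet)
    cA_r, det_r = pywt.dwt2(right, wavelet)
    cA = 0.5 * (cA_l + cA_r)                 # naive binocular combination
    return (pywt.idwt2((cA, det_l), wavelet),
            pywt.idwt2((cA, det_r), wavelet))

rng = np.random.default_rng(0)
left, right = rng.random((64, 64)), rng.random((64, 64))
left_e, right_e = combine_approximations(left, right)
```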
9. NOVIFAST: A Fast Algorithm for Accurate and Precise VFA MRI T1 Mapping.
- Author
Ramos-Llorden, Gabriel, Vegas-Sanchez-Ferrero, Gonzalo, Bjork, Marcus, Vanhevel, Floris, Parizel, Paul M., San Jose Estepar, Raul, den Dekker, Arnold J., and Sijbers, Jan
- Subjects
MAGNETIC resonance imaging, ALGORITHMS, DIAGNOSTIC imaging, MEDICAL imaging systems, IMAGE processing
- Abstract
In quantitative magnetic resonance T1 mapping, the variable flip angle (VFA) steady-state spoiled gradient recalled echo (SPGR) imaging technique is popular, as it provides a series of high-resolution T1-weighted images in a clinically feasible time. Fast, linear methods that estimate T1 maps from these weighted images have been proposed, such as DESPOT1 and iteratively re-weighted linear least squares. More accurate non-linear least squares (NLLS) estimators exist, but these are generally much slower and require careful initialization. In this paper, we present NOVIFAST, a novel NLLS-based algorithm specifically tailored to VFA SPGR T1 mapping. By exploiting the particular structure of the SPGR model, a computationally efficient, yet accurate and precise T1 map estimator is derived. Simulation and in vivo human brain experiments demonstrate a twenty-fold speed gain of NOVIFAST compared with conventional gradient-based NLLS estimators while maintaining high precision and accuracy. Moreover, NOVIFAST is eight times faster than efficient implementations of the variable projection (VARPRO) method. Furthermore, NOVIFAST is shown to be robust against initialization. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
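NOVIFAST itself is not reproduced here, but the DESPOT1 baseline the abstract mentions is a two-line linear fit of the SPGR model and makes a useful reference point. A sketch with made-up acquisition parameters:

```python
import numpy as np

def despot1_t1(signals, flip_angles_rad, tr_ms):
    """Linear DESPOT1 fit of the SPGR model (the baseline named in the
    abstract, not the NOVIFAST estimator): S/sin(a) = E1 * S/tan(a) + const,
    with slope E1 = exp(-TR/T1)."""
    y = signals / np.sin(flip_angles_rad)
    x = signals / np.tan(flip_angles_rad)
    slope, intercept = np.polyfit(x, y, 1)
    return -tr_ms / np.log(slope)            # T1 = -TR / ln(E1)

# toy check with T1 = 900 ms, TR = 8 ms
tr, t1, m0 = 8.0, 900.0, 100.0
fa = np.deg2rad(np.array([2.0, 5.0, 10.0, 15.0]))
e1 = np.exp(-tr / t1)
s = m0 * np.sin(fa) * (1 - e1) / (1 - e1 * np.cos(fa))
print(despot1_t1(s, fa, tr))                 # ~900.0
```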
10. Spatially Consistent Supervoxel Correspondences of Cone-Beam Computed Tomography Images.
- Author
Pei, Yuru, Yi, Yunai, Ma, Gengyu, Kim, Tae-Kyun, Guo, Yuke, Xu, Tianmin, and Zha, Hongbin
- Subjects
CONE beam computed tomography, ORTHODONTICS, ALGORITHMS, MACHINE learning, RANDOM forest algorithms
- Abstract
Establishing dense correspondences of cone-beam computed tomography (CBCT) images is a crucial step for attribute transfer and morphological variation assessment in clinical orthodontics. In this paper, a novel method, the unsupervised spatially consistent clustering forest, is proposed to tackle the challenges of automatic supervoxel-wise correspondence of CBCT images. A complexity analysis of the proposed method with respect to the clustering hypotheses is provided, with a data-dependent learning guarantee. The learning bound considers both the sequential tree traversals determined by questions stored in branch nodes and the clustering compactness of leaf nodes. A novel tree-pruning algorithm, guided by the learning bound, is also proposed to remove locally inconsistent leaf nodes. The resulting forest yields spatially consistent affinity estimations, thanks to the pruning, which penalizes trees with inconsistent leaf assignments, and to the combinational contextual feature channels used to learn the forest. A forest-based metric is utilized to derive the pairwise affinities and dense correspondences of CBCT images. The proposed method has been applied to the label propagation of clinically captured CBCT images. In the experiments, the method outperforms variants of both supervised and unsupervised forest-based methods and state-of-the-art label-propagation methods, achieving mean Dice similarity coefficients of 0.92, 0.89, 0.94, and 0.93 for the mandible, the maxilla, the zygoma arch, and the teeth data, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
11. Modeling of Retinal Optical Coherence Tomography Based on Stochastic Differential Equations: Application to Denoising.
- Author
Tajmirriahi, Mahnoosh, Amini, Zahra, Hamidi, Arsham, Zam, Azhar, and Rabbani, Hossein
- Subjects
OPTICAL coherence tomography, IMAGE denoising, LAPLACIAN operator, DIFFERENTIAL operators, ALGORITHMS, SPECKLE interference
- Abstract
In this paper, a statistical model based on stochastic differential equations (SDEs) is proposed for retinal Optical Coherence Tomography (OCT) images. In this method, the pixel intensities of an image are considered discrete realizations of a Lévy stable process. This process has independent increments and can be expressed as the response of an SDE to white symmetric alpha-stable (SαS) noise. Under this assumption, applying an appropriate differential operator makes the intensities statistically independent. The white stable noise can be regenerated by applying a fractional Laplacian operator to the image intensities. In this way, we model OCT images with an SαS distribution: we apply the fractional Laplacian operator to the image and fit an SαS distribution to its histogram. Statistical tests were used to evaluate the goodness of fit of the stable distribution and its heavy-tailed and stability characteristics. We used the modeled SαS distribution as prior information in a maximum a posteriori (MAP) estimator in order to reduce the speckle noise of OCT images. Such a statistically independent prior distribution simplifies the denoising optimization problem to a regularization algorithm with an adjustable shrinkage operator for each image. The Alternating Direction Method of Multipliers (ADMM) algorithm was utilized to solve the denoising problem. We present visual and quantitative evaluations of the performance of these modeling and denoising methods on normal and abnormal images. Applying the model parameters in a classification task, as well as the improvement that denoising brings to layer segmentation, illustrates that the proposed method describes OCT data more accurately than other models that do not remove statistical dependencies between pixel intensities. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
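The whitening step the model relies on, applying a fractional Laplacian, has a standard FFT implementation via the multiplier |ω|^(2γ). A sketch (the normalization convention is an assumption; the resulting residual is what one would fit an SαS distribution to):

```python
import numpy as np

def fractional_laplacian(img, gamma):
    """Apply (-Laplacian)^gamma through its Fourier multiplier
    |omega|^(2*gamma), decoupling the pixel intensities."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    multiplier = (4 * np.pi ** 2 * (kx ** 2 + ky ** 2)) ** gamma
    return np.fft.ifft2(multiplier * np.fft.fft2(img)).real

rng = np.random.default_rng(0)
residual = fractional_laplacian(rng.random((128, 128)), gamma=0.6)
```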
12. Uncertainty-Aware Annotation Protocol to Evaluate Deformable Registration Algorithms.
- Author
Peter, Loic, Alexander, Daniel C., Magnain, Caroline, and Iglesias, Juan Eugenio
- Subjects
IMAGE registration, ANNOTATIONS, RECORDING & registration, SOURCE code, ALGORITHMS, ENGINEERING standards
- Abstract
Landmark correspondences are a widely used type of gold standard in image registration. However, the manual placement of corresponding points is subject to high inter-user variability in the chosen annotated locations and in the interpretation of visual ambiguities. In this paper, we introduce a principled strategy for the construction of a gold standard in deformable registration. Our framework: (i) iteratively suggests the most informative location to annotate next, taking into account its redundancy with previous annotations; (ii) extends traditional pointwise annotations by accounting for the spatial uncertainty of each annotation, which can either be directly specified by the user, or aggregated from pointwise annotations from multiple experts; and (iii) naturally provides a new strategy for the evaluation of deformable registration algorithms. Our approach is validated on four different registration tasks. The experimental results show the efficacy of suggesting annotations according to their informativeness, and an improved capacity to assess the quality of the outputs of registration algorithms. In addition, our approach yields, from sparse annotations only, a dense visualization of the errors made by a registration method. The source code of our approach supporting both 2D and 3D data is publicly available at https://github.com/LoicPeter/evaluation-deformable-registration. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Real-Time Task Recognition in Cataract Surgery Videos Using Adaptive Spatiotemporal Polynomials.
- Author
Quellec, Gwenole, Lamard, Mathieu, Cochener, Beatrice, and Cazuguel, Guy
- Subjects
CATARACT surgery, OPHTHALMIC surgery, EYE physiology, FUZZY systems, ALGORITHMS, SPATIOTEMPORAL processes
- Abstract
This paper introduces a new algorithm for recognizing surgical tasks in real-time in a video stream. The goal is to communicate information to the surgeon in due time during a video-monitored surgery. The proposed algorithm is applied to cataract surgery, which is the most common eye surgery. To compensate for eye motion and zoom level variations, cataract surgery videos are first normalized. Then, the motion content of short video subsequences is characterized with spatiotemporal polynomials: a multiscale motion characterization based on adaptive spatiotemporal polynomials is presented. The proposed solution is particularly suited to characterize deformable moving objects with fuzzy borders, which are typically found in surgical videos. Given a target surgical task, the system is trained to identify which spatiotemporal polynomials are usually extracted from videos when and only when this task is being performed. These key spatiotemporal polynomials are then searched in new videos to recognize the target surgical task. For improved performances, the system jointly adapts the spatiotemporal polynomial basis and identifies the key spatiotemporal polynomials using the multiple-instance learning paradigm. The proposed system runs in real-time and outperforms the previous solution from our group, both for surgical task recognition (A_z = 0.851 on average, as opposed to A_z = 0.794 previously) and for the joint segmentation and recognition of surgical tasks (A_z = 0.856 on average, as opposed to A_z = 0.832 previously). [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
14. Validation of a Nonrigid Registration Error Detection Algorithm Using Clinical MRI Brain Data.
- Author
Datteri, Ryan D., Liu, Yuan, D'Haese, Pierre-Francois, and Dawant, Benoit M.
- Subjects
MAGNETIC resonance imaging of the brain, DATA analysis, ERROR detection (Information theory), ALGORITHMS, DIAGNOSTIC imaging, STATISTICAL correlation
- Abstract
Identification of error in nonrigid registration is a critical problem in the medical image processing community. We recently proposed an algorithm that we call “Assessing Quality Using Image Registration Circuits” (AQUIRC) to identify nonrigid registration errors and have tested its performance using simulated cases. In this paper, we extend our previous work to assess AQUIRC's ability to detect local nonrigid registration errors and validate it quantitatively at specific clinical landmarks, namely the anterior commissure and the posterior commissure. To test our approach on a representative range of error we utilize five different registration methods and use 100 target images and nine atlas images. Our results show that AQUIRC's measure of registration quality correlates with the true target registration error (TRE) at these selected landmarks with an R^2=0.542. To compare our method to a more conventional approach, we compute local normalized correlation coefficient (LNCC) and show that AQUIRC performs similarly. However, a multi-linear regression performed with both AQUIRC's measure and LNCC shows a higher correlation with TRE than correlations obtained with either measure alone, thus showing the complementarity of these quality measures. We conclude the paper by showing that the AQUIRC algorithm can be used to reduce registration errors for all five algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
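The LNCC baseline the abstract compares against is straightforward to compute. A sketch using a uniform local window (the window size and the variance floor are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lncc(a, b, size=9):
    """Local normalized correlation coefficient map between two aligned
    images: windowed covariance over the product of windowed stds."""
    mu_a, mu_b = uniform_filter(a, size), uniform_filter(b, size)
    var_a = uniform_filter(a * a, size) - mu_a ** 2
    var_b = uniform_filter(b * b, size) - mu_b ** 2
    cov = uniform_filter(a * b, size) - mu_a * mu_b
    return cov / np.sqrt(np.maximum(var_a * var_b, 1e-12))

rng = np.random.default_rng(0)
quality_map = lncc(rng.random((128, 128)), rng.random((128, 128)))
```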
15. Online Robust Projective Dictionary Learning: Shape Modeling for MR-TRUS Registration.
- Author
Wang, Yi, Zheng, Qingqing, and Heng, Pheng Ann
- Subjects
PROSTATE cancer, DIMENSION reduction (Statistics), ALGORITHMS, ENDORECTAL ultrasonography, CANCER-related mortality, MAGNETIC resonance imaging
- Abstract
Robust and effective shape prior modeling from a set of training data remains a challenging task, since shape variation is complicated and shape models should preserve local details as well as handle shape noise. To address these challenges, a novel robust projective dictionary learning (RPDL) scheme is proposed in this paper. Specifically, the RPDL method integrates dimension reduction and dictionary learning into a unified framework for shape prior modeling, which can not only learn a robust and representative dictionary that preserves the energy of the training data, but also reduce the dimensionality and computational cost via subspace learning. In addition, the proposed RPDL algorithm is regularized with the ℓ1 norm to handle outliers and noise, and is embedded in an online framework for memory and time efficiency. The proposed method is employed to model the prostate shape prior for the application of magnetic resonance to transrectal ultrasound registration. The experimental results demonstrate that our method provides more accurate and robust shape modeling than the state-of-the-art methods do. The proposed RPDL method is also applicable to modeling other organs, and hence offers a general solution to the problem of shape prior modeling. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
16. Frequency-Selective Computed Tomography: Applications During Periodic Thoracic Motion.
- Author
Herrmann, Jacob, Hoffman, Eric A., and Kaczka, David W.
- Subjects
COMPUTED tomography, PULMONARY ventilation-perfusion scans, ALGORITHMS, X-ray diffraction, CHEST X rays
- Abstract
We seek to use computed tomography (CT) to characterize regional lung parenchymal deformation during high-frequency and multi-frequency oscillatory ventilation. Periodic motion of thoracic structures results in artifacts in CT images obtained by standard reconstruction algorithms, especially for frequencies exceeding that of the X-ray source rotation. In this paper, we propose an acquisition and reconstruction technique for high-resolution imaging of the thorax during periodic motion. Our technique relies on phase-binning projections according to the frequency of subject motion relative to the scanner rotation, prior to volumetric reconstruction. The mathematical theory and limitations of the proposed technique are presented and then validated in a simulated phantom as well as in a living porcine subject during oscillatory ventilation. The 4-D image sequences obtained using this frequency-selective reconstruction technique yielded high spatio-temporal resolution of the thorax during periodic motion. We conclude that the frequency-based selection of CT projections is ideal for characterizing dynamic deformations of thoracic structures that are ordinarily obscured by motion artifact using conventional reconstruction techniques. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
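The phase-binning step the technique relies on reduces to assigning each projection a motion phase modulo the motion period. A minimal sketch (the timestamps, motion frequency, and bin count are made-up numbers; the reconstruction itself is not shown):

```python
import numpy as np

def phase_bin_projections(proj_times, motion_freq_hz, n_bins):
    """Assign each projection to a motion-phase bin prior to
    volumetric reconstruction."""
    phase = (proj_times * motion_freq_hz) % 1.0      # phase in [0, 1)
    return np.floor(phase * n_bins).astype(int)

t = np.arange(0, 2.0, 1e-3)      # projection timestamps over a 2 s scan
bins = phase_bin_projections(t, motion_freq_hz=8.0, n_bins=10)
```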
17. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification.
- Author
Song, Yang, Li, Qing, Huang, Heng, Feng, Dagan, Chen, Mei, and Cai, Weidong
- Subjects
DIAGNOSTIC imaging, MICROSCOPY, BIOMEDICAL materials, BREAST cancer magnetic resonance imaging, ALGORITHMS
- Abstract
Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and the commonly used dimension reduction techniques. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
18. Multi-Grained Random Fields for Mitosis Identification in Time-Lapse Phase Contrast Microscopy Image Sequences.
- Author
Liu, An-An, Tang, Jinhui, Nie, Weizhi, and Su, Yuting
- Subjects
RANDOM fields, MITOSIS, CELL division, DIAGNOSTIC imaging, ALGORITHMS
- Abstract
This paper proposes a multi-grained random fields (MGRFs) model for mitosis identification. To deal with the difficulty of hidden state discovery and sequential structure modeling in mitosis sequences that contain only gradual visual pattern changes, we design the graphical structure to transform each individual sequence into a set of coarse-to-fine grained sequences conveying diverse temporal dynamics. Furthermore, we propose the corresponding probabilistic model for joint temporal learning and feature learning. To deal with the non-convex formulation of the MGRF, we decompose model training into two sub-tasks, layer-wise sequential learning of both temporal dynamics and visual features, and new layer generation by graph-based sequential grouping, and optimize the model by alternating between them iteratively. The proposed method is validated on the very challenging mitosis data sets of C3H10T1/2 and C2C12 stem cells. Extensive comparison experiments demonstrate its superiority to the state of the art. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
19. Accelerating Ordered Subsets Image Reconstruction for X-ray CT Using Spatially Nonuniform Optimization Transfer.
- Author
Kim, Donghwan, Pal, Debashish, Thibault, Jean-Baptiste, and Fessler, Jeffrey A.
- Subjects
SUBSET selection, IMAGE reconstruction, COMPUTED tomography, X-ray imaging, MATHEMATICAL optimization, ALGORITHMS, ITERATIVE methods (Mathematics)
- Abstract
Statistical image reconstruction algorithms in X-ray computed tomography (CT) provide improved image quality for reduced dose levels but require substantial computation time. Iterative algorithms that converge in few iterations and that are amenable to massive parallelization are favorable in multiprocessor implementations. The separable quadratic surrogate (SQS) algorithm is desirable as it is simple and updates all voxels simultaneously. However, the standard SQS algorithm requires many iterations to converge. This paper proposes an extension of the SQS algorithm that leads to spatially nonuniform updates. The nonuniform (NU) SQS encourages larger step sizes for the voxels that are expected to change more between the current and the final image, accelerating convergence, while the derivation of NU-SQS guarantees monotonic descent. Ordered subsets (OS) algorithms can also accelerate SQS, provided suitable “subset balance” conditions hold. These conditions can fail in 3-D helical cone-beam CT due to incomplete sampling outside the axial region-of-interest (ROI). This paper proposes a modified OS algorithm that is more stable outside the ROI in helical CT. We use CT scans to demonstrate that the proposed NU-OS-SQS algorithm handles the helical geometry better than the conventional OS methods and “converges” in less than half the time of ordinary OS-SQS. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
20. Local-Mean Preserving Post-Processing Step for Non-Negativity Enforcement in PET Imaging: Application to 90Y-PET.
- Author
Millardet, Mael, Moussaoui, Said, Mateus, Diana, Idier, Jerome, and Carlier, Thomas
- Subjects
LINEAR programming, ALGORITHMS, IMAGE reconstruction
- Abstract
In a low-statistics PET imaging context, the positive bias in regions of low activity is a pressing issue. To overcome this problem, algorithms without the built-in non-negativity constraint may be used. They allow negative voxels in the image to reduce, or even cancel, the bias. However, such algorithms increase the variance and are difficult to interpret, since the resulting images contain negative activities, which have no physical meaning when dealing with radioactive concentration. In this paper, a post-processing approach is proposed to remove these negative values while preserving the local mean activities. Its key idea is to transfer the value of each voxel with negative activity to its direct neighbors under the constraint of preserving the local means of the image. In that respect, the proposed approach is formalized as a linear programming problem with a specific symmetric structure, which makes it solvable very efficiently by a dual-simplex-like iterative algorithm. The relevance of the proposed approach is discussed on simulated and experimental data. Acquired data from an yttrium-90 phantom show that, on images produced by a non-constrained algorithm, a much lower variance in the cold area is obtained after the post-processing step, at the price of a slightly increased bias. More specifically, when compared with the classical OSEM algorithm, images are improved both in terms of bias and of variance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
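The paper formalizes the redistribution as a linear program; a greedy approximation conveys the idea in a few lines: zero each negative voxel and hand its value to its neighbours, so sums are preserved. This sketch is not the authors' dual-simplex algorithm, and np.roll wraps at the borders, a simplification a real implementation would avoid:

```python
import numpy as np

def push_negatives_to_neighbors(img, n_iter=100):
    """Greedy stand-in for the mean-preserving linear program: zero each
    negative voxel and distribute its (negative) value equally over its
    four neighbours, preserving the global sum at every step."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        deficit = np.minimum(out, 0.0)       # the negative parts only
        if not deficit.any():
            break
        out -= deficit                       # clip negatives to zero...
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            out += np.roll(deficit, shift, axis=axis) / 4.0  # ...spread them
    return out
```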
21. Differentiated Backprojection Domain Deep Learning for Conebeam Artifact Removal.
- Author
Han, Yoseob, Kim, Junyoung, and Ye, Jong Chul
- Subjects
ALGORITHMS, HILBERT transform, DECONVOLUTION (Mathematics), DEEP learning, IMAGE reconstruction, GEOMETRY
- Abstract
Conebeam CT using a circular trajectory is quite often used for various applications due to its relatively simple geometry. For conebeam geometry, the Feldkamp-Davis-Kress algorithm is regarded as the standard reconstruction method, but it suffers from so-called conebeam artifacts as the cone angle increases. Various model-based iterative reconstruction methods have been developed to reduce these artifacts, but they usually require multiple applications of computationally expensive forward and backprojections. In this paper, we develop a novel deep learning approach for accurate conebeam artifact removal. In particular, our deep network, designed on the differentiated backprojection domain, performs a data-driven inversion of an ill-posed deconvolution problem associated with the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined using a spectral blending technique to minimize spectral leakage. Experimental results under various conditions confirmed that our method generalizes well and outperforms existing iterative methods despite significantly reduced runtime complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
22. Detecting Deficient Coverage in Colonoscopies.
- Author
Freedman, Daniel, Blau, Yochai, Katzir, Liran, Aides, Amit, Shimshoni, Ilan, Veikherman, Danny, Golany, Tomer, Gordon, Ariel, Corrado, Greg, Matias, Yossi, and Rivlin, Ehud
- Subjects
COLON (Anatomy), ALGORITHMS, COLON cancer, STREAMING media, COLONOSCOPY
- Abstract
Colonoscopy is the tool of choice for preventing colorectal cancer, by detecting and removing polyps before they become cancerous. However, colonoscopy is hampered by the fact that endoscopists routinely miss 22-28% of polyps. While some of these missed polyps appear in the endoscopist's field of view, others are missed simply because of substandard coverage of the procedure, i.e., not all of the colon is seen. This paper attempts to rectify the problem of substandard coverage in colonoscopy through the introduction of the C2D2 (Colonoscopy Coverage Deficiency via Depth) algorithm, which detects deficient coverage and can thereby alert the endoscopist to revisit a given area. More specifically, C2D2 consists of two separate algorithms: the first performs depth estimation of the colon given an ordinary RGB video stream, while the second computes coverage given these depth estimates. Rather than compute coverage for the entire colon, our algorithm computes coverage locally, on a segment-by-segment basis; C2D2 can then indicate in real time whether a particular area of the colon has suffered from deficient coverage, and if so, the endoscopist can return to that area. Our coverage algorithm is the first such algorithm to be evaluated in a large-scale way, while our depth estimation technique is the first calibration-free unsupervised method applied to colonoscopies. The C2D2 algorithm achieves state-of-the-art results in the detection of deficient coverage: on synthetic sequences with ground truth, it is 2.4 times more accurate than human experts, while on real sequences it achieves 93.0% agreement with experts. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. Deep Spatial-Temporal Feature Fusion From Adaptive Dynamic Functional Connectivity for MCI Identification.
- Author
Li, Yang, Liu, Jingyu, Tang, Zhenyu, and Lei, Baiying
- Subjects
FUNCTIONAL connectivity, FUNCTIONAL magnetic resonance imaging, MILD cognitive impairment, BRAIN abnormalities, ALGORITHMS, SELF-adaptive software
- Abstract
Dynamic functional connectivity (dFC) analysis using resting-state functional magnetic resonance imaging (rs-fMRI) is currently an advanced technique for capturing the dynamic changes of neural activity in brain disease identification. Most existing dFC modeling methods extract dynamic interaction information by using sliding-window-based correlation, whose performance is very sensitive to the window parameters. Because few studies can convincingly identify the optimal combination of window parameters, sliding-window-based correlation may not be the optimal way to capture the temporal variability of brain activity. In this paper, we propose a novel adaptive dFC model, aided by a deep spatial-temporal feature fusion method, for mild cognitive impairment (MCI) identification. Specifically, we adopt an adaptive ultra-weighted-lasso recursive least squares algorithm to estimate the adaptive dFC, which effectively alleviates the problem of parameter optimization. Then, we extract temporal and spatial features from the adaptive dFC. In order to generate coarser multi-domain representations for subsequent classification, the temporal and spatial features are further mapped into comprehensive fused features with a deep feature fusion method. Experimental results show that the classification accuracy of our proposed method reaches 87.7%, at least a 5.5% improvement over the state-of-the-art methods. These results elucidate the superiority of the proposed method for MCI classification, indicating its effectiveness in the early identification of brain abnormalities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
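The sliding-window baseline the abstract contrasts with is worth seeing concretely, since the proposed adaptive dFC exists to avoid its window-parameter sensitivity. A sketch (the window length and step below are exactly the kind of parameters the paper argues are hard to choose):

```python
import numpy as np

def sliding_window_dfc(ts, win_len, step):
    """Sliding-window dynamic functional connectivity: one ROI-by-ROI
    correlation matrix per window of the time series."""
    n_t, n_roi = ts.shape
    mats = [np.corrcoef(ts[start:start + win_len].T)
            for start in range(0, n_t - win_len + 1, step)]
    return np.stack(mats)        # shape: (n_windows, n_roi, n_roi)

rng = np.random.default_rng(1)
dfc = sliding_window_dfc(rng.standard_normal((200, 30)), win_len=40, step=5)
```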
24. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.
- Author
Hamilton, Sarah J., Mueller, J. L., and Alsaker, M.
- Subjects
ELECTRICAL impedance tomography, PERMITTIVITY, INVERSE problems, FOURIER transforms, LOWPASS electric filters, REGULARIZATION parameter, PLEURAL effusions, PNEUMOTHORAX
- Abstract
Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
25. B-Mode Ultrasound Image Simulation in Deformable 3-D Medium.
- Author
Goksel, Orcun and Salcudean, Septimiu B.
- Subjects
THREE-dimensional imaging, MEDICAL imaging systems, ULTRASONIC imaging, ALGORITHMS, INTERPOLATION, DEFORMATIONS (Mechanics), QUANTITATIVE research
- Abstract
This paper presents an algorithm for fast image synthesis inside deformed volumes. Given the node displacements of a mesh and a reference 3-D image dataset of a predeformed volume, the method first maps the image pixels that need to be synthesized from the deformed configuration to the nominal predeformed configuration, where the pixel intensities are obtained easily through interpolation in the regular-grid structure of the reference voxel volume. This mapping requires the identification of the mesh element enclosing each pixel for every image frame. To accelerate this point location operation, a fast method of projecting the deformed mesh on image pixels is introduced in this paper. The method presented was implemented for ultrasound B-mode image simulation of a synthetic tissue phantom. The phantom deformation as a result of ultrasound probe motion was modeled using the finite element method. Experimental images of the phantom under deformation were then compared with the corresponding synthesized images using sum of squared differences and mutual information metrics. Both this quantitative comparison and a qualitative assessment show that realistic images can be synthesized using the proposed technique. An ultrasound examination system was also implemented to demonstrate that real-time image synthesis with the proposed technique can be successfully integrated into a haptic simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
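The interpolation half of the pipeline, sampling the predeformed reference image at mapped coordinates, is a one-liner with SciPy. The mesh-based point location that produces the backward map is the paper's actual contribution and is not shown; the map below is made up:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# reference (predeformed) image slice, and a toy backward map telling each
# output pixel where it came from in the reference; the paper derives this
# map from finite-element node displacements
ref = np.random.default_rng(2).random((64, 64))
yy, xx = np.mgrid[0:64, 0:64].astype(float)
src_y = yy + 2.0 * np.sin(xx / 10.0)
src_x = xx
synthesized = map_coordinates(ref, [src_y, src_x], order=1)  # bilinear lookup
```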
26. An Implementation of Calderón's Method for 3-D Limited-View EIT.
- Author
Boverman, Gregory, Kao, Tzu-Jen, Isaacson, David, and Saulnier, Gary J.
- Subjects
MEDICAL screening, IMAGE reconstruction, FOURIER transforms, ALGORITHMS, THREE-dimensional imaging, DIGITAL image processing
- Abstract
Mathematical interest in electrical impedance tomography has been strong since the publication of Calderón's foundational paper. That paper introduced the idea of applying external voltage patterns to a medium such that, assuming the medium is sufficiently close to a constant admittivity, the reconstruction can be accomplished directly by inverse Fourier transform. Motivated by Calderón's method, we have developed a variant of the algorithm which is applicable to the case of measurement on only a part of the boundary and on discrete electrodes. Here we determine voltage or current patterns to apply to the electrodes which optimally approximate Calderón's special functions in the interior. Furthermore, in three dimensions and higher, Calderón's method allows each point in Fourier space to be computed in a multiplicity of ways. We show that by making use of the inherent redundancy in our measurements, we can significantly improve the quality of the static images produced by our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
27. Multiple Delay and Sum With Enveloping Beamforming Algorithm for Photoacoustic Imaging.
- Author
Ma, Xiang, Peng, Chenglei, Yuan, Jie, Cheng, Qian, Xu, Guan, Wang, Xueding, and Carson, Paul L.
- Subjects
ACOUSTIC imaging, PHOTOACOUSTIC spectroscopy, BEAMFORMING, PIXELS, IMAGING phantoms, FINGER joint, PARALLEL programming, ALGORITHMS
- Abstract
Delay and Sum (DAS) is one of the most common beamforming algorithms for photoacoustic imaging (PAI) reconstruction. Because it computes the beamformed signal by simple delaying and summing, DAS responds quickly and is well suited to real-time PAI. However, high sidelobes and intense artifacts may appear when using DAS, because it sums unnecessary data. In this paper, a beamforming algorithm called Multiple Delay and Sum with Enveloping (multi-DASE) is introduced to address this problem. Compared with DAS, the multi-DASE algorithm calculates not only the initial value of the beamformed signal but also the complete N-shaped photoacoustic signal for each pixel. Through computer simulations, a phantom experiment, and an experiment on a human finger joint, the multi-DASE algorithm is compared with other beamforming methods in removing artifacts by evaluating the quality of the reconstructed images. Furthermore, by rearranging the calculation sequences, the multi-DASE algorithm can be computed in parallel using GPU acceleration to meet the needs of real-time clinical application. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
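Plain DAS, the baseline multi-DASE improves on, fits in a few lines. A single-pixel sketch with made-up array geometry and sampling (the paper's N-shaped-signal machinery is not reproduced):

```python
import numpy as np

def das_pixel(rf, elem_x, px, pz, c=1540.0, fs=40e6):
    """Delay-and-sum value for one pixel (px, pz): delay each receive
    channel by the transmit depth plus the element-to-pixel return path,
    then sum across channels."""
    back = np.sqrt((elem_x - px) ** 2 + pz ** 2)       # return path per element
    idx = np.rint((pz + back) / c * fs).astype(int)    # two-way delay in samples
    valid = idx < rf.shape[1]
    return rf[np.flatnonzero(valid), idx[valid]].sum()

# rf: (n_elements, n_samples) channel data; 64 elements at 0.3 mm pitch
rf = np.random.default_rng(4).standard_normal((64, 2048))
elem_x = (np.arange(64) - 31.5) * 0.3e-3
print(das_pixel(rf, elem_x, px=0.0, pz=20e-3))
```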
28. Algorithms and Analyses for Joint Spectral Image Reconstruction in Y-90 Bremsstrahlung SPECT.
- Author
Chun, Se Young, Nguyen, Minh Phuong, Phan, Thanh Quoc, Kim, Hanvit, Fessler, Jeffrey A., and Dewaraja, Yuni K.
- Subjects
BREMSSTRAHLUNG, SPECTRAL imaging, PHOTON emission, DIGITAL computer simulation, ALGORITHMS, IMAGE reconstruction, SINGLE-photon emission computed tomography
- Abstract
Quantitative yttrium-90 (Y-90) SPECT imaging is challenging due to the nature of Y-90, an almost pure beta emitter that is associated with a continuous spectrum of bremsstrahlung photons with a relatively low yield. This paper proposes joint spectral reconstruction (JSR), a novel bremsstrahlung SPECT reconstruction method that uses multiple narrow acquisition windows, with accurate multi-band forward modeling, to cover a wide range of the energy spectrum. Theoretical analyses using Fisher information and Monte Carlo (MC) simulation with a digital phantom show that the proposed JSR model with multiple acquisition windows has better performance in terms of covariance (precision) than previous methods using multi-band forward modeling with a single acquisition window, or single-band forward modeling with a single acquisition window. We also propose an energy-window subset (ES) algorithm for JSR to achieve fast empirical convergence, and maximum-likelihood-based initialization for all reconstruction methods to improve quantification accuracy in early iterations. For both the MC simulation with a digital phantom and an experimental study with a physical multi-sphere phantom, our proposed JSR-ES, a fast algorithm for JSR with ES, yielded higher recovery coefficients (RCs) on hot spheres over all iterations and sphere sizes than all the other evaluated methods, due to its fast empirical convergence. In the experimental study, for the smallest hot sphere (diameter 1.6 cm), at the 20th iteration the increase in RCs with JSR-ES was 66% and 31% compared with the single wide-band and single narrow-band forward models, respectively. JSR-ES also yielded a lower residual count error (RCE) on a cold sphere over all iterations than the other methods for the MC simulation with known scatter, but led to a greater RCE compared with the single narrow-band forward model at higher iterations in the experimental study when using estimated scatter. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. VVBP-Tensor in the FBP Algorithm: Its Properties and Application in Low-Dose CT Reconstruction.
- Author
Tao, Xi, Zhang, Hua, Wang, Yongbo, Yan, Gang, Zeng, Dong, Chen, Wufan, and Ma, Jianhua
- Subjects
COMPUTED tomography, SINGULAR value decomposition, IMAGE reconstruction, IMAGE reconstruction algorithms, ALGORITHMS, RADIATION doses
- Abstract
For decades, commercial X-ray computed tomography (CT) scanners have been using the filtered backprojection (FBP) algorithm for image reconstruction. However, the desire for lower radiation doses has pushed the FBP algorithm to its limit. Previous studies have made significant efforts to improve the results of FBP through preprocessing the sinogram, modifying the ramp filter, or postprocessing the reconstructed images. In this paper, we focus on analyzing and processing the stacked view-by-view backprojections (named VVBP-Tensor) in the FBP algorithm. A key challenge for our analysis lies in the radial structures in each backprojection slice. To overcome this difficulty, a sorting operation was introduced to the VVBP-Tensor in its z direction (the direction of the projection views). The results show that, after sorting, the tensor contains structures that are similar to those of the object, and structures in different slices of the tensor are correlated. We then analyzed the properties of the VVBP-Tensor, including structural self-similarity, tensor sparsity, and noise statistics. Considering these properties, we have developed an algorithm using the tensor singular value decomposition (named VVBP-tSVD) to denoise the VVBP-Tensor for low-mAs CT imaging. Experiments were conducted using a physical phantom and clinical patient data with different mAs levels. The results demonstrate that the VVBP-tSVD is superior to all competing methods under different reconstruction schemes, including sinogram preprocessing, image postprocessing, and iterative reconstruction. We conclude that the VVBP-Tensor is a suitable processing target for improving the quality of FBP reconstruction, and the proposed VVBP-tSVD is an effective algorithm for noise reduction in low-mAs CT imaging. This preliminary work might provide a heuristic perspective for reviewing and rethinking the FBP algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
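The tensor construction and its key sorting step are easy to sketch; the t-SVD denoising built on top is not shown, and random data stands in for real backprojections here:

```python
import numpy as np

# vvbp: stack of view-by-view backprojections, shape (n_views, H, W)
vvbp = np.random.default_rng(3).random((360, 64, 64))
sorted_tensor = np.sort(vvbp, axis=0)   # sorting along the view (z) direction
fbp_image = vvbp.sum(axis=0)            # summing the stack gives the FBP image
```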
30. Feature-Preserving MRI Denoising: A Nonparametric Empirical Bayes Approach.
- Author
Awate, Suyash P. and Whitaker, Ross T.
- Subjects
MAGNETIC resonance imaging, NONPARAMETRIC statistics, INFORMATION theory, MARKOV random fields, ALGORITHMS
- Abstract
This paper presents a novel method for Bayesian denoising of magnetic resonance (MR) images that bootstraps itself by inferring the prior, i.e., the uncorrupted-image statistics, from the corrupted input data and the knowledge of the Rician noise model. The proposed method relies on principles from empirical Bayes (EB) estimation. It models the prior in a nonparametric Markov random field (MRF) framework and estimates this prior by optimizing an information-theoretic metric using the expectation-maximization algorithm. The generality and power of nonparametric modeling, coupled with the EB approach for prior estimation, avoids imposing ill-fitting prior models for denoising. The results demonstrate that, unlike typical denoising methods, the proposed method preserves most of the important features in brain MR images. Furthermore, this paper presents a novel Bayesian-inference algorithm on MRFs, namely iterated conditional entropy reduction (ICER). This paper also extends the application of the proposed method for denoising diffusion-weighted MR images. Validation results and quantitative comparisons with the state of the art in MR-image denoising clearly depict the advantages of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. Convergent Incremental Optimization Transfer Algorithms: Application to Tomography.
- Author
Ahn, Sangtae, Fessler, Jeffrey A., Blatt, Doron, and Hero, Alfred O.
- Subjects
TOMOGRAPHY, ALGORITHMS, POSITRON emission tomography, ALGEBRA, IMAGE processing, MEDICAL imaging systems
- Abstract
No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters [1], and methods based on the incremental expectation-maximization (EM) approach [2]. This paper generalizes the incremental EM approach [3] by introducing a general framework, "incremental optimization transfer." The proposed algorithms accelerate convergence speeds and ensure global convergence without requiring relaxation parameters. The general optimization transfer framework allows the use of a very broad family of surrogate functions, enabling the development of new algorithms [4]. This paper provides the first convergent OS-type algorithm for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS) [5] which yield closed-form maximization steps. We found it is very effective to achieve fast convergence rates by starting with an OS algorithm with a large number of subsets and switching to the new "transmission incremental optimization transfer (TRIOT)" algorithm. Results show that TRIOT is faster in increasing the PL objective than nonincremental ordinary SPS and even OS-SPS yet is convergent. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
32. A Registration Framework for the Comparison of Mammogram Sequences.
- Author
Marias, Kostas, Behrenbruch, Christian, Parbhoo, Santilal, Seifalian, Alexander, and Brady, Michael
- Subjects
MAMMOGRAMS, HORMONE therapy for menopause, ALGORITHMS, BREAST exams, ALGEBRA, MEDICAL screening
- Abstract
In this paper, we present a two-stage algorithm for mammogram registration, the geometrical alignment of mammogram sequences. The rationale behind this paper stems from the intrinsic difficulties in comparing mammogram sequences. Mammogram comparison is a valuable tool in national breast screening programs as well as in frequent monitoring and hormone replacement therapy (HRT). The method presented in this paper aims to improve mammogram comparison by estimating the underlying geometric transformation for any mammogram sequence. It takes into consideration the various temporal changes that may occur between successive scans of the same woman and is designed to overcome the inconsistencies of mammogram image formation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
33. Platelets: A Multiscale Approach for Recovering Edges and Surfaces in Photon-Limited Medical Imaging.
- Author
Willett, Rebecca M. and Nowak, Robert D.
- Subjects
DIAGNOSTIC imaging, WAVELETS (Mathematics), ALGORITHMS
- Abstract
The nonparametric multiscale platelet algorithms presented in this paper, unlike traditional wavelet-based methods, are both well suited to photon-limited medical imaging applications involving Poisson data and capable of better approximating edge contours. This paper introduces platelets, localized functions at various scales, locations, and orientations that produce piecewise linear image approximations, and a new multiscale image decomposition based on these functions. Platelets are well suited for approximating images consisting of smooth regions separated by smooth boundaries. For smoothness measured in certain Hölder classes, it is shown that the error of m-term platelet approximations can decay significantly faster than that of m-term approximations in terms of sinusoids, wavelets, or wedgelets. This suggests that platelets may outperform existing techniques for image denoising and reconstruction. Fast, platelet-based, maximum penalized likelihood methods for photon-limited image denoising, deblurring and tomographic reconstruction problems are developed. Because platelet decompositions of Poisson distributed images are tractable and computationally efficient, existing image reconstruction methods based on expectation-maximization type algorithms can be easily enhanced with platelet techniques. Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
34. Domain Adaptation for Microscopy Imaging.
- Author
Becker, Carlos, Christoudias, C. Mario, and Fua, Pascal
- Subjects
NEURONS -- Ultrastructure, BRAIN imaging, ELECTRON microscopy, IMAGE quality analysis, MACHINE learning, ANNOTATIONS, ALGORITHMS
- Abstract
Electron and light microscopy imaging can now deliver high-quality image stacks of neural structures. However, the amount of human annotation effort required to analyze them remains a major bottleneck. While machine learning algorithms can be used to help automate this process, they require training data, which is time-consuming to obtain manually, especially in image stacks. Furthermore, due to changing experimental conditions, successive stacks often exhibit differences that are severe enough to make it difficult to use a classifier trained for a specific one on another. This means that this tedious annotation process has to be repeated for each new stack. In this paper, we present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions and significantly reducing the annotation requirements. Our approach can handle complex, nonlinear image feature transformations and scales to large microscopy datasets that often involve high-dimensional feature spaces and large 3D data volumes. We evaluate our approach on four challenging electron and light microscopy applications that exhibit very different image modalities and where annotation is very costly. Across all applications we achieve a significant improvement over the state-of-the-art machine learning methods and demonstrate our ability to greatly reduce human annotation effort. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
35. The Delay Multiply and Sum Beamforming Algorithm in Ultrasound B-Mode Medical Imaging.
- Author
-
Matrone, Giulia, Savoia, Alessandro Stuart, Caliano, Giosue, and Magenes, Giovanni
- Subjects
BREAST cancer diagnosis ,BEAMFORMING ,ULTRASONIC imaging ,COMPUTATIONAL complexity ,ALGORITHMS - Abstract
Most ultrasound medical imaging systems currently on the market implement standard Delay and Sum (DAS) beamforming to form B-mode images. However, the image resolution and contrast achievable with DAS are limited by the aperture size and by the operating frequency. For this reason, different beamformers have been presented in the literature, mainly based on adaptive algorithms, which achieve higher performance at the cost of increased computational complexity. In this paper, we propose the use of an alternative nonlinear beamforming algorithm for medical ultrasound imaging, called Delay Multiply and Sum (DMAS), which was originally conceived for a microwave radar system for breast cancer detection. We modify the DMAS beamformer and test its performance on both simulated and experimentally collected linear-scan data, comparing the point spread functions, beampatterns, synthetic phantom, and in vivo carotid artery images obtained with standard DAS and with the proposed algorithm. Results show that the DMAS beamformer outperforms DAS in both simulated and experimental trials, and that the main improvement brought by this new method is a significantly higher contrast resolution (i.e., a narrower main lobe and lower side lobes), which translates into an increased dynamic range and better quality of B-mode images. [ABSTRACT FROM PUBLISHER]
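The pairwise multiply-and-sum rule is easy to state in code. The sketch below assumes the per-channel signals have already been delayed (focused), and uses the signed square root that keeps the combined terms dimensionally consistent with DAS; the band-pass filtering that the full method applies to the DMAS output is omitted here.

```python
# DAS vs. DMAS combination of already-delayed channel signals.
import numpy as np

def das(delayed):                 # delayed: (n_channels, n_samples)
    return delayed.sum(axis=0)

def dmas(delayed):
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))   # signed square root
    tot = s.sum(axis=0)
    # sum over channel pairs i<j of s_i*s_j, via the square-of-sum identity
    return 0.5 * (tot ** 2 - (s ** 2).sum(axis=0))

rng = np.random.default_rng(1)
echo = np.sin(np.linspace(0, 6 * np.pi, 256))            # common echo
delayed = echo + 0.3 * rng.standard_normal((32, 256))    # 32 focused channels
print(float(das(delayed).std()), float(dmas(delayed).std()))
```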
- Published
- 2015
- Full Text
- View/download PDF
36. Spectral CT Modeling and Reconstruction With Hybrid Detectors in Dynamic-Threshold-Based Counting and Integrating Modes.
- Author
-
Li, Liang, Chen, Zhiqiang, Cong, Wenxiang, and Wang, Ge
- Subjects
DIAGNOSTIC imaging ,PHOTONICS ,IMAGE reconstruction ,COMPUTED tomography ,BIOENERGETICS ,ALGORITHMS - Abstract
Spectral CT with photon counting detectors can significantly improve CT performance by reducing image noise and dose, increasing contrast resolution and material specificity, and enabling functional and molecular imaging with existing and emerging probes. However, with the current photon counting detector architecture it is difficult to balance the number of energy bins against the statistical noise in each bin. Moreover, hardware support for multiple energy bins demands complex, expensive circuitry. In this paper, we propose a new scheme, known as hybrid detectors, that combines dynamic-threshold-based counting and integrating modes. In this scheme, an energy threshold can be dynamically changed during a spectral CT scan, which can be considered compressive sensing along the spectral dimension. By doing so, the number of energy bins can be retrospectively specified, even in a spatially varying fashion. To establish the feasibility and merits of such hybrid detectors, we develop a tensor-based PRISM algorithm to reconstruct a spectral CT image from dynamic dual-energy data, and perform experiments with simulated and real data, producing very promising results. [ABSTRACT FROM PUBLISHER]
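The measurement model behind the scheme can be illustrated with a toy discrete spectrum. The sketch below is an assumed simplification (no detector response, no object attenuation): each view reports the photon count above a view-dependent threshold together with the energy-integrated signal, so energy bins can be formed retrospectively by differencing counts across views.

```python
# Toy dynamic-threshold counting + integrating measurements over several views.
import numpy as np

energies = np.arange(20, 121, dtype=float)              # keV grid
spectrum = np.exp(-((energies - 60.0) / 25.0) ** 2)     # incident photons/keV
thresholds = np.linspace(25, 95, 8)                     # one threshold per view

counting = np.array([spectrum[energies >= t].sum() for t in thresholds])
integrating = np.full(len(thresholds), (energies * spectrum).sum())

# Retrospective "energy bins": photons between consecutive thresholds.
bin_counts = counting[:-1] - counting[1:]
print(np.round(bin_counts, 2), round(float(integrating[0]), 2))
```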
- Published
- 2015
- Full Text
- View/download PDF
37. Automated Segmentation of Breast in 3-D MR Images Using a Robust Atlas.
- Author
-
Khalvati, Farzad, Gallego-Ortiz, Cristina, Balasingham, Sharmila, and Martel, Anne L.
- Subjects
BREAST imaging ,MAGNETIC resonance imaging ,ALGORITHMS ,IMAGE registration ,IMAGE segmentation ,PROBABILITY theory - Abstract
This paper presents a robust atlas-based segmentation (ABS) algorithm for segmentation of the breast boundary in 3-D MR images. The proposed algorithm combines the well-known ABS methodologies, namely the probabilistic atlas and atlas selection approaches, into a single framework in which two configurations are realized. The algorithm uses phase congruency maps to create an atlas which is robust to intensity variations. This allows an atlas derived from images acquired with one MR imaging sequence to be used to segment images acquired with a different MR imaging sequence, and eliminates the need for intensity-based registration. Images acquired using a Dixon sequence were used to create an atlas which was used to segment both Dixon images (intra-sequence) and T1-weighted images (inter-sequence). In both cases, highly accurate results were achieved, with median Dice similarity coefficients of 94% ± 4% and 87% ± 6.5%, respectively. [ABSTRACT FROM AUTHOR]
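For readers unfamiliar with the probabilistic-atlas ingredient named above, the toy sketch below shows the basic construction, assuming the training masks are already spatially aligned; the paper's actual atlas is built from phase congruency maps so that it stays insensitive to intensity differences between MR sequences, which this toy does not model.

```python
# Probabilistic atlas from aligned binary masks, then maximum-probability labeling.
import numpy as np

rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:64, 0:64]
# stand-in "aligned training masks": discs with jittered radii
masks = np.stack([((xx - 32) ** 2 + (yy - 32) ** 2 < (12 + rng.integers(-3, 4)) ** 2)
                  for _ in range(10)]).astype(float)

atlas = masks.mean(axis=0)      # voxelwise probability of the structure
segmentation = atlas >= 0.5     # label voxels where the structure is likelier
print(int(segmentation.sum()))
```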
- Published
- 2015
- Full Text
- View/download PDF
38. Artifact Suppressed Dictionary Learning for Low-Dose CT Image Processing.
- Author
-
Chen, Yang, Shi, Luyao, Feng, Qianjing, Yang, Jian, Shu, Huazhong, Luo, Limin, Coatrieux, Jean-Louis, and Chen, Wufan
- Subjects
IMAGE processing ,COMPUTED tomography ,TISSUE analysis ,ALGORITHMS ,IMAGE reconstruction ,FEATURE extraction - Abstract
Low-dose computed tomography (LDCT) images are often severely degraded by amplified mottle noise and streak artifacts. These artifacts are often hard to suppress without introducing tissue blurring effects. In this paper, we propose to process LDCT images using a novel image-domain algorithm called “artifact suppressed dictionary learning (ASDL).” In this ASDL method, orientation and scale information on artifacts is exploited to train artifact atoms, which are then combined with tissue feature atoms to build three discriminative dictionaries. The streak artifacts are cancelled via a discriminative sparse representation operation based on these dictionaries. Then, a general dictionary learning process is applied to further reduce the noise and residual artifacts. Qualitative and quantitative evaluations on a large set of abdominal and mediastinum CT images are carried out, and the results show that the proposed method can be efficiently applied in most current CT systems. [ABSTRACT FROM AUTHOR]
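The discriminative step lends itself to a brief sketch: code a patch over the concatenation of a "tissue" and an "artifact" dictionary, then keep only the tissue-atom contributions. The dictionaries below are random stand-ins, so this shows only the mechanism, not the trained ASDL dictionaries, and the simple orthogonal matching pursuit is likewise an assumed sparse coder.

```python
# Sparse-code over [D_tissue | D_artifact], reconstruct from tissue atoms only.
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse code of x over columns of D."""
    resid, idx = x.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        resid = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

rng = np.random.default_rng(3)
D = np.column_stack([rng.standard_normal((64, 128)),    # "tissue" atoms
                     rng.standard_normal((64, 64))])    # "artifact" atoms
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms

patch = rng.standard_normal(64)
code = omp(D, patch, k=6)
tissue_only = D[:, :128] @ code[:128]   # drop the artifact-atom contributions
print(float(np.linalg.norm(patch - tissue_only)))
```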
- Published
- 2014
- Full Text
- View/download PDF
39. Three-Dimensional Sheaf of Ultrasound Planes Reconstruction (SOUPR) of Ablated Volumes.
- Author
-
Ingle, Atul and Varghese, Tomy
- Subjects
ULTRASONIC imaging ,IMAGE reconstruction ,COMPUTER simulation ,SHEAR waves ,ABLATION techniques ,ALGORITHMS - Abstract
This paper presents an algorithm for 3-D reconstruction of tumor ablations using ultrasound shear wave imaging with electrode vibration elastography. Radio-frequency ultrasound data frames are acquired over imaging planes that form a subset of a sheaf of planes sharing a common axis of intersection. Shear wave velocity is estimated separately on each imaging plane using a piecewise linear function fitting technique with a fast optimization routine. An interpolation algorithm then computes velocity maps on a fine grid over a set of C-planes that are perpendicular to the axis of the sheaf. A full 3-D rendering of the ablation can then be created from this stack of C-planes; hence the name “Sheaf Of Ultrasound Planes Reconstruction,” or SOUPR. The algorithm is evaluated through numerical simulations and also using data acquired from a tissue mimicking phantom. Reconstruction quality is gauged using contrast and contrast-to-noise ratio measurements, and the changes in quality from using an increasing number of planes in the sheaf are quantified. The highest contrast of 5 dB is seen between the stiffest and softest regions of the phantom. Under certain idealizing assumptions on the true shape of the ablation, good reconstruction quality at a fast processing rate can be obtained with as few as six imaging planes, suggesting that the method is suited to parsimonious data acquisitions with very few, sparsely chosen imaging planes. [ABSTRACT FROM AUTHOR]
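The C-plane interpolation step can be caricatured in a few lines. The sketch below assumes each sheaf plane at angle theta_k contributes a single radial velocity profile to a given C-plane, and fills in arbitrary polar angles by linear interpolation between the two nearest planes; the actual method also interpolates along the common axis and treats the two half-planes of each imaging plane separately.

```python
# Angular interpolation of per-plane radial profiles onto one C-plane grid.
import numpy as np

n_planes, n_r, n = 6, 40, 81
thetas = np.linspace(0, np.pi, n_planes, endpoint=False)
profiles = 1.0 + 0.5 * np.random.default_rng(4).random((n_planes, n_r))

yy, xx = np.mgrid[0:n, 0:n]
cx = (n - 1) / 2.0
r = np.hypot(xx - cx, yy - cx) * (n_r - 1) / cx   # radial sample index
phi = np.arctan2(yy - cx, xx - cx) % np.pi        # fold angles to [0, pi)

k0 = np.searchsorted(thetas, phi, side="right") - 1   # nearest plane below
k1 = (k0 + 1) % n_planes                              # nearest plane above
span = (thetas[k1] - thetas[k0]) % np.pi
w = ((phi - thetas[k0]) % np.pi) / span               # blend weight in [0, 1)

ri = np.clip(r.astype(int), 0, n_r - 1)
cplane = (1 - w) * profiles[k0, ri] + w * profiles[k1, ri]
print(cplane.shape, round(float(cplane.mean()), 3))
```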
- Published
- 2014
- Full Text
- View/download PDF
40. Multiple Importance Sampling for PET.
- Author
-
Szirmay-Kalos, Laszlo, Magdics, Milan, and Toth, Balazs
- Subjects
POSITRON emission tomography ,GRAPHICS processing units ,SAMPLING methods ,ALGORITHMS ,PHOTONICS ,MONTE Carlo method ,POSITRONS - Abstract
This paper proposes the application of multiple importance sampling in fully 3-D positron emission tomography to speed up the iterative reconstruction process. The proposed method combines the results of line-of-response (LOR) driven and voxel-driven projections while keeping their advantages, such as importance sampling, performance, and parallel execution on graphics processing units. Voxel-driven methods can focus on point-like features, while LOR-driven approaches are efficient in reconstructing homogeneous regions. The theoretical basis of the combination is the application of a mixture of the samples generated by the individual importance sampling methods, emphasizing a particular method where it performs better than the others. The proposed algorithms are built into the Tera-tomo system. [ABSTRACT FROM AUTHOR]
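The mixture-of-samples principle is the classic multiple importance sampling estimator. The toy below is a hedged one-dimensional stand-in for the projector combination: two sampling strategies estimate the same integral, and the balance heuristic w_i(x) = n_i p_i(x) / sum_k n_k p_k(x) automatically emphasizes whichever strategy has the higher density, i.e., is better, at each sample.

```python
# Balance-heuristic MIS: every sample contributes f(x) / sum_k n_k p_k(x).
import numpy as np
from scipy import stats

f = lambda x: np.exp(-x) * (x > 0)                       # integrand on (0, inf)
p1, p2 = stats.expon(scale=2.0), stats.norm(0.5, 0.2)    # two strategies
n1 = n2 = 5000

rng = np.random.default_rng(5)
x1 = p1.rvs(n1, random_state=rng)                        # strategy 1 samples
x2 = p2.rvs(n2, random_state=rng)                        # strategy 2 samples

def contrib(x):
    return float(np.sum(f(x) / (n1 * p1.pdf(x) + n2 * p2.pdf(x))))

print(contrib(x1) + contrib(x2))                         # true integral is 1.0
```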
- Published
- 2014
- Full Text
- View/download PDF
41. Shape Representation for Efficient Landmark-Based Segmentation in 3-D.
- Author
-
Ibragimov, Bulat, Likar, Bostjan, Pernus, Franjo, and Vrtovec, Tomaz
- Subjects
IMAGE segmentation ,GAME theory ,ALGORITHMS ,COMPUTED tomography ,LUMBAR vertebrae ,TRANSPORTATION ,ELECTRICAL engineering - Abstract
In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From such a perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce the computational complexity that results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection, and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded symmetric surface distances of 0.75 mm and 1.11 mm, and Dice coefficients of 93.6% and 96.2%, for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational costs were further reduced by up to a factor of three. [ABSTRACT FROM AUTHOR]
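Strategy dominance, the pruning device mentioned above, is simple to demonstrate in isolation. The sketch below assumes a generic payoff matrix with random entries (rows as candidate strategies, columns as opposing replies) and removes any row that another row strictly beats everywhere; this is the standard iterated elimination of strictly dominated strategies, not the paper's specific landmark game.

```python
# Iterated elimination of strictly dominated rows of a payoff matrix.
import numpy as np

def eliminate_dominated(payoff):
    """Drop rows strictly dominated by a surviving row; return survivors."""
    alive = list(range(payoff.shape[0]))
    changed = True
    while changed:
        changed = False
        for i in list(alive):
            if any(np.all(payoff[j] > payoff[i]) for j in alive if j != i):
                alive.remove(i)           # row i can never be a best choice
                changed = True
    return alive

payoff = np.random.default_rng(6).random((8, 5))   # 8 candidates vs 5 replies
print(eliminate_dominated(payoff))
```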
- Published
- 2014
- Full Text
- View/download PDF
42. Total Variation-Stokes Strategy for Sparse-View X-ray CT Image Reconstruction.
- Author
-
Liu, Yan, Liang, Zhengrong, Ma, Jianhua, Lu, Hongbing, Wang, Ke, Zhang, Hao, and Moore, William
- Subjects
X-ray imaging ,COMPUTED tomography ,CONVEX sets ,ALGORITHMS ,QUALITATIVE research ,IMAGE reconstruction ,IMAGE processing - Abstract
Previous studies have shown that, by minimizing the total variation (TV) of the to-be-estimated image subject to data and/or other constraints, a piecewise-smooth X-ray computed tomography image can be reconstructed from sparse-view projection data. However, due to the piecewise-constant assumption of the TV model, the reconstructed images are frequently reported to suffer from blocky or patchy artifacts. To eliminate this drawback, we present a total variation-stokes-projection onto convex sets (TVS-POCS) reconstruction method in this paper. The TVS model is derived by introducing isophote directions for the purpose of recovering possible missing information in the sparse-view data situation, so that the desired consistencies along both the normal and the tangent directions are preserved in the resulting images. Compared to previous TV-based image reconstruction algorithms, the consistencies preserved by the TVS-POCS method are expected to yield noticeable gains in eliminating patchy artifacts and preserving subtle structures. To evaluate the presented TVS-POCS method, both qualitative and quantitative studies were performed using digital phantom, physical phantom, and clinical data experiments. The results reveal that the presented method can yield images with several noticeable gains, measured by the universal quality index and the full-width-at-half-maximum merit, compared to the corresponding TV-based algorithms. In addition, the results indicate that the TVS-POCS method approaches the gold standard result of filtered back-projection reconstruction in the full-view data case, as theoretically expected, while most previous iterative methods may fail in the full-view case because of artificial textures in their results. [ABSTRACT FROM PUBLISHER]
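To make the POCS-plus-TV skeleton concrete, the sketch below alternates a data-consistency gradient step, a nonnegativity projection, and a plain TV-descent step on a toy dense system. This is only the baseline TV-POCS flavor under assumed toy operators; the paper's TVS step, which additionally enforces smoothness along isophote (tangent) directions via the Stokes formulation, is omitted.

```python
# Simplified TV-POCS loop on a toy sparse-view system (TVS step omitted).
import numpy as np

def tv_grad(u, eps=1e-8):
    """Subgradient of isotropic TV with forward differences."""
    gx = np.zeros_like(u); gx[:, :-1] = np.diff(u, axis=1)
    gy = np.zeros_like(u); gy[:-1, :] = np.diff(u, axis=0)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / mag, gy / mag
    g = -px - py
    g[:, 1:] += px[:, :-1]
    g[1:, :] += py[:-1, :]
    return g

rng = np.random.default_rng(7)
n = 16
truth = np.zeros((n, n)); truth[4:12, 4:12] = 1.0  # piecewise-constant phantom
A = rng.random((100, n * n))                       # toy underdetermined system
y = A @ truth.ravel()

x = np.zeros(n * n)
lr = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(300):
    x += lr * A.T @ (y - A @ x)                    # data-consistency step
    x = np.maximum(x, 0.0)                         # POCS: nonnegativity
    x -= 0.01 * tv_grad(x.reshape(n, n)).ravel()   # TV descent
print(float(np.abs(x - truth.ravel()).mean()))
```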
- Published
- 2014
- Full Text
- View/download PDF
43. Comments on: A Methodology for Evaluation of Boundary Detection Algorithms on Medical Images.
- Author
-
Alberola-López, Carlos, Martin-Fernández, Marcos, and Ruiz-Alzola, Juan
- Subjects
MEDICAL imaging systems ,ALGORITHMS ,HYPOTHESIS ,DIAGNOSTIC imaging ,CONFIDENCE intervals ,STATISTICAL tolerance regions - Abstract
In this paper, we analyze a previously published result comparing two statistical tests used for the evaluation of boundary detection algorithms on medical images. We conclude that the statement made by Chalana and Kim (1997) about the performance of the percentage test has a weak theoretical foundation and, according to our results, is not correct. In addition, we propose a one-sided hypothesis test for which the acceptance region can be determined in advance, as opposed to the two-sided confidence intervals proposed in the original paper, which change according to the estimated quantity. [ABSTRACT FROM AUTHOR]
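As a hedged illustration of what "an acceptance region determined in advance" means for a percentage-style test: if each of n boundary points independently falls within tolerance with probability p0 under the null hypothesis, the one-sided cutoff can be fixed from the binomial distribution before any data are seen. The numbers below are assumptions for illustration, not values from the correspondence.

```python
# One-sided binomial test with a precomputed acceptance region.
from scipy import stats

n, p0, alpha = 100, 0.8, 0.05
# smallest count whose null CDF reaches alpha; rejecting strictly below it
# keeps the type-I error below alpha
cutoff = int(stats.binom.ppf(alpha, n, p0))
observed = 73                     # e.g., 73 of 100 points within tolerance
print(cutoff, "accept" if observed >= cutoff else "reject")
```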
- Published
- 2004
- Full Text
- View/download PDF
44. Comments on "A New Algorithm for Border Description of Polarized Light Surface Microscopic Images of Pigmented Skin Lesions".
- Author
-
Burroni, Marco, Alparone, Luciano, and Argenti, Fabrizio
- Subjects
ALGORITHMS ,SKIN ,HUMAN anatomy ,MICROSCOPY ,PIGMENTS - Abstract
In this paper, discrepancies and reference inaccuracies in the paper by Grana et al. (2003) are pointed out. Specifically, it is demonstrated that the definitions of "lesion gradient" and "skin lesion gradient," widely used in a number of medical papers on computer analysis of pigmented skin lesions, are unambiguous, and that the "new algorithm for border description" described in the subject paper substantially relies on well-established concepts dating back more than a decade. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
45. Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network.
- Author
-
Kang, Eunhee, Chang, Won, Yoo, Jaejun, and Ye, Jong Chul
- Subjects
COMPUTED tomography ,ARTIFICIAL neural networks ,ALGORITHMS ,WAVELET transforms ,DEEP learning - Abstract
Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT, which won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using a wavelet residual network, which synergistically combines the expressive power of deep learning with the performance guarantees of framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of a deep CNN as a cascaded convolutional framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detailed texture of the original images. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
46. LEARN: Learned Experts’ Assessment-Based Reconstruction Network for Sparse-Data CT.
- Author
-
Chen, Hu, Zhang, Yi, Chen, Yunjin, Zhang, Junfeng, Zhang, Weihua, Sun, Huaiqiang, Lv, Yang, Liao, Peixi, Zhou, Jiliu, and Wang, Ge
- Subjects
COMPRESSED sensing ,COMPUTED tomography ,TOMOSYNTHESIS ,MACHINE learning ,ALGORITHMS - Abstract
Compressive sensing (CS) has proved effective for tomographic reconstruction from sparsely collected data or under-sampled measurements, which is practically important for few-view computed tomography (CT), tomosynthesis, interior tomography, and so on. To perform sparse-data CT, iterative reconstruction commonly uses regularizers in the CS framework. Currently, how to choose the regularization parameters adaptively is a major open problem. In this paper, inspired by the idea of machine learning, especially deep learning, we unfold the state-of-the-art “fields of experts”-based iterative reconstruction scheme up to a number of iterations for data-driven training, construct a learned experts’ assessment-based reconstruction network (LEARN) for sparse-data CT, and demonstrate the feasibility and merits of our LEARN network. The experimental results show that the proposed LEARN network produces superior performance on the well-known Mayo Clinic low-dose challenge data set relative to several state-of-the-art methods, in terms of artifact reduction, feature preservation, and computational speed. This is consistent with our insight that, because all the regularization terms and parameters used in the iterative reconstruction are now learned from the training data, our LEARN network utilizes application-oriented knowledge more effectively and recovers underlying images more favorably than competing algorithms. Also, the number of layers in the LEARN network is only 50, reducing the computational complexity of typical iterative algorithms by orders of magnitude. [ABSTRACT FROM AUTHOR]
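The unfolding idea can be sketched independently of the fields-of-experts regularizer the paper actually trains. Below is a skeletal PyTorch stand-in, assuming a generic small CNN as the learned per-iteration regularizer and a toy masking operator in place of a CT projector: each of K blocks performs a data-fidelity gradient step plus a learned correction, and the step sizes are learned along with everything else, end to end.

```python
# Skeletal unrolled reconstruction network (generic learned regularizer).
import torch
import torch.nn as nn

class IterBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))    # learned step size
        self.reg = nn.Sequential(                      # learned regularizer
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x, y, A, At):
        grad = At(A(x) - y)                   # gradient of 0.5 * ||Ax - y||^2
        return x - self.step * grad - self.reg(x)

class Unrolled(nn.Module):
    def __init__(self, k=10):                 # the paper unrolls 50 blocks
        super().__init__()
        self.blocks = nn.ModuleList(IterBlock() for _ in range(k))

    def forward(self, x, y, A, At):
        for blk in self.blocks:
            x = blk(x, y, A, At)
        return x

mask = torch.rand(1, 1, 32, 32) > 0.5         # toy sampling operator, not CT
A = lambda x: x * mask
At = lambda r: r * mask
net = Unrolled()
y = A(torch.rand(1, 1, 32, 32))
print(net(torch.zeros(1, 1, 32, 32), y, A, At).shape)
```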
- Published
- 2018
- Full Text
- View/download PDF
47. A Dictionary Learning Approach for Poisson Image Deblurring.
- Author
-
Ma, Liyan, Moisan, Lionel, Yu, Jian, and Zeng, Tieyong
- Subjects
LEARNING ,MATHEMATICAL optimization ,MATHEMATICAL regularization ,SIGNAL-to-noise ratio ,STATISTICS ,PROBLEM solving ,ALGORITHMS - Abstract
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods. [ABSTRACT FROM PUBLISHER]
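For orientation, the sketch below just evaluates two of the three terms of such an energy for a candidate image: the Poisson negative log-likelihood under a blur operator and the total variation term. The blur, the image, and the weight are toy assumptions, and the patch-dictionary term, along with the alternating minimization the paper uses to solve the model, is omitted.

```python
# Evaluate a Poisson-deblurring energy: data fidelity + TV (dictionary term omitted).
import numpy as np
from scipy.ndimage import uniform_filter

def poisson_fidelity(x, y, H):
    """Poisson negative log-likelihood (up to constants): sum(Hx - y*log(Hx))."""
    hx = H(x) + 1e-10
    return float(np.sum(hx - y * np.log(hx)))

def tv(x):
    return float(np.sum(np.hypot(*np.gradient(x))))

H = lambda u: uniform_filter(u, size=3)        # toy blur operator
rng = np.random.default_rng(8)
x = 50.0 * rng.random((32, 32))                # candidate image (counts scale)
y = rng.poisson(H(x)).astype(float)            # blurred, Poisson-corrupted data
lam = 0.05
print(poisson_fidelity(x, y, H) + lam * tv(x))
```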
- Published
- 2013
- Full Text
- View/download PDF
48. Vessel Tractography Using an Intensity Based Tensor Model With Branch Detection.
- Author
-
Cetin, Suheyla, Demir, Ali, Yezzi, Anthony, Degertekin, Muzaffer, and Unal, Gozde
- Subjects
DIAGNOSTIC imaging ,MATHEMATICAL models ,IMAGE segmentation ,CORONARY disease ,DIAGNOSIS ,DIFFUSION tensor imaging ,ALGORITHMS ,TOMOGRAPHY ,ARTERIOGRAPHY - Abstract
In this paper, we present a tubular structure segmentation method that utilizes a second-order tensor constructed from directional intensity measurements, inspired by diffusion tensor imaging (DTI) modeling. The constructed anisotropic tensor, which is fitted inside a vessel, drives the segmentation analogously to a tractography approach in DTI. Our model is initialized at a single seed point and is capable of capturing whole vessel trees through an automatic branch detection algorithm developed in the same framework. The centerline of the vessel as well as its thickness is extracted. Performance results within the Rotterdam Coronary Artery Algorithm Evaluation framework are provided for comparison with existing techniques; a 96.4% average overlap with ground truth delineated by experts is obtained, in addition to the other measures reported in the paper. Moreover, we demonstrate further quantitative results over synthetic vascular datasets, and we provide quantitative experiments for branch detection on patient computed tomography angiography (CTA) volumes, as well as qualitative evaluations on the same CTA datasets based on visual scores from an expert cardiologist. [ABSTRACT FROM PUBLISHER]
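The tractography analogy can be sketched in 2-D: build a second-order tensor from intensity samples along a fan of directions around the current point, then step along its principal eigenvector. The synthetic "vessel" intensity below, and the weighting of directions by brightness, are assumptions made for illustration; the paper fits the tensor inside the vessel in 3-D and adds branch detection on top.

```python
# Step along the principal eigenvector of a directional-intensity tensor.
import numpy as np

def local_tensor(p, intensity, n_dirs=16, radius=2.0):
    T = np.zeros((2, 2))
    for t in np.linspace(0, np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(t), np.sin(t)])
        w = intensity(p + radius * d) + intensity(p - radius * d)
        T += w * np.outer(d, d)          # bright directions dominate the tensor
    return T / n_dirs

# synthetic bright tube along the line y = 20 (vessel stand-in)
intensity = lambda q: np.exp(-((q[1] - 20.0) ** 2) / 4.0)

p, heading = np.array([0.0, 20.0]), np.array([1.0, 0.0])
for _ in range(40):
    _, v = np.linalg.eigh(local_tensor(p, intensity))
    d = v[:, -1]                         # principal eigenvector
    d = d if d @ heading >= 0 else -d    # keep a consistent travel direction
    p, heading = p + d, d
print(np.round(p, 2))                    # has advanced along the tube axis
```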
- Published
- 2013
- Full Text
- View/download PDF
49. Image Reconstruction From Highly Undersampled (k, t)-Space Data With Joint Partial Separability and Sparsity Constraints.
- Author
-
Zhao, Bo, Haldar, Justin P., Christodoulou, Anthony G., and Liang, Zhi-Pei
- Subjects
IMAGE reconstruction ,COMPUTATIONAL biology ,ALGORITHMS ,CARDIAC magnetic resonance imaging ,MEDICAL databases ,COMPUTER simulation ,SPATIOTEMPORAL processes - Abstract
Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. [ABSTRACT FROM AUTHOR]
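A hedged sketch of the two constraints, assuming a simple projection-style alternation in place of the paper's unified penalized formulation and its globally convergent algorithm: the Casorati matrix (voxels by frames) is alternately truncated to rank L (partial separability), soft-thresholded in the temporal Fourier domain (sparsity), and reset to the measured (k, t) samples.

```python
# Alternate low-rank truncation, temporal-spectrum shrinkage, data consistency.
import numpy as np

rng = np.random.default_rng(9)
U0 = rng.standard_normal((100, 3))
V0 = np.cos(0.2 * np.arange(64)[None, :] * np.array([[1.0], [3.0], [7.0]]))
C_true = U0 @ V0                            # rank-3, temporally structured
mask = rng.random(C_true.shape) < 0.35      # measured (k, t) locations
data = C_true * mask

L, tau = 3, 0.05
C = data.copy()
for _ in range(100):
    u, s, vt = np.linalg.svd(C, full_matrices=False)
    C = (u[:, :L] * s[:L]) @ vt[:L]                           # PS: rank-L model
    spec = np.fft.fft(C, axis=1)
    spec *= np.maximum(1 - tau / (np.abs(spec) + 1e-12), 0)   # sparsify spectrum
    C = np.fft.ifft(spec, axis=1).real
    C = np.where(mask, data, C)                       # keep measured samples
print(float(np.linalg.norm(C - C_true) / np.linalg.norm(C_true)))
```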
- Published
- 2012
- Full Text
- View/download PDF
50. Time-Resolved Interventional Cardiac C-arm Cone-Beam CT: An Application of the PICCS Algorithm.
- Author
-
Chen, Guang-Hong, Theriault-Lauzier, Pascal, Tang, Jie, Nett, Brian, Leng, Shuai, Zambelli, Joseph, Qi, Zhihua, Bevins, Nicholas, Raval, Amish, Reeder, Scott, and Rowley, Howard
- Subjects
CARDIOGRAPHIC tomography ,ALGORITHMS ,MEDICAL imaging systems ,COMPUTER simulation ,ANIMAL models in research ,HEART function tests ,IMAGE analysis - Abstract
Time-resolved cardiac imaging is particularly interesting in the interventional setting, since it would provide both image guidance for accurate procedural planning and cardiac functional evaluation directly in the operating room. Imaging the heart in vivo using a slowly rotating C-arm system is extremely challenging due to the limitations of the data acquisition system and the high temporal resolution required to avoid motion artifacts. In this paper, a data acquisition scheme and an image reconstruction method are proposed to achieve time-resolved cardiac cone-beam computed tomography imaging with isotropic spatial resolution and high temporal resolution using a slowly rotating C-arm system. The data are acquired within 14 s using a single gantry rotation with a short-scan angular range. The enabling image reconstruction method is the prior image constrained compressed sensing (PICCS) algorithm. The prior image is reconstructed from data acquired over all cardiac phases; each cardiac phase is then reconstructed from the retrospectively gated cardiac data using the PICCS algorithm. To validate the method, several studies were performed. Both numerical simulations using a hybrid motion phantom with static background anatomy and physical phantom studies demonstrate that the proposed method enables accurate reconstruction of image objects with high isotropic spatial resolution. A canine animal model scanned in vivo was used to further validate the method. [ABSTRACT FROM PUBLISHER]
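The PICCS objective itself is compact: minimize alpha * TV(x - x_prior) + (1 - alpha) * TV(x) subject to consistency with the (few) gated projections. The sketch below is a toy subgradient version under an assumed dense random system, with the same forward-difference TV subgradient as in the TV-POCS sketch earlier in this list; it is not the reconstruction pipeline used in the paper.

```python
# Toy PICCS: prior-image TV + plain TV, with a gradient data-consistency step.
import numpy as np

def tv_grad(u, eps=1e-8):
    gx = np.zeros_like(u); gx[:, :-1] = np.diff(u, axis=1)
    gy = np.zeros_like(u); gy[:-1, :] = np.diff(u, axis=0)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / mag, gy / mag
    g = -px - py
    g[:, 1:] += px[:, :-1]
    g[1:, :] += py[:-1, :]
    return g

rng = np.random.default_rng(10)
n = 16
prior = np.zeros((n, n)); prior[3:13, 3:13] = 1.0   # all-phase prior image
truth = prior.copy(); truth[6:10, 6:10] = 2.0       # phase-specific change
A = rng.random((80, n * n))                         # few gated projections
y = A @ truth.ravel()

alpha, lr = 0.5, 1.0 / np.linalg.norm(A, 2) ** 2
x = prior.ravel().copy()
for _ in range(300):
    x += lr * A.T @ (y - A @ x)                     # data-consistency step
    g = alpha * tv_grad((x - prior.ravel()).reshape(n, n)) \
        + (1 - alpha) * tv_grad(x.reshape(n, n))
    x -= 0.01 * g.ravel()                           # PICCS subgradient step
print(float(np.abs(x - truth.ravel()).mean()))
```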
- Published
- 2012
- Full Text
- View/download PDF