51 results for "physics.med-ph"
Search Results
2. A Review of Machine Learning Applications for the Proton Magnetic Resonance Spectroscopy Workflow
- Author
-
van de Sande, Dennis M.J., Merkofer, Julian P., Amirrajab, Sina, Veta, Mitko, van Sloun, Ruud J.G., Versluis, Maarten J., Jansen, Jacobus F.A., van den Brink, Johan S., and Breeuwer, Marcel
- Abstract
This literature review presents a comprehensive overview of machine learning (ML) applications in proton magnetic resonance spectroscopy (MRS). As the use of ML techniques in MRS continues to grow, this review aims to provide the MRS community with a structured overview of the state-of-the-art methods. Specifically, we examine and summarize studies published between 2017 and 2023 from major journals in the magnetic resonance field. We categorize these studies based on a typical MRS workflow, including data acquisition, processing, analysis, and artificial data generation. Our review reveals that ML in MRS is still in its early stages, with a primary focus on processing and analysis techniques, and less attention given to data acquisition. We also found that many studies use similar model architectures, with little comparison to alternative architectures. Additionally, the generation of artificial data is a crucial topic, with no consistent method for its generation. Furthermore, many studies demonstrate that artificial data suffers from generalization issues when tested on in-vivo data. We also conclude that risks related to ML models should be addressed, particularly for clinical applications. Therefore, output uncertainty measures and model biases are critical to investigate. Nonetheless, the rapid development of ML in MRS and the promising results from the reviewed studies justify further research in this field.
- Published
- 2023
3. AAPM DL-Sparse-View CT Challenge Submission Report: Designing an Iterative Network for Fanbeam-CT with Unknown Geometry
- Author
-
Genzel, Martin, Macdonald, Jan, and März, Maximilian
- Abstract
This report is dedicated to a short motivation and description of our contribution to the AAPM DL-Sparse-View CT Challenge (team name: "robust-and-stable"). The task is to recover breast model phantom images from limited view fanbeam measurements using data-driven reconstruction techniques. The challenge is distinctive in the sense that participants are provided with a collection of ground truth images and their noiseless, subsampled sinograms (as well as the associated limited view filtered backprojection images), but not with the actual forward model. Therefore, our approach first estimates the fanbeam geometry in a data-driven geometric calibration step. In a subsequent two-step procedure, we design an iterative end-to-end network that enables the computation of near-exact solutions.
- Published
- 2021
4. Deep learning can accelerate and quantify simulated localized correlated spectroscopy
- Author
-
Iqbal, Zohaib, Nguyen, Dan, Thomas, Michael Albert, and Jiang, Steve
- Abstract
Nuclear magnetic resonance spectroscopy (MRS) allows for the determination of atomic structures and concentrations of different chemicals in a biochemical sample of interest. MRS is used in vivo clinically to aid in the diagnosis of several pathologies that affect metabolic pathways in the body. Typically, this experiment produces a one dimensional (1D) 1H spectrum containing several peaks that are well associated with biochemicals, or metabolites. However, since many of these peaks overlap, distinguishing chemicals with similar atomic structures becomes much more challenging. One technique capable of overcoming this issue is the localized correlated spectroscopy (L-COSY) experiment, which acquires a second spectral dimension and spreads overlapping signal across this second dimension. Unfortunately, the acquisition of a two dimensional (2D) spectroscopy experiment is extremely time consuming. Furthermore, quantitation of a 2D spectrum is more complex. Recently, artificial intelligence has emerged in the field of medicine as a powerful force capable of diagnosing disease, aiding in treatment, and even predicting treatment outcome. In this study, we utilize deep learning to: 1) accelerate the L-COSY experiment and 2) quantify L-COSY spectra. We demonstrate that our deep learning model greatly outperforms compressed sensing based reconstruction of L-COSY spectra at higher acceleration factors. Specifically, at four-fold acceleration, our method has less than 5% normalized mean squared error, whereas compressed sensing yields 20% normalized mean squared error. We also show that at low SNR (25% noise compared to maximum signal), our deep learning model has less than 8% normalized mean squared error for quantitation of L-COSY spectra. These pilot simulation results appear promising and may help improve the efficiency and accuracy of L-COSY experiments in the future.
- Published
- 2021
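The abstract above reports reconstruction quality as normalized mean squared error (under 5% for the deep learning method versus 20% for compressed sensing at four-fold acceleration). As a point of reference, one common definition of this metric, shown here as an illustrative sketch and not as code from the paper, is the squared error energy normalized by the reference signal energy:

```python
def nmse(estimate, reference):
    """Normalized mean squared error: ||est - ref||^2 / ||ref||^2."""
    num = sum((e - r) ** 2 for e, r in zip(estimate, reference))
    den = sum(r ** 2 for r in reference)
    return num / den

# A close reconstruction yields a small NMSE.
ref = [1.0, 2.0, 3.0, 4.0]
est = [1.1, 1.9, 3.2, 3.8]
print(nmse(est, ref))
```

Note that NMSE conventions vary (some papers normalize per sample or take a square root), so the exact definition used by a given study should be checked.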
5. Multi-Threshold Attention U-Net (MTAU) based Model for Multimodal Brain Tumor Segmentation in MRI scans
- Author
-
Awasthi, Navchetan, Pardasani, Rohit, and Gupta, Swati
- Abstract
Gliomas are one of the most frequent brain tumors and are classified into high grade and low grade gliomas. The segmentation of various regions such as tumor core, enhancing tumor etc. plays an important role in determining severity and prognosis. Here, we have developed a multi-threshold model based on attention U-Net for identification of various regions of the tumor in magnetic resonance imaging (MRI). We propose a multi-path segmentation and built three separate models for the different regions of interest. The proposed model achieved mean Dice Coefficient of 0.59, 0.72, and 0.61 for enhancing tumor, whole tumor and tumor core respectively on the training dataset. The same model gave mean Dice Coefficient of 0.57, 0.73, and 0.61 on the validation dataset and 0.59, 0.72, and 0.57 on the test dataset.
- Published
- 2021
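The entry above evaluates segmentation with the mean Dice coefficient. For context, the Dice coefficient of two binary masks is twice their overlap divided by their total size; a minimal sketch (illustrative, not the authors' code) using sets of voxel indices:

```python
def dice(pred, truth):
    """Dice coefficient for binary masks given as sets of voxel indices."""
    inter = len(pred & truth)
    return 2 * inter / (len(pred) + len(truth))

# Two 3-voxel masks sharing 2 voxels: Dice = 2*2 / (3+3) = 0.666...
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 0)}
print(dice(pred, truth))
```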
8. A metabolite-specific 3D stack-of-spiral bSSFP sequence for improved lactate imaging in hyperpolarized [1-13C]pyruvate studies on a 3T clinical scanner.
- Author
-
Tang, Shuyu, Bok, Robert, Qin, Hecong, Reed, Galen, VanCriekinge, Mark, Delos Santos, Romelyn, Overall, William, Santos, Juan, Gordon, Jeremy, Wang, Zhen Jane, Vigneron, Daniel B, and Larson, Peder EZ
- Abstract
Purpose: The balanced steady-state free precession sequence has been previously explored to improve the efficient use of nonrecoverable hyperpolarized 13C magnetization, but suffers from poor spectral selectivity and long acquisition time. The purpose of this study was to develop a novel metabolite-specific 3D bSSFP ("MS-3DSSFP") sequence with stack-of-spiral readouts for improved lactate imaging in hyperpolarized [1-13C]pyruvate studies on a clinical 3T scanner. Methods: Simulations were performed to evaluate the spectral response of the MS-3DSSFP sequence. Thermal 13C phantom experiments were performed to validate the MS-3DSSFP sequence. In vivo hyperpolarized [1-13C]pyruvate studies were performed to compare the MS-3DSSFP sequence with metabolite-specific gradient echo ("MS-GRE") sequences for lactate imaging. Results: Simulations, phantom, and in vivo studies demonstrated that the MS-3DSSFP sequence achieved spectrally selective excitation of lactate while minimally perturbing other metabolites. Compared with MS-GRE sequences, the MS-3DSSFP sequence showed approximately a 2.5-fold SNR improvement for lactate imaging in rat kidneys, prostate tumors in a mouse model, and human kidneys. Conclusions: Improved lactate imaging using the MS-3DSSFP sequence in hyperpolarized [1-13C]pyruvate studies was demonstrated in animals and humans. The MS-3DSSFP sequence could be applied to other clinical applications such as in the brain, or adapted for imaging other metabolites such as pyruvate and bicarbonate.
- Published
- 2020
9. Measurement of 139La(p,x) cross sections from 35–60 MeV by stacked-target activation
- Author
-
Morrell, JT, Voyles, AS, Basunia, MS, Batchelder, JC, Matthews, EF, and Bernstein, LA
- Abstract
A stacked target of natural lanthanum foils (99.9119% 139La) was irradiated using a 60 MeV proton beam at the LBNL 88-Inch Cyclotron. 139La(p,x) cross sections are reported between 35–60 MeV for nine product radionuclides. The primary motivation for this measurement was the need to quantify the production of 134Ce. As a positron-emitting analogue of the promising medical radionuclide 225Ac, 134Ce is desirable for in vivo bio-distribution assays of this emerging radiopharmaceutical. The results of this measurement were compared to the nuclear model codes TALYS, EMPIRE, and ALICE (using default parameters), which showed significant deviation from the measured values.
- Published
- 2020
10. Automatic Hip Fracture Identification and Functional Subclassification with Deep Learning.
- Author
-
Krogue, Justin D, Cheng, Kaiyang V, Hwang, Kevin M, Toogood, Paul, Meinberg, Eric G, Geiger, Erik J, Zaid, Musa, McGill, Kevin C, Patel, Rina, Sohn, Jae Ho, Wright, Alexandra, Darger, Bryan F, Padrez, Kevin A, Ozhinsky, Eugene, Majumdar, Sharmila, and Pedoia, Valentina
- Abstract
Purpose: To investigate the feasibility of automatic identification and classification of hip fractures using deep learning, which may improve outcomes by reducing diagnostic errors and decreasing time to operation. Materials and Methods: Hip and pelvic radiographs from 1118 studies were reviewed, and 3026 hips were labeled via bounding boxes and classified as normal, displaced femoral neck fracture, nondisplaced femoral neck fracture, intertrochanteric fracture, previous open reduction and internal fixation, or previous arthroplasty. A deep learning-based object detection model was trained to automate the placement of the bounding boxes. A Densely Connected Convolutional Neural Network (DenseNet) was trained on a subset of the bounding box images, and its performance was evaluated on a held-out test set and by comparison on a 100-image subset with two groups of human observers: fellowship-trained radiologists and orthopedists, and senior residents in emergency medicine, radiology, and orthopedics. Results: The binary accuracy of this model for detecting a fracture was 93.7% (95% confidence interval [CI]: 90.8%, 96.5%), with a sensitivity of 93.2% (95% CI: 88.9%, 97.1%) and a specificity of 94.2% (95% CI: 89.7%, 98.4%). Multiclass classification accuracy was 90.8% (95% CI: 87.5%, 94.2%). Compared with human observers, the model achieved at least expert-level classification accuracy under all conditions. Additionally, when the model was used as an aid, human performance improved, with aided resident performance approximating unaided fellowship-trained expert performance in the multiclass classification. Conclusion: A deep learning model identified and classified hip fractures with at least expert-level performance and, when used as an aid, improved human performance, with aided resident performance approximating that of unaided fellowship-trained attending physicians. Supplemental material is available for this article. © RSNA, 2
- Published
- 2020
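The abstract above reports binary accuracy, sensitivity, and specificity. These follow directly from confusion-matrix counts; a minimal illustration (not the study's code, and omitting the confidence intervals) is:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels (1 = fracture)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    accuracy = (tp + tn) / len(pairs)
    sensitivity = tp / (tp + fn)  # fraction of true fractures detected
    specificity = tn / (tn + fp)  # fraction of normals correctly cleared
    return accuracy, sensitivity, specificity

print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 0]))
```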
11. A Deep Learning-Based Method for Automatic Segmentation of Proximal Femur from Quantitative Computed Tomography Images
- Author
-
Zhao, Chen, Keyak, Joyce H, Tang, Jinshan, Kaneko, Tadashi S, Khosla, Sundeep, Amin, Shreyasee, Atkinson, Elizabeth J, Zhao, Lan-Juan, Serou, Michael J, Zhang, Chaoyang, Shen, Hui, Deng, Hong-Wen, and Zhou, Weihua
- Abstract
Purpose: Proximal femur image analyses based on quantitative computed tomography (QCT) provide a method to quantify bone density and evaluate osteoporosis and fracture risk. We aim to develop a deep-learning-based method for automatic proximal femur segmentation. Methods and Materials: We developed a 3D image segmentation method based on V-Net, an end-to-end fully convolutional neural network (CNN), to extract the proximal femur from QCT images automatically. The proposed V-Net methodology adopts a compound loss function, which includes a Dice loss and an L2 regularizer. We performed experiments to evaluate the effectiveness of the proposed segmentation method. In the experiments, a QCT dataset which included 397 QCT subjects was used. For the QCT image of each subject, the ground truth for the proximal femur was delineated by a well-trained scientist. In the experiments, for the entire cohort and then for male and female subjects separately, 90% of the subjects were used in 10-fold cross-validation for training and internal validation and to select the optimal parameters of the proposed models; the remaining subjects were used to evaluate the performance of the models. Results: Visual comparison demonstrated high agreement between the model predictions and the ground truth contours of the proximal femur in the QCT images. In the entire cohort, the proposed model achieved a Dice score of 0.9815, a sensitivity of 0.9852, and a specificity of 0.9992. In addition, an R2 score of 0.9956 (p < 0.001) was obtained when comparing the volumes measured by the model predictions with the ground truth. Conclusion: This method shows great promise for clinical application to QCT and QCT-based finite element analysis of the proximal femur for evaluating osteoporosis and hip fracture risk.
- Published
- 2020
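The abstract above describes a compound loss combining a Dice loss with an L2 regularizer. A toy sketch of such a loss on flattened probability maps (illustrative only; the paper's exact formulation and weighting are not given here, so the smoothing term `eps` and weight `lam` are assumed values):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on per-voxel probabilities in [0, 1]; 0 = perfect overlap."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def compound_loss(pred, target, weights, lam=1e-4):
    """Dice loss plus an L2 penalty on the model weights (assumed weighting)."""
    l2 = lam * sum(w * w for w in weights)
    return dice_loss(pred, target) + l2

print(compound_loss([1.0, 0.0], [1.0, 0.0], weights=[0.5, -0.5]))
```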
12. Physics-informed neural networks for myocardial perfusion MRI quantification
- Author
-
van Herten, Rudolf L. M., Chiribiri, Amedeo, Breeuwer, Marcel, Veta, Mitko, and Scannell, Cian M.
- Abstract
Tracer-kinetic models allow for the quantification of kinetic parameters such as blood flow from dynamic contrast-enhanced magnetic resonance (MR) images. Fitting the observed data with multi-compartment exchange models is desirable, as they are physiologically plausible and resolve directly for blood flow and microvascular function. However, the reliability of model fitting is limited by the low signal-to-noise ratio, temporal resolution, and acquisition length. This may result in inaccurate parameter estimates. This study introduces physics-informed neural networks (PINNs) as a means to perform myocardial perfusion MR quantification, which provides a versatile scheme for the inference of kinetic parameters. These neural networks can be trained to fit the observed perfusion MR data while respecting the underlying physical conservation laws described by a multi-compartment exchange model. Here, we provide a framework for the implementation of PINNs in myocardial perfusion MR. The approach is validated both in silico and in vivo. In the in silico study, an overall reduction in mean-squared error with the ground-truth parameters was observed compared to a standard non-linear least squares fitting approach. The in vivo study demonstrates that the method produces parameter values comparable to those previously found in literature, as well as providing parameter maps which match the clinical diagnosis of patients.
- Published
- 2020
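The abstract above compares PINNs against standard non-linear least-squares fitting of compartment models. As background, fitting a simple one-compartment tracer-kinetic model to a tissue curve can be sketched as follows; this is a toy grid search standing in for NLLS, and the model form, the parameter names `ktrans`/`kep`, and the grids are illustrative assumptions, not the study's multi-compartment formulation:

```python
import math

def tissue_curve(ktrans, kep, aif, dt):
    """Toy one-compartment model: C_t = ktrans * (AIF convolved with exp(-kep*t))."""
    out = []
    for n in range(len(aif)):
        acc = 0.0
        for m in range(n + 1):
            acc += aif[m] * math.exp(-kep * (n - m) * dt) * dt
        out.append(ktrans * acc)
    return out

def fit_grid(aif, data, dt, ktrans_grid, kep_grid):
    """Brute-force least-squares fit over a parameter grid (stand-in for NLLS)."""
    best = None
    for kt in ktrans_grid:
        for kp in kep_grid:
            model = tissue_curve(kt, kp, aif, dt)
            sse = sum((m - d) ** 2 for m, d in zip(model, data))
            if best is None or sse < best[0]:
                best = (sse, kt, kp)
    return best[1], best[2]

# Recover known parameters from noiseless synthetic data.
aif = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
data = tissue_curve(0.3, 0.5, aif, 1.0)
print(fit_grid(aif, data, 1.0, [0.1, 0.3, 0.5], [0.25, 0.5, 1.0]))
```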
17. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue.
- Author
-
Zhang, Yijie, de Haan, Kevin, Rivenson, Yair, Li, Jingxi, Delis, Apostolos, and Ozcan, Aydogan
- Abstract
Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
- Published
- 2020
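The abstract above describes a user-defined "digital staining matrix" that specifies, per pixel, how different virtual stains are blended. The blending idea itself can be illustrated with a toy per-pixel weighted average (purely illustrative; the paper realizes this with a deep neural network, not a fixed average, and the data layout below is an assumption):

```python
def blend_stains(stain_images, weight_maps):
    """Blend per-stain RGB images using per-pixel weight maps.

    stain_images: {name: H x W grid of (r, g, b) tuples}
    weight_maps:  {name: H x W grid of weights, assumed to sum to 1 per pixel}
    """
    names = list(stain_images)
    h = len(stain_images[names[0]])
    w = len(stain_images[names[0]][0])
    return [[tuple(
        sum(weight_maps[n][i][j] * stain_images[n][i][j][c] for n in names)
        for c in range(3))
        for j in range(w)] for i in range(h)]

# One pixel, two stains blended 50/50.
imgs = {"he": [[(1.0, 0.0, 0.0)]], "jones": [[(0.0, 1.0, 0.0)]]}
wmaps = {"he": [[0.5]], "jones": [[0.5]]}
print(blend_stains(imgs, wmaps))
```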
18. Multiparametric Cardiac 18F-FDG PET in Humans: Kinetic Model Selection and Identifiability Analysis.
- Author
-
Zuo, Yang, Badawi, Ramsey D, Foster, Cameron C, Smith, Thomas, López, Javier E, and Wang, Guobao
- Abstract
Cardiac 18F-FDG PET has been used in clinics to assess myocardial glucose metabolism. Its ability for imaging myocardial glucose transport, however, has rarely been exploited in clinics. Using the dynamic FDG-PET scans of ten patients with coronary artery disease, we investigate in this paper appropriate dynamic scan and kinetic modeling protocols for efficient quantification of myocardial glucose transport. Three kinetic models and the effect of scan duration were evaluated by using statistical fit quality, assessing the impact on kinetic quantification, and analyzing the practical identifiability. The results show that the kinetic model selection depends on the scan duration. The reversible two-tissue model was needed for a one-hour dynamic scan. The irreversible two-tissue model was optimal for a scan duration of around 10-15 minutes. If the scan duration was shortened to 2-3 minutes, a one-tissue model was the most appropriate. For global quantification of myocardial glucose transport, we demonstrated that an early dynamic scan with a duration of 10-15 minutes and irreversible kinetic modeling was comparable to the full one-hour scan with reversible kinetic modeling. Myocardial glucose transport quantification provides an additional physiological parameter on top of the existing assessment of glucose metabolism and has the potential to enable single tracer multiparametric imaging in the myocardium.
- Published
- 2020
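The abstract above selects among kinetic models using statistical fit quality. One standard way to trade fit quality against model complexity is the Akaike information criterion; a brief sketch (illustrative only; the paper's exact criteria are not reproduced here, and the model names and fit numbers below are made up):

```python
import math

def aic(sse, n_points, n_params):
    """Akaike information criterion for a least-squares fit (Gaussian errors)."""
    return n_points * math.log(sse / n_points) + 2 * n_params

def select_model(fits):
    """fits: {name: (sse, n_points, n_params)}; return the name with lowest AIC."""
    return min(fits, key=lambda name: aic(*fits[name]))

# Hypothetical fits: the extra parameters of the reversible model
# barely improve the fit, so the irreversible model wins on AIC.
fits = {
    "one_tissue": (10.0, 60, 3),
    "two_tissue_irrev": (5.0, 60, 5),
    "two_tissue_rev": (4.9, 60, 7),
}
print(select_model(fits))
```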
19. Physics-informed neural networks for myocardial perfusion MRI quantification
- Author
-
van Herten, Rudolf L. M., Chiribiri, Amedeo, Breeuwer, Marcel, Veta, Mitko, Scannell, Cian M., van Herten, Rudolf L. M., Chiribiri, Amedeo, Breeuwer, Marcel, Veta, Mitko, and Scannell, Cian M.
- Abstract
Tracer-kinetic models allow for the quantification of kinetic parameters such as blood flow from dynamic contrast-enhanced magnetic resonance (MR) images. Fitting the observed data with multi-compartment exchange models is desirable, as they are physiologically plausible and resolve directly for blood flow and microvascular function. However, the reliability of model fitting is limited by the low signal-to-noise ratio, temporal resolution, and acquisition length. This may result in inaccurate parameter estimates. This study introduces physics-informed neural networks (PINNs) as a means to perform myocardial perfusion MR quantification, which provides a versatile scheme for the inference of kinetic parameters. These neural networks can be trained to fit the observed perfusion MR data while respecting the underlying physical conservation laws described by a multi-compartment exchange model. Here, we provide a framework for the implementation of PINNs in myocardial perfusion MR. The approach is validated both in silico and in vivo. In the in silico study, an overall reduction in mean-squared error with the ground-truth parameters was observed compared to a standard non-linear least squares fitting approach. The in vivo study demonstrates that the method produces parameter values comparable to those previously found in literature, as well as providing parameter maps which match the clinical diagnosis of patients.
- Published
- 2020
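The conservation law a perfusion PINN enforces is the compartment-model ODE. The sketch below isolates that physics-residual idea, replacing the neural network with the simulated curve itself and using a one-compartment model; the input function and parameters are hypothetical:

```python
import math

dt = 1.0
t = [i * dt for i in range(120)]
Cp = [math.exp(-ti / 25.0) * ti / 25.0 for ti in t]   # hypothetical input function

# Simulate a noiseless tissue curve with known parameters via forward Euler.
K1_true, k2_true = 0.8, 0.15
CT = [0.0]
for i in range(len(t) - 1):
    CT.append(CT[-1] + dt * (K1_true * Cp[i] - k2_true * CT[-1]))

# The physics residual the PINN penalizes: dCT/dt - K1*Cp + k2*CT = 0.
# Treating the measured curve as the network output, the kinetic
# parameters follow from linear least squares on that residual.
d = [(CT[i + 1] - CT[i]) / dt for i in range(len(t) - 1)]
a11 = sum(cp * cp for cp in Cp[:-1])
a12 = -sum(cp * c for cp, c in zip(Cp[:-1], CT[:-1]))
a22 = sum(c * c for c in CT[:-1])
b1 = sum(cp * di for cp, di in zip(Cp[:-1], d))
b2 = -sum(c * di for c, di in zip(CT[:-1], d))
det = a11 * a22 - a12 * a12
K1 = (b1 * a22 - a12 * b2) / det
k2 = (a11 * b2 - a12 * b1) / det
print(round(K1, 3), round(k2, 3))  # recovers 0.8 and 0.15
```

The full method trains a network on both a data-fidelity term and this physics term, which is what regularizes the fit against noise and coarse temporal sampling.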
20. A regional bolus tracking and real-time B1 calibration method for hyperpolarized 13C MRI.
- Author
-
Tang, Shuyu, Milshteyn, Eugene, Reed, Galen, Gordon, Jeremy, Bok, Robert, Zhu, Xucheng, Zhu, Zihan, Vigneron, Daniel B, and Larson, Peder EZ
- Abstract
Purpose: Acquisition timing and B1 calibration are two key factors that affect the quality and accuracy of hyperpolarized 13C MRI. The goal of this project was to develop a new approach using regional bolus tracking to trigger Bloch-Siegert B1 mapping and real-time B1 calibration based on regional B1 measurements, followed by dynamic imaging of hyperpolarized 13C metabolites in vivo. Methods: The proposed approach was implemented on a system that allows real-time data processing and real-time control of the sequence. Real-time center-frequency calibration upon bolus arrival was also added. The feasibility of applying the proposed framework to in vivo hyperpolarized 13C imaging was tested on healthy rats, tumor-bearing mice, and a healthy volunteer on a clinical 3T scanner following hyperpolarized [1-13C]pyruvate injection. Multichannel receive coils were used in the human study. Results: Automatic acquisition timing based on either the regional bolus peak or bolus arrival was achieved with the proposed framework. Reduced blurring artifacts in real-time reconstructed images were observed with real-time center-frequency calibration. Real-time computed B1 scaling factors agreed with real-time acquired B1 maps. Flip-angle correction using B1 maps resulted in a more consistent quantification of metabolic activity (i.e., pyruvate-to-lactate conversion, kPL). Experiment recordings are provided to demonstrate the real-time actions during the experiment. Conclusions: The proposed method was successfully demonstrated on animals and a human volunteer, and is anticipated to improve the efficient use of the hyperpolarized signal as well as the accuracy and robustness of hyperpolarized 13C imaging.
- Published
- 2019
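The flip-angle correction described above can be illustrated with a toy readout model. The sketch below (hypothetical B1 scale; T1 relaxation ignored) shows how dividing the prescribed flip angle by the measured B1 scale restores the nominal excitation:

```python
import math

def readout(M0, flip_deg, n):
    """Signal from n RF excitations of hyperpolarized magnetization:
    each pulse samples M*sin(flip) and leaves M*cos(flip) behind."""
    th = math.radians(flip_deg)
    M, sig = M0, []
    for _ in range(n):
        sig.append(M * math.sin(th))
        M *= math.cos(th)
    return sig

# A regional B1 scale of 1.2 means a prescribed 10-degree pulse plays as 12.
nominal, b1_scale = 10.0, 1.2
uncorrected = readout(1.0, nominal * b1_scale, 8)

# Real-time calibration: divide the prescribed flip by the measured scale,
# so the flip angle actually achieved matches the nominal prescription.
corrected = readout(1.0, (nominal / b1_scale) * b1_scale, 8)
reference = readout(1.0, nominal, 8)
```

Overflipping reads out too much magnetization early and depletes the non-renewable hyperpolarized signal, which is why the calibration also improves kPL consistency.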
21. A GPU-based multi-criteria optimization algorithm for HDR brachytherapy.
- Author
-
Bélanger, Cédric, Cui, Songye, Ma, Yunzhi, Després, Philippe, Adam M Cunha, J, and Beaulieu, Luc
- Abstract
Currently in HDR brachytherapy planning, manual fine-tuning of an objective function is necessary to obtain case-specific valid plans. This study intends to facilitate this process by proposing a patient-specific inverse planning algorithm for HDR prostate brachytherapy: GPU-based multi-criteria optimization (gMCO). Two GPU-based optimization engines, simulated annealing (gSA) and a quasi-Newton optimizer (gL-BFGS), were implemented to compute multiple plans in parallel. After evaluating the equivalence and computational performance of the two engines, one was selected for the gMCO algorithm. Five hundred sixty-two previously treated prostate HDR cases were divided into a validation set (100) and a test set (462). In the validation set, the number of Pareto-optimal plans needed to achieve the best plan quality was determined for the gMCO algorithm. In the test set, gMCO plans were compared with the physician-approved clinical plans. Our results indicated that the optimization process is equivalent between gL-BFGS and gSA, and that the computational performance of gL-BFGS is up to 67 times faster than that of gSA. Over 462 cases, the number of clinically valid plans was 428 (92.6%) for clinical plans and 461 (99.8%) for gMCO plans. The number of valid plans with target [Formula: see text] coverage greater than 95% was 288 (62.3%) for clinical plans and 414 (89.6%) for gMCO plans. The mean planning time was 9.4 s for the gMCO algorithm to generate 1000 Pareto-optimal plans. In conclusion, gL-BFGS is able to compute thousands of SA-equivalent treatment plans within a short time frame. Powered by gL-BFGS, an ultra-fast and robust multi-criteria optimization algorithm was implemented for HDR prostate brachytherapy. Plan pools with various trade-offs can be created with this algorithm. A large-scale comparison against physician-approved clinical plans showed that treatment plan quality could be improved and planning time could be significantly reduced.
- Published
- 2019
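The multi-criteria idea, many plans solved independently for different trade-off weights, can be sketched with weighted-sum scalarization on a toy one-variable problem (plain gradient descent stands in for the paper's GPU L-BFGS; objectives and weights are hypothetical):

```python
# Weighted-sum scalarization: each weight pair yields one Pareto-optimal
# plan; gMCO solves many such problems in parallel on the GPU.
def solve(w_target, w_oar, steps=200, lr=0.1):
    x = 0.0  # hypothetical single dwell-time variable
    for _ in range(steps):
        # Quadratic stand-ins: (x-1)^2 penalizes target underdose,
        # x^2 penalizes OAR dose; minimum sits at w_target/(w_target+w_oar).
        grad = 2 * w_target * (x - 1.0) + 2 * w_oar * x
        x -= lr * grad
    return x

# A pool of nine plans spanning the target-vs-OAR trade-off.
pareto = [solve(w / 10.0, 1.0 - w / 10.0) for w in range(1, 10)]
```

Sweeping the weights produces the plan pool with various trade-offs from which a planner (or a plan-quality criterion) picks the final plan.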
22. Hierarchical Bayesian myocardial perfusion quantification
- Author
-
Scannell, Cian M., Chiribiri, Amedeo, Villa, Adriana D. M., Breeuwer, Marcel, and Lee, Jack
- Abstract
Purpose: Tracer-kinetic models can be used for the quantitative assessment of contrast-enhanced MRI data. However, the model fitting can produce unreliable results due to the limited data acquired and the high noise levels. Such problems are especially prevalent in myocardial perfusion MRI, where constrained numerical deconvolutions and segmental signal averaging are commonly used as compromises in place of the more complex tracer-kinetic models. Methods: In this work, the use of hierarchical Bayesian inference for the parameter estimation is explored. It is shown that with Bayesian inference it is possible to reliably fit the two-compartment exchange model to perfusion data. The use of prior knowledge on the ranges of kinetic parameters and the fact that neighbouring voxels are likely to have similar kinetic properties, combined with a Markov chain Monte Carlo based fitting procedure, significantly improves the reliability of the perfusion estimates compared to the traditional least-squares approach. The method is assessed using both simulated and patient data. Results: The average (standard deviation) normalised mean square error for the distinct noise realisations of a simulation phantom falls from 0.32 (0.55) with the least-squares fitting to 0.13 (0.2) using Bayesian inference. The assessment of the presence of coronary artery disease based purely on the quantitative MBF maps obtained using Bayesian inference matches the visual assessment in all 24 slices. When using the maps obtained by the least-squares fitting, a corresponding assessment is only achieved in 16/24 slices. Conclusion: Bayesian inference allows a reliable, fully automated and user-independent assessment of myocardial perfusion on a voxel-wise level using the two-compartment exchange model.
- Published
- 2019
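The Markov chain Monte Carlo machinery behind the Bayesian fit can be sketched, minus the spatial hierarchy, with a random-walk Metropolis sampler for a single kinetic parameter under a range prior (data and ranges below are hypothetical):

```python
import math
import random

random.seed(0)

# Toy posterior for one kinetic parameter F: Gaussian likelihood on a few
# noisy observations, plus a flat prior restricting F to a physiologically
# plausible range (the abstract's prior knowledge on parameter ranges).
data = [1.1, 0.9, 1.2, 1.0, 0.8]
sigma = 0.2

def log_post(F):
    if not 0.0 < F < 5.0:          # flat prior on a plausible range
        return -math.inf
    return -sum((x - F) ** 2 for x in data) / (2 * sigma ** 2)

# Random-walk Metropolis: propose a small jump, accept with probability
# min(1, posterior ratio).
F, chain = 1.5, []
for _ in range(20000):
    prop = F + random.gauss(0.0, 0.1)
    a = log_post(prop) - log_post(F)
    if a >= 0 or random.random() < math.exp(a):
        F = prop
    chain.append(F)

posterior_mean = sum(chain[5000:]) / 15000   # discard burn-in
```

The chain yields a full posterior rather than a point estimate, which is what makes the voxel-wise MBF maps robust at the cost of long sampling times.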
23. Deep learning-based prediction of kinetic parameters from myocardial perfusion MRI
- Author
-
Scannell, Cian M., Bosch, Piet van den, Chiribiri, Amedeo, Lee, Jack, Breeuwer, Marcel, and Veta, Mitko
- Abstract
The quantification of myocardial perfusion MRI has the potential to provide a fast, automated and user-independent assessment of myocardial ischaemia. However, due to the relatively high noise level and low temporal resolution of the acquired data and the complexity of the tracer-kinetic models, the model fitting can yield unreliable parameter estimates. A solution to this problem is the use of Bayesian inference, which can incorporate prior knowledge and improve the reliability of the parameter estimation. This, however, relies on Markov chain Monte Carlo sampling to approximate the posterior distribution of the kinetic parameters, which is extremely time intensive. This work proposes training convolutional networks to directly predict the kinetic parameters from the signal-intensity curves, using estimates obtained from the Bayesian inference as training targets. This allows fast estimation of the kinetic parameters with a performance similar to that of the Bayesian inference.
- Published
- 2019
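The amortized-inference idea, replacing slow per-voxel sampling with a predictor trained on precomputed estimates, can be sketched far more simply than with a CNN: a nearest-neighbour lookup over a library of simulated signal curves (all values hypothetical):

```python
import math

# Library of (parameter, curve) pairs: the offline, expensive step.
# A trained CNN plays this role in the paper, far more compactly.
t = [i * 0.5 for i in range(20)]

def curve(k):
    """Hypothetical single-parameter signal model: exponential washout."""
    return [math.exp(-k * ti) for ti in t]

library = [(k / 100.0, curve(k / 100.0)) for k in range(5, 100)]

def predict(measured):
    """Fast online step: nearest neighbour in curve space."""
    return min(library,
               key=lambda kc: sum((a - b) ** 2
                                  for a, b in zip(kc[1], measured)))[0]

print(predict(curve(0.42)))  # → 0.42 (on-grid query)
```

As with the CNN, all the cost is paid once when building the library; each new curve is then mapped to a parameter in a single cheap pass.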
28. Area-preserving mapping of 3D ultrasound carotid artery images using density-equalizing reference map
- Author
-
Choi, Gary PT, Chiu, Bernard, and Rycroft, Chris H
- Abstract
Carotid atherosclerosis is a focal disease at the bifurcations of the carotid artery. To quantitatively monitor the local changes in the vessel-wall-plus-plaque thickness (VWT) and compare the VWT distributions for different patients or for the same patients at different ultrasound scanning sessions, a mapping technique is required to adjust for the geometric variability of different carotid artery models. In this work, we propose a novel method called density-equalizing reference map (DERM) for mapping 3D carotid surfaces to a standardized 2D carotid template, with an emphasis on preserving the local geometry of the carotid surface by minimizing the local area distortion. The initial map was generated by a previously described arc-length scaling (ALS) mapping method, which projects a 3D carotid surface onto a 2D non-convex L-shaped domain. A smooth and area-preserving flattened map was subsequently constructed by deforming the ALS map using the proposed algorithm that combines the density-equalizing map and the reference map techniques. This combination allows, for the first time, one-to-one mapping from a 3D surface to a standardized non-convex planar domain in an area-preserving manner. Evaluations using 20 carotid surface models show that the proposed method reduced the area distortion of the flattening maps by over 80% as compared to the ALS mapping method.
- Published
- 2018
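The density-equalizing principle behind DERM is easiest to see in one dimension: mapping coordinates through the normalized cumulative density gives every cell equal area per unit density. A minimal sketch with hypothetical densities:

```python
# 1-D density equalization: cell edges move so each cell's new width is
# proportional to its density, i.e. density becomes uniform. The same
# principle drives DERM on the flattened 2-D carotid domain.
density = [4.0, 1.0, 1.0, 2.0]               # per-cell density on [0, 4)
total = sum(density)

cum = [0.0]
for d in density:
    cum.append(cum[-1] + d)                  # cumulative density at edges

new_edges = [4.0 * c / total for c in cum]   # normalized cumulative map
print(new_edges)  # → [0.0, 2.0, 2.5, 3.0, 4.0]
```

The dense first cell expands (width 1 → 2) while sparse cells contract, which is exactly the local area-distortion control the paper measures on the carotid surfaces.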
30. A Highly Accelerated Parallel Multi-GPU based Reconstruction Algorithm for Generating Accurate Relative Stopping Powers
- Author
-
Karbasi, Paniz, Cai, Ritchie, Schultze, Blake, Nguyen, Hanh, Reed, Jones, Hall, Patrick, Giacometti, Valentina, Bashkirov, Vladimir, Johnson, Robert, Karonis, Nick, Olafsen, Jeffrey, Ordonez, Caesar, Schubert, Keith E, and Schulte, Reinhard W
- Abstract
Low-dose proton computed tomography (pCT) is an evolving imaging modality used in proton therapy planning to address the range uncertainty problem. The goal of pCT is to generate a 3D map of relative stopping power (RSP) measurements with high accuracy within clinically required time frames. Generating accurate RSP values in the shortest possible time is a key goal when developing pCT software. Existing pCT software has met and even exceeded this time goal, but requires clusters with hundreds of processors. This paper describes a novel reconstruction technique using two graphics processing unit (GPU) cores, such as are available on a single Nvidia P100. The proposed reconstruction technique is tested on both simulated and experimental datasets and on two different systems, namely Nvidia K40 and P100 GPUs from IBM and Cray. The experimental results demonstrate that our proposed reconstruction method meets both the timing and accuracy requirements, with the benefits of reasonable cost and efficient use of power.
- Published
- 2018
31. Fraction-variant beam orientation optimization for non-coplanar IMRT.
- Author
-
O'Connor, Daniel, Yu, Victoria, Nguyen, Dan, Ruan, Dan, and Sheng, Ke
- Abstract
Conventional beam orientation optimization (BOO) algorithms for IMRT assume that the same set of beam angles is used for all treatment fractions. In this paper we present a BOO formulation based on group sparsity that simultaneously optimizes non-coplanar beam angles for all fractions, yielding a fraction-variant (FV) treatment plan. Beam angles are selected by solving a multi-fraction fluence map optimization problem involving 500-700 candidate beams per fraction, with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using the fast iterative shrinkage-thresholding algorithm. Our FV BOO algorithm is used to create five-fraction treatment plans for digital phantom, prostate, and lung cases as well as a 30-fraction plan for a head and neck case. A homogeneous PTV dose coverage is maintained in all fractions. The treatment plans are compared with fraction-invariant plans that use a fixed set of beam angles for all fractions. The FV plans reduced OAR mean dose and D2 values on average by 3.3% and 3.8% of the prescription dose, respectively. Notably, mean OAR dose was reduced by 14.3% of prescription dose (rectum), 11.6% (penile bulb), 10.7% (seminal vesicle), 5.5% (right femur), 3.5% (bladder), 4.0% (normal left lung), 15.5% (cochleas), and 5.2% (chiasm). D2 was reduced by 14.9% of prescription dose (right femur), 8.2% (penile bulb), 12.7% (proximal bronchus), 4.1% (normal left lung), 15.2% (cochleas), 10.1% (orbits), 9.1% (chiasm), 8.7% (brainstem), and 7.1% (parotids). Meanwhile, PTV homogeneity defined as D95/D5 improved from 0.92 to 0.95 (digital phantom), from 0.95 to 0.98 (prostate case), and from 0.94 to 0.97 (lung case), and remained constant for the head and neck case. Moreover, the FV plans are dosimetrically similar to conventional plans that use twice as many beams per fraction. Thus, FV BOO offers the potential to reduce delivery time for non-coplanar IMRT.
- Published
- 2018
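The group sparsity term that switches candidate beams off acts through its proximal operator, group soft-thresholding, inside the shrinkage-thresholding iterations. A minimal sketch with hypothetical fluence blocks:

```python
import math

# Proximal step for the penalty lam * sum_g ||x_g||_2: each candidate
# beam's fluence block is shrunk toward zero as a unit, so whole beams
# switch off rather than individual beamlets.
def group_soft_threshold(groups, lam):
    out = []
    for g in groups:
        norm = math.sqrt(sum(v * v for v in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * v for v in g])
    return out

# Hypothetical fluence blocks for three candidate beams.
beams = [[6.0, 8.0], [0.3, 0.4], [0.0, 0.0]]
shrunk = group_soft_threshold(beams, lam=5.0)
print(shrunk)  # → [[3.0, 4.0], [0.0, 0.0], [0.0, 0.0]]
```

The strong beam survives (shrunk but active) while the weak beam is driven exactly to zero, which is how most of the 500-700 candidates per fraction end up inactive.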
32. X-ray luminescence computed tomography using a focused x-ray beam.
- Author
-
Zhang, Wei, Lun, Michael C, Nguyen, Alex Anh-Tu, and Li, Changqing
- Abstract
Due to the low x-ray photon utilization efficiency and low measurement sensitivity of the electron-multiplying charge-coupled device camera setup, collimator-based narrow-beam x-ray luminescence computed tomography (XLCT) usually requires a long measurement time. We report, for the first time, a focused x-ray beam-based XLCT imaging system with measurements by a single optical fiber bundle and a photomultiplier tube (PMT). An x-ray tube with a polycapillary lens was used to generate a focused x-ray beam whose x-ray photon density is 1200 times larger than that of a collimated x-ray beam. An optical fiber bundle was employed to collect and deliver the emitted photons on the phantom surface to the PMT. The total measurement time was reduced to 12.5 min. For numerical simulations of both the single and six fiber bundle cases, we were able to reconstruct six targets successfully. For the phantom experiment, two targets with an edge-to-edge distance of 0.4 mm and a center-to-center distance of 0.8 mm were successfully reconstructed by the measurement setup with a single fiber bundle and a PMT.
- Published
- 2017
33. Results from a Prototype Proton-CT Head Scanner
- Author
-
Johnson, RP, Bashkirov, VA, Coutrakon, G, Giacometti, V, Karbasi, P, Karonis, NT, Ordoñez, CE, Pankuch, M, Sadrozinski, HF-W, Schubert, KE, and Schulte, RW
- Abstract
We are exploring low-dose proton radiography and computed tomography (pCT) as techniques to improve the accuracy of proton treatment planning and to provide artifact-free images for verification and adaptive therapy at the time of treatment. Here we report on comprehensive beam test results with our prototype pCT head scanner. The detector system and data acquisition attain a sustained rate of more than a million protons individually measured per second, allowing a full CT scan to be completed in six minutes or less of beam time. In order to assess the performance of the scanner for proton radiography as well as computed tomography, we have performed numerous scans of phantoms at the Northwestern Medicine Chicago Proton Center including a custom phantom designed to assess the spatial resolution, a phantom to assess the measurement of relative stopping power, and a dosimetry phantom. Some images, performance, and dosimetry results from those phantom scans are presented together with a description of the instrument, the data acquisition system, and the calibration methods.
- Published
- 2017
34. A Real-time Image Reconstruction System for Particle Treatment Planning Using Proton Computed Tomography (pCT)
- Author
-
Ordoñez, Caesar E, Karonis, Nicholas, Duffin, Kirk, Coutrakon, George, Schulte, Reinhard, Johnson, Robert, and Pankuch, Mark
- Abstract
Proton computed tomography (pCT) is a novel medical imaging modality for mapping the distribution of proton relative stopping power (RSP) in medical objects of interest. Compared to conventional X-ray computed tomography, where range uncertainty margins are around 3.5%, pCT has the potential to provide more accurate measurements to within 1%. This improved accuracy will benefit proton-therapy planning and pre-treatment verification. A prototype pCT imaging device has recently been developed that is capable of rapidly acquiring low-dose proton radiographs of head-sized objects. We have also developed advanced, fast image reconstruction software based on distributed computing that utilizes parallel processors and graphics processing units. The combination of fast data acquisition and fast image reconstruction will make RSP images available within minutes for use in clinical settings. The performance of our image reconstruction software has been evaluated using data collected by the prototype pCT scanner from several phantoms.
- Published
- 2017
35. Preliminary Research on Dual-Energy X-Ray Phase-Contrast Imaging
- Author
-
Han, H, Wang, S, Gao, K, Wang, Z, Zhang, C, Yang, M, Zhang, K, and Zhu, P
- Abstract
Dual-energy X-ray absorptiometry (DEXA) has been widely applied to measure bone mineral density (BMD) and the soft-tissue composition of the human body. However, the use of DEXA is greatly limited for low-Z materials such as soft tissues because of their weak absorption. X-ray phase-contrast imaging (XPCI), in contrast, shows significantly improved contrast for soft tissues compared with conventional absorption-based X-ray imaging. In this paper, we propose a novel X-ray phase-contrast method to measure the area density of low-Z materials, comprising a single-energy method and a dual-energy method. The single-energy method calculates the area density of one low-Z material, while the dual-energy method calculates the area densities of two low-Z materials simultaneously. Comparison of the experimental and simulation results with the theoretical ones shows that the new method has the potential to replace DEXA in area density measurement. The new method sets the prerequisites for a future precise and low-dose area-density calculation method for low-Z materials.
- Published
- 2017
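The dual-energy method reduces, per ray, to a small linear system: two measurements with known per-material responses determine two area densities. A sketch with hypothetical coefficients:

```python
# Dual-energy decomposition sketch: mu[i][j] is the (hypothetical)
# response of material j at energy i; each measurement is a linear
# combination of the two unknown area densities (a1, a2), so two
# energies give a 2x2 system per ray.
mu = [[0.50, 0.20],   # energy 1
      [0.30, 0.25]]   # energy 2
a_true = (1.2, 0.8)   # g/cm^2, assumed ground truth for the demo
m = [mu[i][0] * a_true[0] + mu[i][1] * a_true[1] for i in range(2)]

# Solve by Cramer's rule.
det = mu[0][0] * mu[1][1] - mu[0][1] * mu[1][0]
a1 = (m[0] * mu[1][1] - mu[0][1] * m[1]) / det
a2 = (mu[0][0] * m[1] - mu[1][0] * m[0]) / det
print(round(a1, 6), round(a2, 6))  # recovers 1.2 and 0.8
```

The single-energy variant is the degenerate case with one material and one equation; the conditioning of `det` is what limits precision for two similar low-Z materials.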
36. Preliminary Research on Dual-Energy X-Ray Phase-Contrast Imaging
- Author
-
Han, H, Han, H, Wang, S, Gao, K, Wang, Z, Zhang, C, Yang, M, Zhang, K, Zhu, P, Han, H, Han, H, Wang, S, Gao, K, Wang, Z, Zhang, C, Yang, M, Zhang, K, and Zhu, P
- Abstract
Dual-energy X-ray absorptiometry (DEXA) has been widely applied to measure bone mineral density (BMD) and soft-tissue composition of human body. However, the use of DEXA is greatly limited for low-Z materials such as soft tissues due to their weak absorption. While X-ray phase-contrast imaging (XPCI) shows significantly improved contrast in comparison with the conventional standard absorption-based X-ray imaging for soft tissues. In this paper, we propose a novel X-ray phase-contrast method to measure the area density of low-Z materials, including a single-energy method and a dual-energy method. The single-energy method is for the area density calculation of one low-Z material, while the dual-energy method is aiming to calculate the area densities of two low-Z materials simultaneously. Comparing the experimental and simulation results with the theoretic ones, the new method proves to have the potential to replace DEXA in area density measurement. The new method sets the prerequisites for future precise and low-dose area density calculation method of low-Z materials.
- Published
- 2017
37. X-ray luminescence computed tomography using a focused x-ray beam.
- Author
-
Zhang, Wei, Zhang, Wei, Lun, Michael C, Nguyen, Alex Anh-Tu, Li, Changqing, Zhang, Wei, Zhang, Wei, Lun, Michael C, Nguyen, Alex Anh-Tu, and Li, Changqing
- Abstract
Due to the low x-ray photon utilization efficiency and low measurement sensitivity of the electron multiplying charge coupled device camera setup, the collimator-based narrow beam x-ray luminescence computed tomography (XLCT) usually requires a long measurement time. We, for the first time, report a focused x-ray beam-based XLCT imaging system with measurements by a single optical fiber bundle and a photomultiplier tube (PMT). An x-ray tube with a polycapillary lens was used to generate a focused x-ray beam whose x-ray photon density is 1200 times larger than a collimated x-ray beam. An optical fiber bundle was employed to collect and deliver the emitted photons on the phantom surface to the PMT. The total measurement time was reduced to 12.5 min. For numerical simulations of both single and six fiber bundle cases, we were able to reconstruct six targets successfully. For the phantom experiment, two targets with an edge-to-edge distance of 0.4 mm and a center-to-center distance of 0.8 mm were successfully reconstructed by the measurement setup with a single fiber bundle and a PMT.
- Published
- 2017
38. Results from a Prototype Proton-CT Head Scanner
- Author
-
Johnson, RP, Bashkirov, VA, Coutrakon, G, Giacometti, V, Karbasi, P, Karonis, NT, Ordoñez, CE, Pankuch, M, Sadrozinski, HF-W, Schubert, KE, and Schulte, RW
- Abstract
We are exploring low-dose proton radiography and computed tomography (pCT) as techniques to improve the accuracy of proton treatment planning and to provide artifact-free images for verification and adaptive therapy at the time of treatment. Here we report on comprehensive beam test results with our prototype pCT head scanner. The detector system and data acquisition attain a sustained rate of more than a million protons individually measured per second, allowing a full CT scan to be completed in six minutes or less of beam time. In order to assess the performance of the scanner for proton radiography as well as computed tomography, we have performed numerous scans of phantoms at the Northwestern Medicine Chicago Proton Center including a custom phantom designed to assess the spatial resolution, a phantom to assess the measurement of relative stopping power, and a dosimetry phantom. Some images, performance, and dosimetry results from those phantom scans are presented together with a description of the instrument, the data acquisition system, and the calibration methods.
- Published
- 2017
39. A Real-time Image Reconstruction System for Particle Treatment Planning Using Proton Computed Tomography (pCT)
- Author
-
Ordoñez, Caesar E, Karonis, Nicholas, Duffin, Kirk, Coutrakon, George, Schulte, Reinhard, Johnson, Robert, and Pankuch, Mark
- Abstract
Proton computed tomography (pCT) is a novel medical imaging modality for mapping the distribution of proton relative stopping power (RSP) in medical objects of interest. Compared to conventional X-ray computed tomography, where range uncertainty margins are around 3.5%, pCT has the potential to provide more accurate measurements to within 1%. This improved accuracy will be beneficial to proton-therapy planning and pre-treatment verification. A prototype pCT imaging device has recently been developed capable of rapidly acquiring low-dose proton radiographs of head-sized objects. We have also developed an advanced, fast image reconstruction software based on distributed computing that utilizes parallel processors and graphical processing units. The combination of fast data acquisition and fast image reconstruction will enable the availability of RSP images within minutes for use in clinical settings. The performance of our image reconstruction software has been evaluated using data collected by the prototype pCT scanner from several phantoms.
- Published
- 2017
40. X-ray luminescence computed tomography using a focused x-ray beam
- Author
-
Zhang, Wei, Lun, Michael C, Nguyen, Alex Anh-Tu, and Li, Changqing
- Abstract
Due to the low x-ray photon utilization efficiency and low measurement sensitivity of the electron multiplying charge coupled device camera setup, the collimator-based narrow beam x-ray luminescence computed tomography (XLCT) usually requires a long measurement time. We, for the first time, report a focused x-ray beam-based XLCT imaging system with measurements by a single optical fiber bundle and a photomultiplier tube (PMT). An x-ray tube with a polycapillary lens was used to generate a focused x-ray beam whose x-ray photon density is 1200 times larger than that of a collimated x-ray beam. An optical fiber bundle was employed to collect and deliver the emitted photons on the phantom surface to the PMT. The total measurement time was reduced to 12.5 min. For numerical simulations of both single and six fiber bundle cases, we were able to reconstruct six targets successfully. For the phantom experiment, two targets with an edge-to-edge distance of 0.4 mm and a center-to-center distance of 0.8 mm were successfully reconstructed by the measurement setup with a single fiber bundle and a PMT.
- Published
- 2017
41. Imaging Renal Urea Handling in Rats at Millimeter Resolution using Hyperpolarized Magnetic Resonance Relaxometry.
- Author
-
Reed, Galen D, von Morze, Cornelius, Verkman, Alan S, Koelsch, Bertram L, Chaumeil, Myriam M, Lustig, Michael, Ronen, Sabrina M, Bok, Robert A, Sands, Jeff M, Larson, Peder EZ, Wang, Zhen J, Larsen, Jan Henrik Ardenkjær, Kurhanewicz, John, and Vigneron, Daniel B
- Abstract
In vivo spin-spin relaxation time (T2) heterogeneity of hyperpolarized [13C,15N2]urea in the rat kidney was investigated. Selective quenching of the vascular hyperpolarized 13C signal with a macromolecular relaxation agent revealed that a long-T2 component of the [13C,15N2]urea signal originated from the renal extravascular space, thus allowing the vascular and renal filtrate contrast agent pools of the [13C,15N2]urea to be distinguished via multi-exponential analysis. The T2 response to induced diuresis and antidiuresis was measured with two imaging agents: hyperpolarized [13C,15N2]urea and a control agent, hyperpolarized bis-1,1-(hydroxymethyl)-1-13C-cyclopropane-2H8. Large T2 increases in the inner medulla and papilla were observed with the former agent, and not the latter, during antidiuresis. Therefore, [13C,15N2]urea relaxometry is sensitive to two steps of the renal urea handling process: glomerular filtration and the inner-medullary urea transporter (UT)-A1 and UT-A3 mediated urea concentrating process. Simple motion correction and subspace denoising algorithms are presented to aid in the multi-exponential data analysis. Furthermore, a T2-edited, ultra-long echo time sequence was developed for sub-2 mm3 resolution 3D encoding of urea by exploiting relaxation differences in the vascular and filtrate pools.
- Published
- 2016
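The multi-exponential analysis mentioned in this abstract can be sketched as a biexponential T2 fit that separates a short-T2 and a long-T2 pool. The echo times, pool amplitudes and T2 values below are invented for illustration, and SciPy's `curve_fit` stands in for whatever fitting routine the authors actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_fast, t2_fast, a_slow, t2_slow):
    # Two-pool decay: a short-T2 (vascular) plus a long-T2 (filtrate) pool.
    return a_fast * np.exp(-t / t2_fast) + a_slow * np.exp(-t / t2_slow)

# Synthetic echo train (times in seconds; all parameters illustrative).
t = np.linspace(0.05, 4.0, 60)
rng = np.random.default_rng(0)
signal = biexp(t, 1.0, 0.3, 0.6, 2.0) + 0.005 * rng.standard_normal(t.size)

# Fit both pools; p0 is a rough initial guess for the four parameters.
params, _ = curve_fit(biexp, t, signal, p0=(0.5, 0.1, 0.5, 1.0), maxfev=10000)
a_fast, t2_fast, a_slow, t2_slow = params
```

In this toy setting the fitted `t2_slow` plays the role of the long-T2 extravascular (filtrate) component that the paper isolates via multi-exponential analysis.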
42. Imaging Renal Urea Handling in Rats at Millimeter Resolution using Hyperpolarized Magnetic Resonance Relaxometry.
- Author
-
Reed, GD, von Morze, C, Verkman, AS, Koelsch, BL, Chaumeil, MM, Lustig, M, Ronen, SM, Bok, RA, Sands, JM, Larson, PEZ, Wang, ZJ, Larsen, JHA, Kurhanewicz, J, and Vigneron, DB
- Abstract
In vivo spin-spin relaxation time (T2) heterogeneity of hyperpolarized [13C,15N2]urea in the rat kidney was investigated. Selective quenching of the vascular hyperpolarized 13C signal with a macromolecular relaxation agent revealed that a long-T2 component of the [13C,15N2]urea signal originated from the renal extravascular space, thus allowing the vascular and renal filtrate contrast agent pools of the [13C,15N2]urea to be distinguished via multi-exponential analysis. The T2 response to induced diuresis and antidiuresis was measured with two imaging agents: hyperpolarized [13C,15N2]urea and a control agent, hyperpolarized bis-1,1-(hydroxymethyl)-1-13C-cyclopropane-2H8. Large T2 increases in the inner medulla and papilla were observed with the former agent, and not the latter, during antidiuresis. Therefore, [13C,15N2]urea relaxometry is sensitive to two steps of the renal urea handling process: glomerular filtration and the inner-medullary urea transporter (UT)-A1 and UT-A3 mediated urea concentrating process. Simple motion correction and subspace denoising algorithms are presented to aid in the multi-exponential data analysis. Furthermore, a T2-edited, ultra-long echo time sequence was developed for sub-2 mm3 resolution 3D encoding of urea by exploiting relaxation differences in the vascular and filtrate pools.
- Published
- 2016
43. Imaging Renal Urea Handling in Rats at Millimeter Resolution using Hyperpolarized Magnetic Resonance Relaxometry.
- Author
-
Reed, Galen D, von Morze, Cornelius, Verkman, Alan S, Koelsch, Bertram L, Chaumeil, Myriam M, Lustig, Michael, Ronen, Sabrina M, Bok, Robert A, Sands, Jeff M, Larson, Peder EZ, Wang, Zhen J, Larsen, Jan Henrik Ardenkjær, Kurhanewicz, John, and Vigneron, Daniel B
- Abstract
In vivo spin-spin relaxation time (T2) heterogeneity of hyperpolarized [13C,15N2]urea in the rat kidney was investigated. Selective quenching of the vascular hyperpolarized 13C signal with a macromolecular relaxation agent revealed that a long-T2 component of the [13C,15N2]urea signal originated from the renal extravascular space, thus allowing the vascular and renal filtrate contrast agent pools of the [13C,15N2]urea to be distinguished via multi-exponential analysis. The T2 response to induced diuresis and antidiuresis was measured with two imaging agents: hyperpolarized [13C,15N2]urea and a control agent, hyperpolarized bis-1,1-(hydroxymethyl)-1-13C-cyclopropane-2H8. Large T2 increases in the inner medulla and papilla were observed with the former agent, and not the latter, during antidiuresis. Therefore, [13C,15N2]urea relaxometry is sensitive to two steps of the renal urea handling process: glomerular filtration and the inner-medullary urea transporter (UT)-A1 and UT-A3 mediated urea concentrating process. Simple motion correction and subspace denoising algorithms are presented to aid in the multi-exponential data analysis. Furthermore, a T2-edited, ultra-long echo time sequence was developed for sub-2 mm3 resolution 3D encoding of urea by exploiting relaxation differences in the vascular and filtrate pools.
- Published
- 2016
44. Magnetic resonance imaging of electrolysis.
- Author
-
Meir, Arie, Hjouj, Mohammad, Rubinsky, Liel, and Rubinsky, Boris
- Abstract
This study explores the hypothesis that magnetic resonance imaging (MRI) can image the process of electrolysis by detecting pH fronts. The study has relevance to real-time control of cell ablation with electrolysis. To investigate the hypothesis, we compare the following MR imaging sequences: T1-weighted, T2-weighted and proton density (PD), with optical images acquired using pH-sensitive dyes embedded in a physiological saline agar solution phantom treated with electrolysis, and with discrete measurements from a pH microprobe. We further demonstrate the biological relevance of our work using a bacterial E. coli model grown on the phantom. The results demonstrate the ability of MRI to image electrolysis-produced pH changes in a physiological saline phantom and show that these changes correlate with cell death in the E. coli model grown on the phantom. The results are promising and invite further experimental research.
- Published
- 2015
45. Evaluation of Spatial Resolution and Noise Sensitivity of sLORETA Method for EEG Source Localization Using Low-Density Headsets
- Author
-
Saha, S, Nesterets, YI, Tahtali, M, and Gureyev, TE
- Abstract
Electroencephalography (EEG) has enjoyed considerable attention over the past century and has been applied for diagnosis of epilepsy, stroke, traumatic brain injury and other disorders where 3D localization of electrical activity in the brain is potentially of great diagnostic value. In this study we evaluate the precision and accuracy of spatial localization of electrical activity in the brain delivered by a popular reconstruction technique, sLORETA, applied to EEG data collected by two commonly used low-density headsets with 14 and 19 measurement channels, respectively. Numerical experiments were performed for a realistic head model obtained by segmentation of MRI images. The EEG source localization study was conducted with a simulated single active dipole, as well as with two spatially separated simultaneously active dipoles, as a function of dipole positions across the neocortex, with several different noise levels in the EEG signals registered on the scalp. The results indicate that while the reconstruction accuracy and precision of the sLORETA method are consistently high in the case of a single active dipole, even with the low-resolution EEG configurations considered in the present study, successful localization is much more problematic in the case of two simultaneously active dipoles. The quantitative analysis of the width of the reconstructed distributions of the electrical activity allows us to specify the lower bound for the spatial resolution of the sLORETA-based 3D source localization in the considered cases.
- Published
- 2015
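The sLORETA estimate evaluated in this abstract is a standardized minimum-norm solution: the minimum-norm current estimate is divided pointwise by the diagonal of the resolution matrix. A minimal numerical sketch with a random stand-in lead field (the channel and source counts, regularization value and source index below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 19, 40                    # e.g. a 19-channel headset
L = rng.standard_normal((n_sensors, n_sources))  # stand-in lead field

# Scalp potentials generated by a single active source (index 7).
j_true = np.zeros(n_sources)
j_true[7] = 1.0
v = L @ j_true

lam = 1e-3                                       # Tikhonov regularization
G = L @ L.T + lam * np.eye(n_sensors)
T = L.T @ np.linalg.solve(G, np.eye(n_sensors))  # minimum-norm inverse
j_mne = T @ v
R = T @ L                                        # resolution matrix
power = j_mne**2 / np.diag(R)                    # sLORETA standardization
print(int(np.argmax(power)))                     # → 7, the simulated source
```

For a noiseless single dipole this standardization attains zero localization error regardless of the lead field, which is consistent with the abstract's finding that single-dipole localization is the easy case; two simultaneous dipoles break this guarantee.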
46. Magnetic resonance imaging of electrolysis.
- Author
-
Meir, Arie, Hjouj, Mohammad, Rubinsky, Liel, and Rubinsky, Boris
- Abstract
This study explores the hypothesis that magnetic resonance imaging (MRI) can image the process of electrolysis by detecting pH fronts. The study has relevance to real-time control of cell ablation with electrolysis. To investigate the hypothesis, we compare the following MR imaging sequences: T1-weighted, T2-weighted and proton density (PD), with optical images acquired using pH-sensitive dyes embedded in a physiological saline agar solution phantom treated with electrolysis, and with discrete measurements from a pH microprobe. We further demonstrate the biological relevance of our work using a bacterial E. coli model grown on the phantom. The results demonstrate the ability of MRI to image electrolysis-produced pH changes in a physiological saline phantom and show that these changes correlate with cell death in the E. coli model grown on the phantom. The results are promising and invite further experimental research.
- Published
- 2015
47. Dictionary-learning-based reconstruction method for electron tomography.
- Author
-
Liu, Baodong, Yu, Hengyong, Verbridge, Scott S, Sun, Lizhi, and Wang, Ge
- Abstract
Electron tomography usually suffers from so-called “missing wedge” artifacts caused by the limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms both EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.
- Published
- 2014
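The OS-SART algorithm compared in this abstract is an ordered-subsets variant of SART. A plain SART iteration on a toy linear system shows the core normalized back-projection update; the matrix here is a small random stand-in for the ray-weight system matrix, not real projection data:

```python
import numpy as np

# Toy stand-in for tomographic reconstruction: solve A x = b with SART,
# where A would hold ray-over-pixel weights for a real scanner geometry.
rng = np.random.default_rng(0)
A = rng.random((30, 10)) + 0.1   # strictly positive weights
x_true = rng.random(10)
b = A @ x_true                   # consistent "projection" data

def sart(A, b, n_iter=5000, relax=1.0):
    """SART: back-project the row-normalized residual, scaled by
    per-column weight sums; relax in (0, 2) ensures convergence."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)     # per-ray normalization
    col_sums = A.sum(axis=0)     # per-pixel normalization
    for _ in range(n_iter):
        x = x + relax * (A.T @ ((b - A @ x) / row_sums)) / col_sums
    return x

x_rec = sart(A, b)
```

OS-SART applies the same update to ordered subsets of the rows within each pass, which trades a little per-pass accuracy for much faster convergence on large projection sets.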
48. Dictionary‐learning‐based reconstruction method for electron tomography
- Author
-
Liu, Baodong, Yu, Hengyong, Verbridge, Scott S, Sun, Lizhi, and Wang, Ge
- Abstract
Electron tomography usually suffers from so-called “missing wedge” artifacts caused by the limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms both EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.
- Published
- 2014
49. Nanodiamond landmarks for subcellular multimodal optical and electron imaging.
- Author
-
Zurbuchen, Mark A, Lake, Michael P, Kohan, Sirus A, Leung, Belinda, and Bouchard, Louis-S
- Abstract
There is a growing need for biolabels that can be used in both optical and electron microscopies, are non-cytotoxic, and do not photobleach. Such biolabels could enable targeted nanoscale imaging of sub-cellular structures, and help to establish correlations between conjugation-delivered biomolecules and function. Here we demonstrate a sub-cellular multi-modal imaging methodology that enables localization of inert particulate probes, consisting of nanodiamonds having fluorescent nitrogen-vacancy centers. These are functionalized to target specific structures, and are observable by both optical and electron microscopies. Nanodiamonds targeted to the nuclear pore complex are rapidly localized in electron-microscopy diffraction mode to enable "zooming-in" to regions of interest for detailed structural investigations. Optical microscopies reveal nanodiamonds for in-vitro tracking or uptake-confirmation. The approach is general, works down to the single nanodiamond level, and can leverage the unique capabilities of nanodiamonds, such as biocompatibility, sensitive magnetometry, and gene and drug delivery.
- Published
- 2013
50. Nanodiamond landmarks for subcellular multimodal optical and electron imaging.
- Author
-
Zurbuchen, Mark A, Lake, Michael P, Kohan, Sirus A, Leung, Belinda, and Bouchard, Louis-S
- Abstract
There is a growing need for biolabels that can be used in both optical and electron microscopies, are non-cytotoxic, and do not photobleach. Such biolabels could enable targeted nanoscale imaging of sub-cellular structures, and help to establish correlations between conjugation-delivered biomolecules and function. Here we demonstrate a sub-cellular multi-modal imaging methodology that enables localization of inert particulate probes, consisting of nanodiamonds having fluorescent nitrogen-vacancy centers. These are functionalized to target specific structures, and are observable by both optical and electron microscopies. Nanodiamonds targeted to the nuclear pore complex are rapidly localized in electron-microscopy diffraction mode to enable "zooming-in" to regions of interest for detailed structural investigations. Optical microscopies reveal nanodiamonds for in-vitro tracking or uptake-confirmation. The approach is general, works down to the single nanodiamond level, and can leverage the unique capabilities of nanodiamonds, such as biocompatibility, sensitive magnetometry, and gene and drug delivery.
- Published
- 2013