87 results for "Mirabela Rusu"
Search Results
2. Identification of Factors Associated With 30-Day Readmissions After Posterior Lumbar Fusion Using Machine Learning and Traditional Models
- Author
Paymon G. Rezaii, Daniel Herrick, John K. Ratliff, Mirabela Rusu, David Scheinker, and Atman M. Desai
- Subjects
Orthopedics and Sports Medicine, Neurology (clinical)
- Published
- 2023
3. PD22-02 HISTOPATHOLOGY-INFORMED RADIOLOGY BIOMARKERS IMPROVE ARTIFICIAL INTELLIGENCE-BASED DETECTION OF AGGRESSIVE PROSTATE CANCER ON MAGNETIC RESONANCE IMAGING
- Author
Indrani Bhattacharya, Karin Stacke, Richard Fan, James Brooks, Mirabela Rusu, and Geoffrey Sonn
- Subjects
Urology
- Published
- 2023
4. MP09-01 DETECTION OF CLINICALLY SIGNIFICANT PROSTATE CANCER ON MRI: A COMPARISON OF AN ARTIFICIAL INTELLIGENCE MODEL VERSUS RADIOLOGISTS
- Author
Simon John Christoph Soerensen, Richard E. Fan, Indrani Bhattacharya, David S. Lim, Sarir Ahmadi, Xinran Li, Sulaiman Vesal, Mirabela Rusu, and Geoffrey A. Sonn
- Subjects
Urology
- Published
- 2023
5. MP55-12 IMPROVING AUTOMATIC DETECTION OF PROSTATE CANCER ON MRI WITH CLINICAL HISTORY
- Author
David S. Lim, Christian Kunder, Wei Shao, Simon J.C. Soerensen, Richard E. Fan, Pejman Ghanouni, Katherine To'o, James D. Brooks, Geoffrey A. Sonn, and Mirabela Rusu
- Subjects
Urology
- Published
- 2023
6. PD22-03 IMPROVING PROSTATE CANCER DETECTION ON MRI WITH DEEP LEARNING, CLINICAL VARIABLES, AND RADIOMICS
- Author
Sara Saunders, Xinran Li, Sulaiman Vesal, Indrani Bhattacharya, Simon J. C. Soerensen, Richard E. Fan, Mirabela Rusu, and Geoffrey A. Sonn
- Subjects
Urology
- Published
- 2023
7. The Association of Tissue Change and Treatment Success During High-intensity Focused Ultrasound Focal Therapy for Prostate Cancer
- Author
Yash S. Khandwala, Simon John Christoph Soerensen, Shravan Morisetty, Pejman Ghanouni, Richard E. Fan, Sulaiman Vesal, Mirabela Rusu, and Geoffrey A. Sonn
- Subjects
Urology
- Abstract
Tissue preservation strategies have been increasingly used for the management of localized prostate cancer. Focal ablation using ultrasound-guided high-intensity focused ultrasound (HIFU) has demonstrated promising short- and medium-term oncological outcomes. Advancements in HIFU therapy, such as the introduction of tissue change monitoring (TCM), aim to further improve treatment efficacy. The aim was to evaluate the association between intraoperative TCM during HIFU focal therapy for localized prostate cancer and oncological outcomes 12 mo afterward. Seventy consecutive men at a single institution with prostate cancer were prospectively enrolled. Men with prior treatment, metastases, or pelvic radiation were excluded to obtain a final cohort of 55 men. All men underwent HIFU focal therapy followed by magnetic resonance (MR)-fusion biopsy 12 mo later. Tissue change was quantified intraoperatively by measuring the backscatter of ultrasound waves during ablation. Gleason grade group (GG) ≥2 cancer on postablation biopsy was the primary outcome. Secondary outcomes included GG ≥1 cancer, Prostate Imaging Reporting and Data System (PI-RADS) scores ≥3, and evidence of tissue destruction on post-treatment magnetic resonance imaging (MRI). A Student's t-test analysis was performed to evaluate the mean TCM scores and efficacy of ablation measured by histopathology. Multivariate logistic regression was also performed to identify the odds of residual cancer for each unit increase in the TCM score. A lower mean TCM score within the region of the tumor (0.70 vs 0.97, p = 0.02) was associated with the presence of persistent GG ≥2 cancer after HIFU treatment. Adjusting for initial prostate-specific antigen, PI-RADS score, Gleason GG, positive cores, and age, each incremental increase of TCM was associated with an 89% reduction in the odds (odds ratio: 0.11, confidence interval: 0.01-0.97) of having residual GG ≥2 cancer on postablation biopsy. Men with higher mean TCM scores (0.99 vs 0.72, p = 0.02) at the time of treatment were less likely to have an abnormal MRI (PI-RADS ≥3) at 12 mo postoperatively. Cases with high TCM scores also had greater tissue destruction measured on MRI and fewer visible lesions on postablation MRI. Tissue change measured using TCM values during focal HIFU of the prostate was associated with histopathology and radiological outcomes 12 mo after the procedure. In this report, we looked at how well ultrasound changes of the prostate during focal HIFU therapy for the treatment of prostate cancer predict patient outcomes. We found that greater tissue change measured by the HIFU device was associated with less residual cancer at 1 yr. This tool should be used to ensure optimal ablation of the cancer and may improve focal therapy outcomes in the future.
- Published
- 2022
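The adjusted odds ratio in the entry above (0.11 per unit increase in TCM) comes from a multivariate logistic regression. A minimal sketch of that kind of analysis on synthetic stand-in data; the variable names, distributions, and effect size below are illustrative assumptions, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 55  # cohort size in the abstract above

# Hypothetical covariates: TCM score, PSA, PI-RADS, Gleason GG, positive cores, age.
X = np.column_stack([
    rng.normal(0.85, 0.20, n),   # mean TCM score in the tumor region
    rng.normal(7.0, 2.5, n),     # initial PSA
    rng.integers(3, 6, n),       # PI-RADS score
    rng.integers(1, 4, n),       # Gleason grade group
    rng.integers(1, 8, n),       # positive cores
    rng.normal(68, 6, n),        # age
])
# Simulated outcome: higher TCM lowers the odds of residual GG >= 2 cancer.
lin = 1.0 - 2.2 * X[:, 0] + rng.normal(0, 1, n)
y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
or_tcm = np.exp(fit.params[1])            # odds ratio per unit TCM increase
lo, hi = np.exp(fit.conf_int()[1])        # 95% confidence interval
print(f"OR per unit TCM: {or_tcm:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```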
8. Evaluation of post-ablation mpMRI as a predictor of residual prostate cancer after focal high intensity focused ultrasound (HIFU) ablation
- Author
Yash S. Khandwala, Shravan Morisetty, Pejman Ghanouni, Richard E. Fan, Simon John Christoph Soerensen, Mirabela Rusu, and Geoffrey A. Sonn
- Subjects
Male, Neoplasm, Residual, Oncology, Urology, Prostate, Disease Progression, Humans, Prostatic Neoplasms, Prostate-Specific Antigen, Multiparametric Magnetic Resonance Imaging, Aged
- Abstract
To evaluate the performance of multiparametric magnetic resonance imaging (mpMRI) and PSA testing in follow-up after high intensity focused ultrasound (HIFU) focal therapy for localized prostate cancer, a total of 73 men with localized prostate cancer were prospectively enrolled and underwent focal HIFU followed by per-protocol PSA and mpMRI with systematic plus targeted biopsies at 12 months after treatment. We evaluated the association of post-treatment mpMRI and PSA with disease persistence on the post-ablation biopsy. We also assessed post-treatment functional and oncological outcomes. Median age was 69 years (interquartile range (IQR): 66-74) and median PSA was 6.9 ng/dL (IQR: 5.3-9.9). Of 19 men with persistent GG ≥ 2 disease, 58% (11 men) had no visible lesions on MRI. In the 14 men with PIRADS 4 or 5 lesions, 7 (50%) had either no cancer or GG 1 cancer at biopsy. Men with false negative mpMRI findings had higher PSA density (0.16 vs. 0.07 ng/mL …). Persistent GG ≥ 2 cancer may occur after focal HIFU. mpMRI alone without confirmatory biopsy may be insufficient to rule out residual cancer, especially in patients with higher PSA density. Our study also validates previously published studies demonstrating preservation of urinary and sexual function after HIFU treatment.
- Published
- 2022
9. Registration of presurgical MRI and histopathology images from radical prostatectomy via RAPSODI
- Author
Richard E. Fan, Mirabela Rusu, Rewa Sood, Wei Shao, Simon John Christoph Soerensen, Geoffrey A. Sonn, Leo C Chen, Nikola C. Teslovich, Jeffrey B. Wang, Pejman Ghanouni, James D. Brooks, and Christian A. Kunder
- Subjects
Male, Imaging phantom, Prostate cancer, Sørensen–Dice coefficient, registration, Prostate, Quantitative Imaging and Image Processing, Humans, Fixation (histology), Prostatectomy, Prostatic Neoplasms, Seminal Vesicles, Magnetic resonance imaging, General Medicine, cancer labels, Histopathology, Radiology, MRI
- Abstract
Purpose: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with preoperative MRI to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align presurgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy to create accurate cancer labels on MRI. Methods: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a three-dimensional (3D) reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the preoperative MRI. Results: We tested RAPSODI in a phantom study where we simulated various conditions, for example, tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients from three institutions who underwent radical prostatectomy, with very different pathology processing and scanning across the institutions. RAPSODI was evaluated in 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97 ± 0.01 for the prostate, a Hausdorff distance of 1.99 ± 0.70 mm for the prostate boundary, a urethra deviation of 3.09 ± 1.45 mm, and a landmark deviation of 2.80 ± 0.59 mm between registered histopathology images and MRI. Conclusion: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
- Published
- 2020
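Two of the registration-quality metrics reported for RAPSODI above are easy to reproduce in isolation. A self-contained sketch with toy 2D masks standing in for registered histopathology/MRI prostate segmentations; the mask shapes and pixel spacing are assumptions:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_mm(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric Hausdorff distance between the two masks' point sets, in mm."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    return d * spacing

# Two slightly offset disks stand in for histopathology and MRI prostate masks.
yy, xx = np.mgrid[:128, :128]
hist_mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
mri_mask = (yy - 66) ** 2 + (xx - 62) ** 2 < 41 ** 2
print(f"Dice: {dice(hist_mask, mri_mask):.3f}")
print(f"Hausdorff: {hausdorff_mm(hist_mask, mri_mask, spacing=0.5):.2f} mm")
```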
10. Deep learning-based pseudo-mass spectrometry imaging analysis for precision medicine
- Author
Xiaotao Shen, Wei Shao, Chuchu Wang, Liang Liang, Songjie Chen, Sai Zhang, Mirabela Rusu, and Michael P Snyder
- Subjects
Deep Learning, Metabolomics, Reproducibility of Results, Precision Medicine, Molecular Biology, Mass Spectrometry, Information Systems, Chromatography, Liquid
- Abstract
Liquid chromatography-mass spectrometry (LC-MS) based untargeted metabolomics provides systematic profiling of metabolites. Yet its applications in precision medicine (disease diagnosis) have been limited by several challenges, including metabolite identification, information loss, and low reproducibility. Here, we present the deepPseudoMSI project (https://www.deeppseudomsi.org/), which converts LC-MS raw data to pseudo-MS images and then processes them by deep learning for precision medicine, such as disease diagnosis. Extensive tests based on real data demonstrated the superiority of deepPseudoMSI over traditional approaches and the capacity of our method to achieve an accurate individualized diagnosis. Our framework lays the foundation for future metabolic-based precision medicine.
- Published
- 2022
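The central conversion step in deepPseudoMSI, turning LC-MS raw data into a 2D "pseudo-MS image" a CNN can consume, can be sketched as a simple rasterization of peaks over retention time and m/z. Everything below (grid size, ranges, the random peak list) is an illustrative assumption, not the project's actual pipeline:

```python
import numpy as np

def peaks_to_pseudo_image(rt, mz, intensity, shape=(256, 256),
                          rt_range=(0.0, 1200.0), mz_range=(70.0, 1000.0)):
    """Accumulate peak intensities on a retention-time x m/z grid, log-scaled."""
    img = np.zeros(shape, dtype=np.float32)
    r = (rt - rt_range[0]) / (rt_range[1] - rt_range[0]) * shape[0]
    c = (mz - mz_range[0]) / (mz_range[1] - mz_range[0]) * shape[1]
    ri = np.clip(r.astype(int), 0, shape[0] - 1)
    ci = np.clip(c.astype(int), 0, shape[1] - 1)
    np.add.at(img, (ri, ci), intensity)  # sum peaks landing in the same pixel
    return np.log1p(img)

rng = np.random.default_rng(0)
n_peaks = 5000  # synthetic peak list standing in for one LC-MS run
img = peaks_to_pseudo_image(rng.uniform(0, 1200, n_peaks),
                            rng.uniform(70, 1000, n_peaks),
                            rng.lognormal(10, 2, n_peaks))
print(img.shape, float(img.max()))  # the image would then be fed to a CNN
```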
11. Bridging the gap between prostate radiology and pathology through machine learning
- Author
Indrani Bhattacharya, David S. Lim, Han Lin Aung, Xingchen Liu, Arun Seetharaman, Christian A. Kunder, Wei Shao, Simon J. C. Soerensen, Richard E. Fan, Pejman Ghanouni, Katherine J. To'o, James D. Brooks, Geoffrey A. Sonn, and Mirabela Rusu
- Subjects
Male, Prostatectomy, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Prostate, Prostatic Neoplasms, General Medicine, Magnetic Resonance Imaging, Machine Learning, Humans, Radiology
- Abstract
Prostate cancer is the second deadliest cancer for American men. While Magnetic Resonance Imaging (MRI) is increasingly used to guide targeted biopsies for prostate cancer diagnosis, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreement. Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. In this study, we compare different labeling strategies, namely, pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels on whole-mount histopathology images, derived from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images. We analyse the effects these labels have on the performance of the trained machine learning models. Our experiments show that (1) radiologist labels and models trained with them can miss cancers or underestimate cancer extent, (2) digital pathologist labels and models trained with them have high concordance with pathologist labels, and (3) models trained with digital pathologist labels achieve the best performance in prostate cancer detection in two different cohorts with different disease distributions, irrespective of the model architecture used. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
Comment: Indrani Bhattacharya and David S. Lim contributed equally as first authors. Geoffrey A. Sonn and Mirabela Rusu contributed equally as senior authors
- Published
- 2022
12. PD40-03 DIGITAL PATHOLOGY LABELS PERFORM BETTER THAN RADIOLOGIST LABELS FOR TRAINING DEEP LEARNING MODELS TO DETECT PROSTATE CANCER ON MRI
- Author
Indrani Bhattacharya, David Lim, Christian Kunder, Han Lin Aung, Xingchen Liu, Wei Shao, Simon Soerensen, Richard Fan, Pejman Ghanouni, Katherine To'o, James Brooks, Mirabela Rusu, and Geoffrey Sonn
- Subjects
Urology
- Published
- 2022
13. MP55-04 THE PREDICTIVE VALUE OF TISSUE CHANGE DURING ULTRASOUND GUIDED HIGH INTENSITY FOCUSED ULTRASOUND (USGHIFU) FOCAL THERAPY ON TREATMENT SUCCESS
- Author
Yash Khandwala, Shravan Morisetty, Sulaiman Vesal, Richard E. Fan, Ahmed El Kaffas, Pejman Ghanouni, Mirabela Rusu, and Geoffrey A. Sonn
- Subjects
Urology
- Published
- 2022
14. Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning
- Author
Ştefania L. Moroianu, Indrani Bhattacharya, Arun Seetharaman, Wei Shao, Christian A. Kunder, Avishkar Sharma, Pejman Ghanouni, Richard E. Fan, Geoffrey A. Sonn, and Mirabela Rusu
- Subjects
Cancer Research, Oncology, extraprostatic extension, computer-aided diagnosis, deep learning
- Abstract
The localization of extraprostatic extension (EPE), i.e., local spread of prostate cancer beyond the prostate capsular boundary, is important for risk stratification and surgical planning. However, the sensitivity of EPE detection by radiologists on MRI is low (57% on average). In this paper, we propose a method for computational detection of EPE on multiparametric MRI using deep learning. Ground truth labels of cancers and EPE were obtained in 123 patients (38 with EPE) by registering pre-surgical MRI with whole-mount digital histopathology images from radical prostatectomy. Our approach has two stages. First, we trained deep learning models using the MRI as input to generate cancer probability maps both inside and outside the prostate. Second, we built an image post-processing pipeline that generates predictions for EPE location based on the cancer probability maps and clinical knowledge. We used five-fold cross-validation to train our approach using data from 74 patients and tested it using data from an independent set of 49 patients. We compared two deep learning models for cancer detection: (i) UNet and (ii) the Correlated Signature Network for Indolent and Aggressive prostate cancer detection (CorrSigNIA). The best end-to-end model for EPE detection, which we call EPENet, was based on the CorrSigNIA cancer detection model. EPENet was successful at detecting cancers with extraprostatic extension, achieving a mean area under the receiver operating characteristic curve of 0.72 at the patient level. On the test set, EPENet had 80.0% sensitivity and 28.2% specificity at the patient level compared to 50.0% sensitivity and 76.9% specificity for the radiologists. To account for the spatial location of predictions during evaluation, we also computed results at the sextant level, where the prostate was divided into sextants according to the standard systematic 12-core biopsy procedure. At the sextant level, EPENet achieved a mean sensitivity of 61.1% and a mean specificity of 58.3%. Our approach has the potential to provide the location of extraprostatic extension using MRI alone, thus serving as an independent diagnostic aid to radiologists and facilitating treatment planning.
- Published
- 2022
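The second EPENet stage, turning cancer probability maps into an EPE call using clinical knowledge, is not spelled out in the abstract above; one plausible reading is to look for high cancer probability in a thin shell just outside the capsule. A toy sketch of that idea, in which the shell width, threshold, and masks are all hypothetical:

```python
import numpy as np
from scipy import ndimage

def detect_epe(cancer_prob: np.ndarray, prostate_mask: np.ndarray,
               margin_px: int = 3, thresh: float = 0.5) -> bool:
    """Flag EPE if the cancer probability map is high just outside the capsule."""
    dilated = ndimage.binary_dilation(prostate_mask, iterations=margin_px)
    shell = dilated & ~prostate_mask      # thin rim outside the gland boundary
    return bool((cancer_prob[shell] > thresh).any())

# Toy example: a Gaussian "lesion" centered on the capsule so it spills outside.
yy, xx = np.mgrid[:128, :128]
prostate = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
lesion = np.exp(-((yy - 64) ** 2 + (xx - 104) ** 2) / (2 * 8.0 ** 2))
print(detect_epe(lesion, prostate))  # True: probability extends beyond the capsule
```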
15. Integrating zonal priors and pathomic MRI biomarkers for improved aggressive prostate cancer detection on MRI
- Author
Indrani Bhattacharya, Wei Shao, Simon J. C. Soerensen, Richard E. Fan, Jeffrey B. Wang, Christian A. Kunder, Pejman Ghanouni, Geoffrey A. Sonn, and Mirabela Rusu
- Published
- 2022
16. Correlation of
- Author
Heying Duan, Lucia Baratto, Richard E. Fan, Simon John Christoph Soerensen, Tie Liang, Benjamin Inbeh Chung, Alan Eih Chih Thong, Harcharan Gill, Christian Kunder, Tanya Stoyanova, Mirabela Rusu, Andreas M. Loening, Pejman Ghanouni, Guido A. Davidzon, Farshad Moradi, Geoffrey A. Sonn, and Andrei Iagaru
- Subjects
Male, Receptors, Bombesin, Prostatectomy, Positron-Emission Tomography, Positron Emission Tomography Computed Tomography, Humans, Prostatic Neoplasms, Gallium Radioisotopes, Oligopeptides
- Published
- 2022
17. A review of artificial intelligence in prostate cancer detection on imaging
- Author
Indrani Bhattacharya, Yash S. Khandwala, Sulaiman Vesal, Wei Shao, Qianye Yang, Simon J.C. Soerensen, Richard E. Fan, Pejman Ghanouni, Christian A. Kunder, James D. Brooks, Yipeng Hu, Mirabela Rusu, and Geoffrey A. Sonn
- Subjects
Urology
- Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
- Published
- 2022
18. Domain Generalization for Prostate Segmentation in Transrectal Ultrasound Images: A Multi-center Study
- Author
Sulaiman Vesal, Iani Gayo, Indrani Bhattacharya, Shyam Natarajan, Leonard S. Marks, Dean C Barratt, Richard E. Fan, Yipeng Hu, Geoffrey A. Sonn, and Mirabela Rusu
- Subjects
Male, Radiological and Ultrasound Technology, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Prostate, Health Informatics, Computer Graphics and Computer-Aided Design, Magnetic Resonance Imaging, Pelvis, Humans, Radiology, Nuclear Medicine and imaging, Neural Networks, Computer, Ultrasonography
- Abstract
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., the drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0 ± 0.03 and a Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0 ± 0.03; HD95: 3.7 mm and Dice: 82.0 ± 0.03; HD95: 7.1 mm).
Comment: Accepted to the journal Medical Image Analysis (MedIA)
- Published
- 2022
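The knowledge distillation loss mentioned above penalizes the finetuned (student) network for drifting from the original (teacher) network's predictions, which is what preserves previously learned knowledge. A minimal PyTorch sketch; tensor shapes, class count, and temperature are illustrative, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Per-pixel two-class segmentation logits for a batch of 4 ultrasound slices.
student = torch.randn(4, 2, 64, 64, requires_grad=True)  # finetuned network
teacher = torch.randn(4, 2, 64, 64)                      # frozen original network
loss = distillation_loss(student, teacher)
loss.backward()  # combined in practice with the supervised segmentation loss
print(float(loss))
```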
19. Collaborative Quantization Embeddings for Intra-subject Prostate MR Image Registration
- Author
Ziyi Shen, Qianye Yang, Yuming Shen, Francesco Giganti, Vasilis Stavrinides, Richard Fan, Caroline Moore, Mirabela Rusu, Geoffrey Sonn, Philip Torr, Dean Barratt, and Yipeng Hu
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV)
- Abstract
Image registration is useful for quantifying morphological changes in longitudinal MR images from prostate cancer patients. This paper describes a development in improving the learning-based registration algorithms for this challenging clinical application, often with highly variable yet limited training data. First, we report that the latent space can be clustered into a much lower dimensional space than that commonly found as bottleneck features at the deep layer of a trained registration network. Based on this observation, we propose a hierarchical quantization method, discretizing the learned feature vectors using a jointly-trained dictionary with a constrained size, in order to improve the generalisation of the registration networks. Furthermore, a novel collaborative dictionary is independently optimised to incorporate additional prior information, such as the segmentation of the gland or other regions of interest, in the latent quantized space. Based on 216 real clinical images from 86 prostate cancer patients, we show the efficacy of both the designed components. Improved registration accuracy was obtained with statistical significance, in terms of both Dice on gland and target registration error on corresponding landmarks; the latter achieved 5.46 mm, an improvement of 28.7% from the baseline without quantization. Experimental results also show that the difference in performance was indeed minimised between training and testing data.
Comment: preprint version, accepted for MICCAI 2022 (25th International Conference on Medical Image Computing and Computer Assisted Intervention)
- Published
- 2022
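The core building block above, snapping latent feature vectors to a small jointly-trained dictionary, is a vector quantization step. A minimal sketch of that step with a straight-through gradient; the codebook size, dimensions, and the omitted dictionary-learning losses are assumptions:

```python
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor):
    """z: (N, D) latent vectors; codebook: (K, D). Returns quantized z and indices."""
    # Squared Euclidean distance from every latent vector to every codebook entry.
    d = (z ** 2).sum(1, keepdim=True) - 2 * z @ codebook.t() + (codebook ** 2).sum(1)
    idx = d.argmin(dim=1)              # nearest dictionary entry per vector
    zq = codebook[idx]
    # Straight-through estimator: forward pass uses zq, gradients flow to z.
    return z + (zq - z).detach(), idx

codebook = torch.randn(32, 16)         # K=32 entries of dimension D=16
z = torch.randn(100, 16, requires_grad=True)
zq, idx = quantize(z, codebook)
print(zq.shape, int(idx.unique().numel()), "codebook entries used")
```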
20. Image quality assessment for machine learning tasks using meta-reinforcement learning
- Author
Shaheer U. Saeed, Yunguan Fu, Vasilis Stavrinides, Zachary M.C. Baum, Qianye Yang, Mirabela Rusu, Richard E. Fan, Geoffrey A. Sonn, J. Alison Noble, Dean C. Barratt, and Yipeng Hu
- Subjects
Male, Radiological and Ultrasound Technology, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Health Informatics, Computer Graphics and Computer-Aided Design, Machine Learning, Image Processing, Computer-Assisted, Humans, Radiology, Nuclear Medicine and imaging, Neural Networks, Computer, Algorithms, Ultrasonography
- Abstract
In this paper, we consider image quality assessment (IQA) as a measure of how amenable images are to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability of both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
Comment: Accepted to Medical Image Analysis; final published version available at https://doi.org/10.1016/j.media.2022.102427
- Published
- 2022
21. The Learn2Reg 2021 MICCAI Grand Challenge (PIMed Team)
- Author
Wei Shao, Sulaiman Vesal, David Lim, Cynthia Li, Negar Golestani, Ahmed Alsinan, Richard Fan, Geoffrey Sonn, and Mirabela Rusu
- Published
- 2022
22. Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning
- Author
Alessa Hering, Lasse Hansen, Tony C. W. Mok, Albert C. S. Chung, Hanna Siebert, Stephanie Hager, Annkristin Lange, Sven Kuckertz, Stefan Heldmann, Wei Shao, Sulaiman Vesal, Mirabela Rusu, Geoffrey Sonn, Theo Estienne, Maria Vakalopoulou, Luyi Han, Yunzhi Huang, Pew-Thian Yap, Mikael Brudfors, Yael Balbastre, Samuel Joutard, Marc Modat, Gal Lifshitz, Dan Raviv, Jinxin Lv, Qiang Li, Vincent Jaouen, Dimitris Visvikis, Constance Fourcade, Mathieu Rubeaux, Wentao Pan, Zhe Xu, Bailiang Jian, Francesca De Benetti, Marek Wodzinski, Niklas Gunnarsson, Jens Sjolund, Daniel Grzech, Huaqi Qiu, Zeju Li, Alexander Thorley, Jinming Duan, Christoph Grosbrohmer, Andrew Hoopes, Ingerid Reinertsen, Yiming Xiao, Bennett Landman, Yuankai Huo, Keelin Murphy, Nikolas Lessmann, Bram van Ginneken, Adrian V. Dalca, and Mattias P. Heinrich
- Subjects
Radiological and Ultrasound Technology, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Medical Image Processing, Computer Science Applications, Electrical and Electronic Engineering, Software
- Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
- Published
- 2021
23. Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework
- Author
Nikola C. Teslovich, Simon John Christoph Soerensen, Christian A. Kunder, Wei Shao, Mirabela Rusu, James D. Brooks, Leo C Chen, Richard E. Fan, Pejman Ghanouni, Jeffrey B. Wang, Geoffrey A. Sonn, Arun Seetharaman, and Indrani Bhattacharya
- Subjects
Male, Pathology, Health Informatics, Prostate cancer, Deep Learning, Prostate, Biopsy, Humans, Radiology, Nuclear Medicine and imaging, Radiation treatment planning, Prostatectomy, Radiological and Ultrasound Technology, Cancer, Prostatic Neoplasms, Magnetic Resonance Imaging, Computer Graphics and Computer-Aided Design, Computer-aided diagnosis, Computer Vision and Pattern Recognition
- Abstract
Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern ≥ 4) and indolent (Gleason Pattern = 3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men who underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets including 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
- Published
- 2021
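The "correlated features between radiology and pathology images" in CorrSigNIA suggest a canonical-correlation-style analysis between registered feature sets. A toy sketch of that idea, with random matrices standing in for per-pixel MRI and histopathology features; the dimensions and data are fabricated for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d_mri, d_path = 2000, 12, 20
shared = rng.normal(size=(n, 4))  # latent signal both modalities partially share
X_mri = shared @ rng.normal(size=(4, d_mri)) + 0.5 * rng.normal(size=(n, d_mri))
X_path = shared @ rng.normal(size=(4, d_path)) + 0.5 * rng.normal(size=(n, d_path))

# Canonical correlation analysis finds maximally correlated projection pairs;
# the MRI-side projections could then condition an MRI-only detection network.
cca = CCA(n_components=4).fit(X_mri, X_path)
U, V = cca.transform(X_mri, X_path)
print([round(float(np.corrcoef(U[:, i], V[:, i])[0, 1]), 2) for i in range(4)])
```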
24. MP43-03 MACHINE LEARNING IMPROVES DETECTION OF CLINICALLY SIGNIFICANT PROSTATE CANCER IN EQUIVOCAL LESIONS ON MPMRI
- Author
Geoffrey A. Sonn, Maxime D. Rappaport, Richard E. Fan, Mirabela Rusu, and Indrani Bhattacharya
- Subjects
Prostate cancer, Prostate, Urology, Multiparametric MRI, Radiology
- Abstract
INTRODUCTION AND OBJECTIVE: PIRADS version 2.1 is a standardized method for evaluating prostate multiparametric MRI (mpMRI). While a PIRADS score of 5 is highly specific for clinically significant p...
- Published
- 2021
25. PD56-03 EXTERNAL VALIDATION OF AN ARTIFICIAL INTELLIGENCE ALGORITHM FOR PROSTATE CANCER GLEASON GRADING AND TUMOR QUANTIFICATION
- Author
Mirabela Rusu, Christian A. Kunder, John P. Higgins, Hriday P. Bhambhvani, Bogdana Schmidt, Chia Sui Kao, Geoffrey A. Sonn, and Richard E. Fan
- Subjects
Oncology, Prostate cancer, Urology, Internal medicine, External validation, Gleason grading
- Published
- 2021
26. PD10-01 AUTOMATED IDENTIFICATION OF AGGRESSIVE PROSTATE CANCER USING AN MRI-PATHOLOGY CORRELATION AND DEEP LEARNING FRAMEWORK
- Author
Arun Seetharaman, Wei Shao, Indrani Bhattacharya, Leo C Chen, Jeffrey B. Wang, Richard E. Fan, Christian A. Kunder, Pejman Ghanouni, James D. Brooks, Simon John Christoph Soerensen, Nikola C. Teslovich, Geoffrey A. Sonn, and Mirabela Rusu
- Subjects
Oncology, Urology, Deep learning, nutritional and metabolic diseases, nervous system diseases, Correlation, Prostate cancer, Internal medicine, Identification (biology), Artificial intelligence
- Abstract
INTRODUCTION AND OBJECTIVE: MRI is a powerful tool for prostate cancer diagnosis, yet interobserver variability remains problematic and false positive and false negative findings are common. Trainin...
- Published
- 2021
27. PD56-11 RAW MICRO-ULTRASOUND TISSUE CHARACTERIZATION USING CONVOLUTIONAL NEURAL NETWORKS TO DIFFERENTIATE BENIGN TISSUE FROM CLINICALLY SIGNIFICANT PROSTATE CANCER
- Author
Mirabela Rusu, Geoffrey A. Sonn, Brian Wodlinger, Aya Kamaya, Richard E. Fan, and Ahmed El Kaffas
- Subjects
Prostate cancer, Artificial neural network, Urology, Tissue characterization, Micro ultrasound, Biomedical engineering, Convolution
- Published
- 2021
28. MP46-02 DETAILED ANALYSIS OF MRI CONCORDANCE WITH PROSTATECTOMY HISTOPATHOLOGY USING DEEP LEARNING-BASED DIGITAL PATHOLOGY
- Author
Bogdana Schmidt, Mirabela Rusu, Richard E. Fan, Lukas Hockman, Indrani Bhattacharya, and Geoffrey A. Sonn
- Subjects
Prostatectomy, Urology, Concordance, Digital pathology, Cancer detection, Focal therapy, Prostate cancer, Prostate, Histopathology, Radiology
- Abstract
INTRODUCTION AND OBJECTIVE: Focal therapy for prostate cancer relies heavily on prostate MRI, highlighting the weaknesses of MRI alone for cancer detection. Prior studies comparing MRI to pathology ...
- Published
- 2021
29. MP43-02 LESSONS LEARNED IN APPLYING DEEP LEARNING TO FACILITATE PROSTATE MR-US FUSION BIOPSY WORKFLOW
- Author
Simon John Christoph Soerensen, Arun Seetharaman, Michael Borre, Indrani Bhattacharya, Geoffrey A. Sonn, Richard E. Fan, Wei Shao, Katherine J. To'o, Alan Thong, and Mirabela Rusu
- Subjects
Workflow, Prostate, Urology, Deep learning, Medical physics, Artificial intelligence, Fusion Biopsy
- Published
- 2021
30. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging
- Author
Arun Seetharaman, Pejman Ghanouni, Jeffrey B. Wang, Richard E. Fan, Mirabela Rusu, Wei Shao, Indrani Bhattacharya, Christian A. Kunder, Geoffrey A. Sonn, Leo C Chen, Katherine J. To'o, Nikola C. Teslovich, Simon John Christoph Soerensen, and James D. Brooks
- Subjects
Male, Prostate cancer, Prostate, Quantitative Imaging and Image Processing, Biopsy, Humans, Prostatectomy, Receiver operating characteristic, Prostatic Neoplasms, Cancer, deep learning, Magnetic Resonance Imaging, General Medicine, Gleason grading, Histopathology, Radiology, aggressive vs. indolent cancer, Neoplasm Grading, prostate MRI
- Abstract
Purpose: While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. Methods: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. Results: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. Conclusions: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
- Published
- 2021
31. 3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction
- Author
Christian A. Kunder, Anugayathri Jawahar, Wei Shao, Richard E. Fan, Simon John Christoph Soerensen, Pejman Ghanouni, Jeffrey B. Wang, Rewa Sood, Nikola C. Teslovich, Mirabela Rusu, Nikhil Madhuripan, Geoffrey A. Sonn, and James D. Brooks
- Subjects
Male, Generative adversarial networks, Computer science, Histological sections, Health Informatics, Prostate, Humans, Radiology, Nuclear Medicine and imaging, Volume reconstruction, Projection (set theory), 3D registration, Radiological and Ultrasound Technology, Prostatectomy, Prostatic Neoplasms, Super-resolution registration, Magnetic Resonance Imaging, Computer Graphics and Computer-Aided Design, Superresolution, Mapping cancer from histopathology images onto MRI, Radiology pathology fusion, Histopathology, Computer Vision and Pattern Recognition, Radiology, Interpolation
- Abstract
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
- Published
- 2021
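The PSNR comparison above (32.15 dB vs 30.16 dB) is a standard computation. A self-contained sketch on toy volumes; the noise levels merely mimic a better and a worse reconstruction:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, peak taken from the reference volume."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 20 * np.log10(reference.max()) - 10 * np.log10(mse)

rng = np.random.default_rng(0)
clinical = rng.uniform(0, 255, size=(32, 64, 64))        # stand-in 3D MRI
super_res = clinical + rng.normal(0, 6, clinical.shape)  # "better" reconstruction
bspline = clinical + rng.normal(0, 8, clinical.shape)    # "worse" reconstruction
print(f"super-resolution-like: {psnr(clinical, super_res):.2f} dB")
print(f"BSpline-like:          {psnr(clinical, bspline):.2f} dB")
```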
32. Intensity normalization of prostate MRIs using conditional generative adversarial networks for cancer detection
- Author
Geoffrey A. Sonn, Arun Seetharaman, Mirabela Rusu, Stefania Moroianu, Indrani Bhattacharya, and Thomas DeSilvio
- Subjects
Scanner, Computer science, Deep learning, Pattern recognition, Magnetic resonance imaging, Cancer detection, Edge detection, Intensity (physics), Prostate cancer, Prostate, Artificial intelligence
- Abstract
Magnetic Resonance Imaging (MRI) is increasingly used to localize prostate cancer, but the subtle features of cancer vs. normal tissue render the interpretation of MRI challenging. Computational approaches have been proposed to detect prostate cancer, yet variation in intensity distribution across different scanners, and even on the same scanner, poses significant challenges to image analysis via computational tools, such as deep learning. In this study, we developed a conditional generative adversarial network (GAN) to normalize intensity distributions on prostate MRI. We used three methods to evaluate our GAN normalization. First, we qualitatively compared the intensity distributions of GAN-normalized images to those of statistically normalized images. Second, we visually examined the GAN-normalized images to ensure the appearance of the prostate and other structures was preserved. Finally, we quantitatively evaluated the performance of deep learning holistically nested edge detection (HED) networks in identifying prostate cancer on MRI when using raw, statistically normalized, and GAN-normalized images. We found the detection network trained on GAN-normalized images achieved similar accuracy and area under the curve (AUC) scores when compared to the detection networks trained on raw and statistically normalized images. Conditional GANs may hence be an effective tool for normalizing intensity distributions on MRI and can be utilized to train downstream deep learning tasks.
- Published
- 2021
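For contrast with the GAN, the "statistically normalized" baseline above can be as simple as percentile clipping followed by z-scoring, which removes gross scanner-to-scanner intensity shifts. A sketch of such a baseline; the percentiles and synthetic scans are assumptions, since the entry does not specify the exact statistical scheme:

```python
import numpy as np

def normalize_statistically(volume: np.ndarray, p_lo=1, p_hi=99) -> np.ndarray:
    """Clip intensity outliers, then standardize to zero mean and unit variance."""
    lo, hi = np.percentile(volume, [p_lo, p_hi])
    v = np.clip(volume, lo, hi)
    return (v - v.mean()) / (v.std() + 1e-8)

rng = np.random.default_rng(0)
scan_a = rng.gamma(2.0, 150.0, size=(16, 128, 128))  # two scanners with very
scan_b = rng.gamma(2.0, 450.0, size=(16, 128, 128))  # different intensity scales
for scan in (scan_a, scan_b):
    v = normalize_statistically(scan)
    print(f"mean {v.mean():+.3f}, std {v.std():.3f}")  # comparable after normalizing
```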
33. Detecting invasive breast carcinoma on dynamic contrast-enhanced MRI
- Author
Mirabela Rusu and Stefania Moroianu
- Subjects
Ground truth, Deep learning, Surgical pathology, Lesion, Breast cancer, Minimum bounding box, Dynamic contrast-enhanced MRI, Artificial intelligence, Radiology, Transfer of learning
- Abstract
Deep learning models have the potential to improve prediction of the presence of invasive breast cancer on MR images. Here we present a transfer learning framework for classifying dynamic contrast-enhanced MR images into two classes: those that contain invasive breast carcinoma and those that are noninvasive (including benign findings and indolent cancers). We built and trained several models based on a pre-trained VGG16 network and found that fine-tuning the last convolutional block is the best strategy for our small-data scenario. Our model was trained and evaluated using 81 female patients who had a pre-operative MRI followed by surgery. All lesions have ground truth labels from the surgical pathology reports. We used a bounding box to generate cropped images centered on the lesion and extracted multiple slices per lesion. Our network achieved an AUC of 0.83±0.05, sensitivity of 0.83±0.16, and specificity of 0.71±0.11 in predicting the presence of invasive cancer in the breast. We compared our results with state-of-the-art methods and found that our model is more accurate in distinguishing invasive from noninvasive lesions. Finally, the visual inspection of the class activation maps allowed us to better understand the decision process of our deep learning classifiers.
- Published
- 2021
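The fine-tuning strategy found to work best above, freezing a pre-trained VGG16 except its last convolutional block and training a new binary head, looks roughly like the following in PyTorch/torchvision. The layer indices follow torchvision's VGG16; the head size, optimizer, and data are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                 # freeze the whole backbone...
for p in model.features[24:].parameters():
    p.requires_grad = True                  # ...except the last conv block

model.classifier = nn.Sequential(           # new head: invasive vs. noninvasive
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(256, 1),
)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
x = torch.randn(2, 3, 224, 224)             # cropped, lesion-centered slices
loss = nn.BCEWithLogitsLoss()(model(x), torch.tensor([[1.0], [0.0]]))
loss.backward()
optimizer.step()
print(float(loss))
```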
34. Clinically significant prostate cancer detection on MRI with self-supervised learning using image context restoration
- Author
Richard E. Fan, Pejman Ghanouni, Arun Seetharaman, Amir Bolous, Simon John Christoph Soerensen, Mirabela Rusu, Leo C Chen, Indrani Bhattacharya, Geoffrey A. Sonn, Maciej A. Mazurowski, and Karen Drukker
- Subjects
Self-supervised learning, Computer science, Deep learning, Pretraining, Cancer, Context (language use), Pattern recognition, Image context restoration, Convolutional neural network, U-Net, Prostate cancer, Similarity (network science), Prostate, Test set, Artificial intelligence, HED, MRI, Prostate cancer detection, deep learning
- Abstract
Prostate MRI is increasingly used to help localize and target prostate cancer. Yet, the subtle differences in MRI appearance of cancer compared to normal tissue render MRI interpretation challenging. Deep learning methods hold promise in automating the detection of prostate cancer on MRI; however, such approaches require large, well-curated datasets. Although existing methods that employed fully convolutional neural networks have shown promising results, the lack of labeled data can reduce the generalization of these models. Self-supervised learning provides a promising avenue to learn semantic features from unlabeled data. In this study, we apply the self-supervised strategy of image context restoration to detect prostate cancer on MRI and show this improves model performance for two different architectures (U-Net and Holistically Nested Edge Detector) compared to their purely supervised counterparts. We train our models on MRI exams from 381 men with biopsy-confirmed cancer. Our study showed self-supervised models outperform randomly initialized models on an independent test set in a variety of training settings. We performed three experiments, where we trained with 5%, 25%, and 100% of our labeled data, and observed that the U-Net-based pre-training and downstream task outperformed other models. We observed the best improvements when training with 5% of the labeled training data: our self-supervised U-Nets improve per-pixel Area Under the Curve (AUC, 0.71 vs 0.83) and Dice similarity coefficient (0.19 vs 0.53). When training with 100% of the data, our U-Net-based pretraining and detection achieved an AUC of 0.85 and a Dice similarity coefficient of 0.57.
- Published
- 2021
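Image context restoration, the pretext task used above, corrupts an unlabeled image by swapping random patch pairs and trains a network to undo the damage. A sketch of the corruption step only; the patch size and swap count are arbitrary choices here, and the restoration network itself is omitted:

```python
import numpy as np

def swap_patches(img, n_swaps=20, patch=16, rng=None):
    """Return a copy of img with n_swaps random patch pairs exchanged."""
    rng = rng if rng is not None else np.random.default_rng()
    out = img.copy()
    h, w = img.shape
    for _ in range(n_swaps):
        y1, y2 = rng.integers(0, h - patch, size=2)
        x1, x2 = rng.integers(0, w - patch, size=2)
        a = out[y1:y1 + patch, x1:x1 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = out[y2:y2 + patch, x2:x2 + patch]
        out[y2:y2 + patch, x2:x2 + patch] = a
    return out

slice_ = np.random.default_rng(0).random((128, 128)).astype(np.float32)
corrupted = swap_patches(slice_, rng=np.random.default_rng(1))
# Pretraining objective: minimize, e.g., L2 between network(corrupted) and slice_.
print(float(np.abs(corrupted - slice_).mean()))
```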
35. Weakly Supervised Registration of Prostate MRI and Histopathology Images
- Author
Wei Shao, Jeffrey B. Wang, Simon John Christoph Soerensen, Indrani Bhattacharya, Richard E. Fan, Geoffrey A. Sonn, Pejman Ghanouni, Mirabela Rusu, James D. Brooks, and Christian A. Kunder
- Subjects
Computer science, Deep learning, Normal tissue, Image registration, Early detection, Pattern recognition, Prostate cancer, Prostate, Histopathology, Affine transformation, Artificial intelligence
- Abstract
The interpretation of prostate MRI suffers from low agreement across radiologists due to the subtle differences between cancer and normal tissue. Image registration addresses this issue by accurately mapping the ground-truth cancer labels from surgical histopathology images onto MRI. Cancer labels achieved by image registration can be used to improve radiologists’ interpretation of MRI by training deep learning models for early detection of prostate cancer. A major limitation of current automated registration approaches is that they require manual prostate segmentations, which is a time-consuming task, prone to errors. This paper presents a weakly supervised approach for affine and deformable registration of MRI and histopathology images without requiring prostate segmentations. We used manual prostate segmentations and mono-modal synthetic image pairs to train our registration networks to align prostate boundaries and local prostate features. Although prostate segmentations were used during the training of the network, such segmentations were not needed when registering unseen images at inference time. We trained and validated our registration network with 135 and 10 patients from an internal cohort, respectively. We tested the performance of our method using 16 patients from the internal cohort and 22 patients from an external cohort. The results show that our weakly supervised method has achieved significantly higher registration accuracy than a state-of-the-art method run without prostate segmentations. Our deep learning framework will ease the registration of MRI and histopathology images by obviating the need for prostate segmentations.
- Published
- 2021
36. Adaptable Image Quality Assessment Using Meta-Reinforcement Learning of Task Amenability
- Author
Yipeng Hu, Qianye Yang, J. Alison Noble, Dean C. Barratt, Zachary M. C. Baum, Geoffrey A. Sonn, Vasilis Stavrinides, Richard E. Fan, Shaheer U. Saeed, Mirabela Rusu, and Yunguan Fu
- Subjects
Contextual image classification, Artificial neural network, Image quality, Computer science, Deep learning, Machine learning, Task (project management), Reinforcement learning, Markov decision process, Artificial intelligence, Transfer of learning
- Abstract
The performance of many medical image analysis tasks is strongly associated with image data quality. When developing modern deep learning algorithms, rather than relying on subjective (human-based) image quality assessment (IQA), task amenability potentially provides an objective measure of task-specific image quality. To predict task amenability, an IQA agent is trained using reinforcement learning (RL) with a simultaneously optimised task predictor, such as a classification or segmentation neural network. In this work, we develop transfer learning or adaptation strategies to increase the adaptability of both the IQA agent and the task predictor so that they are less dependent on high-quality, expert-labelled training data. The proposed transfer learning strategy re-formulates the original RL problem for task amenability in a meta-reinforcement learning (meta-RL) framework. The resulting algorithm facilitates efficient adaptation of the agent to different definitions of image quality, each with its own Markov decision process environment including different images, labels and an adaptable task predictor. Our work demonstrates that the IQA agents pre-trained on non-expert task labels can be adapted to predict task amenability as defined by expert task labels, using only a small set of expert labels. Using 6644 clinical ultrasound images from 249 prostate cancer patients, our results for image classification and segmentation tasks show that the proposed IQA method can be adapted using data with as few as 19.7% and 29.6% expert-reviewed consensus labels, respectively, and still achieve comparable IQA and task performance, which would otherwise require a training dataset with 100% expert labels.
- Published
- 2021
37. ProGNet: Prostate gland segmentation on MRI with deep learning
- Author
Arun Seetharaman, Indrani Bhattacharya, Wei Shao, Leo C Chen, Michael Borre, Mirabela Rusu, Benjamin I. Chung, Geoffrey A. Sonn, Katherine J. To'o, Richard E. Fan, Simon John Christoph Soerensen, Ivana Isgum, and Bennett A. Landman
- Subjects
Prostate biopsy, Computer science, Deep learning, Ultrasound, Context (language use), Pattern recognition, Magnetic resonance imaging, Holistically-Nested Edge Detector, Prostate Segmentation, U-Net, Prostate cancer, Prostate, Segmentation, Artificial intelligence
- Abstract
The use of magnetic resonance-ultrasound fusion targeted biopsy improves the diagnosis of aggressive prostate cancer. Fusion of ultrasound and magnetic resonance images (MRI) requires accurate prostate segmentations. In this paper, we developed a 2.5-dimensional deep learning model, ProGNet, to segment the prostate on T2-weighted MRI. ProGNet is an optimized U-Net model that weighs three adjacent slices in each MRI sequence to segment the prostate in a 2.5D context. We trained ProGNet on 529 cases where experts annotated the whole gland (WG) on axial T2-weighted MRI prior to targeted prostate biopsy. In 132 cases, experts also annotated the central gland (CG) on MRI. After five-fold cross-validation, we found that for WG segmentation, ProGNet had a mean Dice similarity coefficient (DSC) of 0.91±0.02, sensitivity of 0.89±0.03, specificity of 0.97±0.00, and an accuracy of 0.95±0.01. For CG segmentation, ProGNet achieved a mean DSC of 0.86±0.01, sensitivity of 0.84±0.03, specificity of 0.99±0.01, and an accuracy of 0.96±0.01. We then tested the generalizability of the model on the 60-case NCI-ISBI 2013 challenge dataset and on a local, independent 61-case test set. We achieved DSCs of 0.81±0.02 and 0.72±0.02 for WG and CG segmentation on the NCI-ISBI 2013 challenge dataset, and 0.83±0.01 and 0.75±0.01 for WG and CG segmentation on the local dataset. Model performance was excellent and outperformed state-of-the-art U-Net and holistically-nested edge detector (HED) networks on all three datasets.
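A minimal sketch of the 2.5D input scheme described above, under the assumption that the three adjacent T2-weighted slices are stacked as input channels so the network sees through-plane context while predicting the middle slice's mask:

```python
import numpy as np

def make_25d_stack(volume: np.ndarray, i: int) -> np.ndarray:
    """volume: (S, H, W) T2w slices; returns a (3, H, W) input centered on slice i."""
    lo, hi = max(i - 1, 0), min(i + 1, volume.shape[0] - 1)  # clamp at volume edges
    return np.stack([volume[lo], volume[i], volume[hi]], axis=0)

# Example: build inputs for every slice of a 24-slice volume.
vol = np.random.rand(24, 256, 256).astype(np.float32)
batch = np.stack([make_25d_stack(vol, i) for i in range(vol.shape[0])])
print(batch.shape)   # (24, 3, 256, 256)
```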
- Published
- 2021
- Full Text
- View/download PDF
38. MP70-01 OPTIMIZING ABLATION MARGINS FOR PROSTATE CANCER FOCAL THERAPY
- Author
-
Mirabela Rusu, Richard E. Fan, Leo C Chen, Pejman Ghanouni, Nikola C. Teslovich, and Geoffrey A. Sonn
- Subjects
medicine.medical_specialty ,integumentary system ,Prostatectomy ,business.industry ,Urology ,medicine.medical_treatment ,Ablation ,medicine.disease ,Focal therapy ,Prostate cancer ,medicine ,business ,Benign prostate - Abstract
INTRODUCTION AND OBJECTIVE: Image-guided destruction of individual prostate cancer foci while sparing benign prostate tissue is emerging as a viable alternative to radical prostatectomy (RP) or radi...
- Published
- 2020
- Full Text
- View/download PDF
39. Co-Registration of ex vivo Surgical Histopathology and in vivo T2-weighted MRI of the Prostate via multi-scale spectral embedding representation
- Author
-
Satish Viswanath, Lin Li, Anant Madabhushi, Shivani Pahwa, Mirabela Rusu, Jay Gollamudi, and Gregory Penzias
- Subjects
Male ,medicine.medical_specialty ,Similarity (geometry) ,Computer science ,Science ,Scale-invariant feature transform ,Scale (descriptive set theory) ,02 engineering and technology ,Article ,030218 nuclear medicine & medical imaging ,Set (abstract data type) ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Image Processing, Computer-Assisted ,Humans ,Computer vision ,Representation (mathematics) ,Multidisciplinary ,business.industry ,Prostate ,Pattern recognition ,Mutual information ,Independent component analysis ,Magnetic Resonance Imaging ,Embedding ,Medicine ,020201 artificial intelligence & image processing ,Histopathology ,Artificial intelligence ,business ,Algorithms - Abstract
Multi-modal image co-registration via optimizing mutual information (MI) is based on the assumption that the intensity distributions of multi-modal images follow a consistent relationship. However, images with a substantial difference in appearance violate this assumption, so MI based on image intensity alone may be inadequate to drive similarity-based co-registration. To address this issue, we introduce a novel approach for multi-modal co-registration called Multi-scale Spectral Embedding Registration (MSERg). MSERg involves the construction of multi-scale spectral embedding (SE) representations from multi-modal images via texture feature extraction, scale selection, independent component analysis (ICA) and SE to create orthogonal representations that decrease the dissimilarity between the fixed and moving images and thereby facilitate better co-registration. To validate the MSERg method, we aligned 45 pairs of in vivo prostate MRI and corresponding ex vivo histopathology images. The dataset was split into a learning set and a testing set. In the learning set, length scales of 5 × 5, 7 × 7 and 17 × 17 were selected. In the independent testing set, we compared MSERg with intensity-based registration, multi-attribute combined mutual information (MACMI) registration and scale-invariant feature transform (SIFT) flow registration. Our results suggest that the multi-scale SE representations generated by MSERg are more appropriate for radiology-pathology co-registration than intensity alone.
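For clarity, a sketch of the mutual-information similarity that intensity-based co-registration optimizes, computed from a joint histogram; MSERg replaces raw intensities with spectral-embedding representations before any such comparison. The images here are synthetic placeholders.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """MI between two images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)           # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of image b
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Identical images share maximal MI; unrelated noise shares almost none.
img = np.random.rand(128, 128)
print(mutual_information(img, img), mutual_information(img, np.random.rand(128, 128)))
```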
- Published
- 2017
40. Co-registration of pre-operative CT with ex vivo surgically excised ground glass nodules to define spatial extent of invasive adenocarcinoma on in vivo imaging: a proof-of-concept study
- Author
-
Frank J. Jacono, Philip A. Linden, Rajat Thawani, Mirabela Rusu, Michael Yang, Anant Madabhushi, Prabhakar Rajiah, Christopher Donatelli, and Robert C. Gilkeson
- Subjects
Adult ,Male ,medicine.medical_specialty ,Lung Neoplasms ,Adenocarcinoma ,Proof of Concept Study ,Article ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,In vivo ,medicine ,Humans ,Neoplasm Invasiveness ,Radiology, Nuclear Medicine and imaging ,Aged ,Neuroradiology ,Aged, 80 and over ,medicine.diagnostic_test ,business.industry ,Ultrasound ,Histology ,Interventional radiology ,Nodule (medicine) ,General Medicine ,Middle Aged ,medicine.disease ,030220 oncology & carcinogenesis ,Multiple Pulmonary Nodules ,Female ,Radiology ,medicine.symptom ,Tomography, X-Ray Computed ,business ,Preclinical imaging - Abstract
To develop an approach for radiology-pathology fusion of ex vivo histology of surgically excised pulmonary nodules with pre-operative CT, to radiologically map spatial extent of the invasive adenocarcinomatous component of the nodule. Six subjects (age: 75 ± 11 years) with pre-operative CT and surgically excised ground-glass nodules (size: 22.5 ± 5.1 mm) with a significant invasive adenocarcinomatous component (>5 mm) were included. The pathologist outlined disease extent on digitized histology specimens; two radiologists and a pulmonary critical care physician delineated the entire nodule on CT (in-plane resolution
- Published
- 2017
- Full Text
- View/download PDF
41. CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from Radiology and Pathology Images for Improved Computer Aided Diagnosis
- Author
-
Arun Seetharaman, Wei Shao, Rewa Sood, Mirabela Rusu, Pejman Ghanouni, Indrani Bhattacharya, Nikola C. Teslovich, Richard E. Fan, Christian A. Kunder, Simon John Christoph Soerensen, Jeffrey B. Wang, James D. Brooks, and Geoffrey A. Sonn
- Subjects
medicine.medical_specialty ,Pathology ,medicine.diagnostic_test ,Prostatectomy ,business.industry ,medicine.medical_treatment ,030232 urology & nephrology ,Cancer ,Magnetic resonance imaging ,medicine.disease ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,0302 clinical medicine ,medicine.anatomical_structure ,Prostate ,Computer-aided diagnosis ,medicine ,Radiology ,Medical diagnosis ,business ,Feature learning - Abstract
Magnetic Resonance Imaging (MRI) is widely used for screening and staging prostate cancer. However, many prostate cancers have subtle features which are not easily identifiable on MRI, resulting in missed diagnoses and alarming variability in radiologist interpretation. Machine learning models have been developed in an effort to improve cancer identification, but current models localize cancer using MRI-derived features, while failing to consider the disease pathology characteristics observed on resected tissue. In this paper, we propose CorrSigNet, an automated two-step model that localizes prostate cancer on MRI by capturing the pathology features of cancer. First, the model learns MRI signatures of cancer that are correlated with corresponding histopathology features using Common Representation Learning. Second, the model uses the learned correlated MRI features to train a Convolutional Neural Network to localize prostate cancer. The histopathology images are used only in the first step to learn the correlated features. Once learned, these correlated features can be extracted from MRI of new patients (without histopathology or surgery) to localize cancer. We trained and validated our framework on a unique dataset of 75 patients with 806 slices who underwent MRI followed by prostatectomy surgery. We tested our method on an independent test set of 20 prostatectomy patients (139 slices, 24 cancerous lesions, 1.12M pixels) and achieved a per-pixel sensitivity of 0.81, specificity of 0.71, AUC of 0.86 and a per-lesion AUC of 0.96 ± 0.07, outperforming the current state-of-the-art accuracy in predicting prostate cancer using MRI.
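An illustrative sketch of step one, not the published model: canonical correlation analysis (CCA) is one standard instance of common representation learning between paired modalities, and the learned MRI projection can then be applied to new patients without histopathology. The feature matrices and dimensions below are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

mri_feats = np.random.rand(200, 64)     # per-patch MRI features (placeholder)
hist_feats = np.random.rand(200, 128)   # matching histopathology features (placeholder)

cca = CCA(n_components=8)
cca.fit(mri_feats, hist_feats)          # training requires paired radiology-pathology data

new_mri = np.random.rand(50, 64)
correlated = cca.transform(new_mri)     # inference uses MRI only; feeds the CNN in step two
print(correlated.shape)                 # (50, 8)
```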
- Published
- 2020
- Full Text
- View/download PDF
42. Tumor cell phenotype and heterogeneity differences in IDH1 mutant vs wild-type gliomas
- Author
-
Jeff Kiefer, Mirabela Rusu, Andrew E Sloan, Sanghee Cho, Michael D. Prados, Anup Sood, Fiona Ginty, Shannon Schyberg, Leo J. Wolansky, Sean Richard Dinn, Jill S. Barnholtz-Sloan, Rebecca F. Halperin, Joanna J. Phillips, Winnie S. Liang, Jonathan Adkins, Sara Nasser, Sara A. Byron, Michael E. Berens, Maria I. Zavodszky, Karen Devine, Quinn T. Ostrom, Sarah J. Nelson, Elizabeth McDonough, Lori Cuyugan, Marta Couce, Seungchan Kim, and John Frederick Graf
- Subjects
IDH1 ,medicine.diagnostic_test ,Angiogenesis ,Cell ,Wild type ,Cancer ,Biology ,medicine.disease ,Immunofluorescence ,Phenotype ,medicine.anatomical_structure ,Glioma ,medicine ,Cancer research - Abstract
Glioma is recognized to be a highly heterogeneous CNS malignancy whose diverse cellular composition and cellular interactions have not been well characterized. To gain new clinical and biological insights into the genetically bifurcated IDH1 mutant (mt) vs wild-type (wt) forms of glioma, we integrated multiplexed immunofluorescence single-cell data for 43 protein markers across cancer hallmarks with cell spatial metrics, genomic sequencing, and magnetic resonance imaging (MRI) quantitative features. Molecular and spatial heterogeneity scores for angiogenesis and cell invasion differ between IDHmt and wt gliomas irrespective of prior treatment and tumor grade; these differences also persisted in the MR imaging features of peritumoral edema and contrast enhancement volumes. Longer overall survival for IDH1mt glioma patients may reflect broadly altered cellular, molecular, and spatial heterogeneity that manifests in discernible radiological features.
- Published
- 2019
- Full Text
- View/download PDF
43. PD60-05 AUTOMATED DETECTION OF PROSTATE CANCER ON MULTIPARAMETRIC MRI USING DEEP NEURAL NETWORKS TRAINED ON SPATIAL COORDINATES AND PATHOLOGY OF BIOPSY CORES
- Author
-
Sarir Ahmadi, Alan Thong, Andrew Y. Ng, Richard E. Fan, Nicholas Bien, Leo C Chen, Nancy Wang, Mirabela Rusu, Pranav Rajpurkar, Robin Cheong, James D. Brooks, and Geoffrey A. Sonn
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Urology ,Multiparametric MRI ,medicine.disease ,Prostate cancer ,Spatial reference system ,Biopsy ,medicine ,Deep neural networks ,Radiology ,Clinical care ,business - Abstract
INTRODUCTION AND OBJECTIVES: The role of multiparametric MRI in clinical care is rapidly expanding due to its ability to improve prostate cancer detection. However, MRI interpretation suffers from a...
- Published
- 2019
- Full Text
- View/download PDF
44. Anisotropic Super Resolution in Prostate MRI Using Super Resolution Generative Adversarial Networks
- Author
-
Rewa Sood and Mirabela Rusu
- Subjects
FOS: Computer and information sciences ,Structural similarity ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,030218 nuclear medicine & medical imaging ,Image (mathematics) ,03 medical and health sciences ,0302 clinical medicine ,FOS: Electrical engineering, electronic engineering, information engineering ,medicine ,Computer vision ,Anisotropy ,medicine.diagnostic_test ,business.industry ,Image and Video Processing (eess.IV) ,Isotropy ,Resolution (electron density) ,Process (computing) ,Magnetic resonance imaging ,Electrical Engineering and Systems Science - Image and Video Processing ,Superresolution ,Computer Science::Computer Vision and Pattern Recognition ,Bicubic interpolation ,Artificial intelligence ,Enhanced Data Rates for GSM Evolution ,business ,030217 neurology & neurosurgery - Abstract
Acquiring High Resolution (HR) Magnetic Resonance (MR) images requires the patient to remain still for long periods of time, which causes patient discomfort and increases the probability of motion-induced image artifacts. A possible solution is to acquire low resolution (LR) images and to process them with the Super Resolution Generative Adversarial Network (SRGAN) to create a super-resolved version. This work applies SRGAN to MR images of the prostate and performs three experiments. The first experiment explores improving the in-plane MR image resolution by factors of 4 and 8, and shows that, while the PSNR and SSIM (Structural SIMilarity) metrics are lower than those of the isotropic bicubic interpolation baseline, the SRGAN is able to create images with high edge fidelity. The second experiment explores anisotropic super-resolution via synthetic images, in that the input images to the network are anisotropically downsampled versions of HR images. This experiment demonstrates the ability of the modified SRGAN to perform anisotropic super-resolution, with quantitative image metrics comparable to those of the anisotropic bicubic interpolation baseline. Finally, the third experiment applies a modified version of the SRGAN to super-resolve anisotropic images obtained from the through-plane slices of the volumetric MR data. The output super-resolved images contain a significant amount of high-frequency information that makes them visually close to their HR counterparts. Overall, the promising results from each experiment show that super-resolution for MR images is a successful technique and that producing isotropic MR image volumes from anisotropic slices is an achievable goal., Comment: International Symposium on Biomedical Imaging, 4 pages, 4 figures, 1 table
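A sketch of the synthetic anisotropic-downsampling step described above, as an assumption-level illustration rather than the paper's exact preprocessing: HR slices are blurred and decimated along one axis only, yielding LR/HR training pairs that mimic thick-slice acquisition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anisotropic_downsample(hr: np.ndarray, factor: int = 4, axis: int = 0) -> np.ndarray:
    """Blur then decimate `hr` along a single axis to simulate anisotropic resolution."""
    sigma = [0.0, 0.0]
    sigma[axis] = factor / 2.0                    # anti-aliasing blur on that axis only
    blurred = gaussian_filter(hr, sigma=sigma)
    slicer = [slice(None), slice(None)]
    slicer[axis] = slice(None, None, factor)      # keep every `factor`-th row/column
    return blurred[tuple(slicer)]

hr_slice = np.random.rand(256, 256).astype(np.float32)
lr_slice = anisotropic_downsample(hr_slice, factor=4, axis=0)
print(hr_slice.shape, lr_slice.shape)             # (256, 256) (64, 256)
```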
- Published
- 2019
- Full Text
- View/download PDF
45. Spatial integration of radiology and pathology images to characterize breast cancer aggressiveness on pre-surgical MRI
- Author
-
Robert B. West, Mirabela Rusu, and Bruce L. Daniel
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Radiography ,Magnetic resonance imaging ,Ductal carcinoma ,medicine.disease ,Spatial integration ,Breast cancer ,Biopsy ,medicine ,Mammography ,Histopathology ,Radiology ,skin and connective tissue diseases ,business - Abstract
The widespread use of screening mammography has resulted in a remarkable rise in the diagnosis of Ductal Carcinoma In Situ (DCIS). A resulting challenge is the early screening of these patients to identify those with concurrent invasive breast cancer (IBC), as one in five DCIS diagnosed at biopsy is upgraded to IBC following surgery. Both x-ray mammography and multi-parametric Magnetic Resonance Imaging (MRI) lack the ability to reliably distinguish DCIS from IBC. Our robust methodology for 3D alignment of histopathology images and MRI provides a unique opportunity to spatially map digitized histopathology slides onto pre-surgical MRI, which is particularly important for tumors in which DCIS and IBC co-occur, as well as for the study of tumor heterogeneity. In this proof-of-concept study, we developed and evaluated a methodological framework for the 3D spatial alignment of MRI and histopathology slices, using x-ray radiographs as an intermediate modality. Our methodology involves (1) the co-registration of 2D x-ray radiographs showing macrosections and corresponding 2D histology slices, (2) the 3D reconstruction of the ex vivo specimen based on the x-ray images and aligned histology slices, and (3) the registration of the 3D reconstructed ex vivo specimen with the 3D MRI. The spatially co-registered MRI and histopathology images may enable the identification of MRI features that distinguish aggressive from indolent disease on in vivo MRI.
- Published
- 2019
- Full Text
- View/download PDF
46. Framework for the co-registration of MRI and histology images in prostate cancer patients with radical prostatectomy
- Author
-
James Brooks, Pejman Ghanouni, Mirabela Rusu, Christian A. Kunder, Richard E. Fan, Robert B. West, and Geoffrey A. Sonn
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Prostatectomy ,medicine.medical_treatment ,Co registration ,Magnetic resonance imaging ,Histology ,medicine.disease ,Prostate cancer ,medicine.anatomical_structure ,Prostate ,Biopsy ,medicine ,Radiology ,Radiation treatment planning ,business - Abstract
Prostate magnetic resonance imaging (MRI) allows the detection and treatment planning of clinically significant cancers. However, indolent cancers, e.g., those with Gleason score 3+3, are not readily distinguishable on MRI. Thus an image-guided biopsy is still required before proceeding with a radical treatment for aggressive tumors or considering active surveillance for indolent disease. The excision of the prostate as part of radical prostatectomy treatments provides a unique opportunity to correlate whole-mount histology slices with MRI. Through a careful spatial alignment of histology slices and MRI, the extent of aggressive and indolent disease can be mapped on MRI, which allows one to investigate MRI-derived features that might distinguish aggressive from indolent cancers. Here, we introduce a framework for the 3D spatial integration of radiology and pathology images in the prostate. Our approach first uses groupwise registration to reconstruct the histology specimen as it was prior to sectioning, incorporating the MRI as a spatial constraint, and then performs a multi-modal 3D affine and deformable alignment between the reconstructed histology specimen and the MRI. We tested our approach on 15 studies and found a Dice similarity coefficient of 0.94±0.02 and a urethra deviation of 1.11±0.34 mm between the histology reconstruction and the MRI. Our robust framework successfully mapped the extent of disease from histology slices on MRI and created ground truth labels for characterizing aggressive and indolent disease on MRI.
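The Dice similarity coefficient reported above, sketched for clarity: the overlap between two binary masks, e.g., the prostate delineated on the reconstructed histology versus on MRI. The masks below are synthetic placeholders.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

mask_a = np.zeros((64, 64), dtype=bool); mask_a[16:48, 16:48] = True
mask_b = np.zeros((64, 64), dtype=bool); mask_b[20:52, 16:48] = True
print(round(dice(mask_a, mask_b), 3))   # overlap of two slightly shifted squares
```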
- Published
- 2019
- Full Text
- View/download PDF
47. Geodesic density regression for correcting 4DCT pulmonary respiratory motion artifacts
- Author
-
Yue Pan, Wei Shao, Mirabela Rusu, Gary E. Christensen, Oguz C. Durumeric, Joseph M. Reinhardt, and John E. Bayouth
- Subjects
Lung Neoplasms ,Geodesic ,Computer science ,Coordinate system ,Image registration ,Health Informatics ,Article ,030218 nuclear medicine & medical imaging ,Motion ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Four-Dimensional Computed Tomography ,Lung ,Artifact (error) ,Radiological and Ultrasound Technology ,business.industry ,Respiration ,Computer Graphics and Computer-Aided Design ,Intensity (physics) ,Jacobian matrix and determinant ,symbols ,Breathing ,Vector field ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Artifacts ,business ,Algorithms ,030217 neurology & neurosurgery - Abstract
Pulmonary respiratory motion artifacts are common in four-dimensional computed tomography (4DCT) of lungs and are caused by missing, duplicated, and misaligned image data. This paper presents a geodesic density regression (GDR) algorithm to correct motion artifacts in 4DCT by correcting artifacts in one breathing phase with artifact-free data from corresponding regions of other breathing phases. The GDR algorithm estimates an artifact-free lung template image and a smooth, dense, 4D (space plus time) vector field that deforms the template image to each breathing phase to produce an artifact-free 4DCT scan. Correspondences are estimated by accounting for the local tissue density change associated with air entering and leaving the lungs, and using binary artifact masks to exclude regions with artifacts from image regression. The artifact-free lung template image is generated by mapping the artifact-free regions of each phase volume to a common reference coordinate system using the estimated correspondences and then averaging. This procedure generates a fixed view of the lung with an improved signal-to-noise ratio. The GDR algorithm was evaluated and compared to a state-of-the-art geodesic intensity regression (GIR) algorithm using simulated CT time-series and 4DCT scans with clinically observed motion artifacts. The simulations show that the GDR algorithm achieves significantly more accurate Jacobian images and sharper template images, and is less sensitive to data dropout than the GIR algorithm. We also demonstrate that the GDR algorithm is more effective than the GIR algorithm for removing clinically observed motion artifacts in treatment planning 4DCT scans. Our code is freely available at https://github.com/Wei-Shao-Reg/GDR.
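A hedged 1-D toy illustration of the density action that distinguishes density regression from plain intensity regression: when tissue compresses or expands, CT density scales by the local Jacobian of the deformation, so the transported image is I(φ(x))·|Dφ(x)| rather than I(φ(x)) alone, preserving integrated mass. This is a conceptual sketch, not the GDR code.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 512)
dx = x[1] - x[0]
density = np.exp(-((x - 0.5) ** 2) / 0.02)      # toy 1-D lung-density profile

phi = x ** 2                                     # a smooth 1-D deformation of [0, 1]
jac = np.gradient(phi, x)                        # its Jacobian (a scalar field in 1-D)

intensity_action = np.interp(phi, x, density)    # plain intensity transport
density_action = intensity_action * jac          # mass-preserving density transport

# Integrated "mass" is preserved by the density action but not by intensity alone.
print(density.sum() * dx, density_action.sum() * dx, intensity_action.sum() * dx)
```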
- Published
- 2021
- Full Text
- View/download PDF
48. Radiomics Analysis on FLT-PET/MRI for Characterization of Early Treatment Response in Renal Cell Carcinoma: A Proof-of-Concept Study
- Author
-
Satish Viswanath, Norbert Avril, Anant Madabhushi, Christopher J. Hoimes, Laia Valls, Mirabela Rusu, and Jacob Antunes
- Subjects
Original article ,Cancer Research ,medicine.diagnostic_test ,business.industry ,Sunitinib ,Radiography ,Standardized uptake value ,Magnetic resonance imaging ,lcsh:Neoplasms. Tumors. Oncology. Including cancer and carcinogens ,medicine.disease ,lcsh:RC254-282 ,3. Good health ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Text mining ,Oncology ,Positron emission tomography ,Renal cell carcinoma ,030220 oncology & carcinogenesis ,medicine ,Effective diffusion coefficient ,business ,Nuclear medicine ,medicine.drug - Abstract
Studying early response to cancer treatment is significant for patient treatment stratification and follow-up. Although recent advances in positron emission tomography (PET) and magnetic resonance imaging (MRI) allow for evaluation of tumor response, a quantitative objective assessment of treatment-related effects offers localization and quantification of structural and functional changes in the tumor region. Radiomics, the process of computerized extraction of features from radiographic images, is a new strategy for capturing subtle changes in the tumor region that works by quantifying subvisual patterns which might escape human identification. The goal of this study was to demonstrate feasibility for performing radiomics analysis on integrated PET/MRI to characterize early treatment response in metastatic renal cell carcinoma (RCC) undergoing sunitinib therapy. Two patients with advanced RCC were imaged using an integrated PET/MRI scanner. [18F]fluorothymidine (FLT) was used as the PET radiotracer, which can measure the degree of cell proliferation. Image acquisitions included test/retest scans before sunitinib treatment and one scan 3 weeks into treatment using [18F]FLT-PET, T2-weighted (T2w), and diffusion-weighted imaging (DWI) protocols, where DWI yielded an apparent diffusion coefficient (ADC) map. Our framework to quantitatively characterize treatment-related changes involved the following analytic steps: 1) intraacquisition and interacquisition registration of protocols to allow voxel-wise comparison of changes in radiomic features, 2) correction and pseudoquantification of T2w images to remove acquisition artifacts and examine tissue-specific response, 3) characterization of information captured by T2w MRI, FLT-PET, and ADC via radiomics, and 4) combining multiparametric information to create a map of integrated changes from PET/MRI radiomic features. Standardized uptake value (from FLT-PET) and ADC textures ranked highest for reproducibility in a test/retest evaluation as well as for capturing treatment response, in comparison to high variability seen in T2w MRI. The highest-ranked radiomic feature yielded a normalized percentage change of 63% within the RCC region and 17% in a spatially distinct normal region relative to its pretreatment value. By comparison, both the original and postprocessed T2w signal intensity appeared to be markedly less sensitive and specific to changes within the tumor. Our preliminary results thus suggest that radiomics analysis could be a powerful tool for characterizing treatment response in integrated PET/MRI.
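A sketch of the treatment-response readout quoted above: the normalized percentage change of a radiomic feature between pre- and post-treatment acquisitions, computed per region after registration. The feature maps and masks below are synthetic stand-ins.

```python
import numpy as np

def normalized_pct_change(pre: np.ndarray, post: np.ndarray, mask: np.ndarray) -> float:
    """Mean feature change inside `mask`, relative to its pretreatment mean."""
    pre_mean = pre[mask].mean()
    return 100.0 * (post[mask].mean() - pre_mean) / pre_mean

pre = np.random.rand(64, 64) + 1.0                  # pre-treatment feature map
post = pre.copy()
tumor = np.zeros_like(pre, dtype=bool); tumor[20:40, 20:40] = True
post[tumor] *= 1.6                                   # simulate a treatment effect
print(round(normalized_pct_change(pre, post, tumor), 1))   # ~60% in the tumor ROI
```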
- Published
- 2016
- Full Text
- View/download PDF
49. ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate
- Author
-
Mirabela Rusu, Nikhil Madhuripan, Geoffrey A. Sonn, Wei Shao, Anugayathri Jawahar, James D. Brooks, Nikola C. Teslovich, Simon John Christoph Soerensen, Christian A. Kunder, Pejman Ghanouni, Linda Banh, Jeffrey B. Wang, and Richard E. Fan
- Subjects
Male ,medicine.medical_specialty ,Computer science ,medicine.medical_treatment ,Histopathology ,Image registration ,Health Informatics ,Article ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Prostate cancer ,Deep Learning ,0302 clinical medicine ,FOS: Electrical engineering, electronic engineering, information engineering ,medicine ,False positive paradox ,Humans ,Preprocessor ,Radiology, Nuclear Medicine and imaging ,Radiological and Ultrasound Technology ,medicine.diagnostic_test ,Artificial neural network ,Prostatectomy ,business.industry ,Deep learning ,Image and Video Processing (eess.IV) ,Prostatic Neoplasms ,Magnetic resonance imaging ,Electrical Engineering and Systems Science - Image and Video Processing ,prostate cancer ,medicine.disease ,Magnetic Resonance Imaging ,Computer Graphics and Computer-Aided Design ,3. Good health ,radiology-pathology fusion ,Computer Vision and Pattern Recognition ,Radiology ,Artificial intelligence ,business ,Algorithms ,030217 neurology & neurosurgery ,MRI - Abstract
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using the estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline achieves more accurate registration and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet., Comment: Accepted to Medical Image Analysis (MedIA)
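A minimal sketch of the final pipeline step described above: once the affine and deformable transforms are estimated, the histopathology cancer label is resampled into MRI space. SimpleITK usage is shown under the assumption that both transforms are available as SimpleITK objects; the function name and signature are placeholders, not the repository's API.

```python
import SimpleITK as sitk

def map_labels_to_mri(label_img: sitk.Image, mri_img: sitk.Image,
                      affine: sitk.Transform, deformable: sitk.Transform) -> sitk.Image:
    """Resample a histopathology cancer mask onto the MRI grid."""
    composite = sitk.CompositeTransform([affine, deformable])  # compose the estimated transforms
    return sitk.Resample(label_img, mri_img, composite,
                         sitk.sitkNearestNeighbor, 0, label_img.GetPixelID())
# Nearest-neighbour interpolation keeps the cancer mask binary after warping.
```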
- Published
- 2021
- Full Text
- View/download PDF
50. An Application of Generative Adversarial Networks for Super Resolution Medical Imaging
- Author
-
Mirabela Rusu, Rohit Sood, Karthik Choutagunta, Binit Topiwala, and Rewa Sood
- Subjects
FOS: Computer and information sciences ,Structural similarity ,Computer science ,Mean opinion score ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Context (language use) ,02 engineering and technology ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,medicine ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer vision ,Image resolution ,medicine.diagnostic_test ,business.industry ,Resolution (electron density) ,Image and Video Processing (eess.IV) ,Magnetic resonance imaging ,Sparse approximation ,Electrical Engineering and Systems Science - Image and Video Processing ,Superresolution ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
Acquiring High Resolution (HR) Magnetic Resonance (MR) images requires the patient to remain still for long periods of time, which causes patient discomfort and increases the probability of motion-induced image artifacts. A possible solution is to acquire low resolution (LR) images and to process them with the Super Resolution Generative Adversarial Network (SRGAN) to create an HR version. Acquiring LR images requires a lower scan time than acquiring HR images, which allows for higher patient comfort and scanner throughput. This work applies SRGAN to MR images of the prostate to improve the in-plane resolution by factors of 4 and 8. The term 'super resolution' in the context of this paper defines the post-processing enhancement of medical images, as opposed to 'high resolution', which defines the native image resolution acquired during the MR acquisition phase. We also compare the SRGAN to three other models: SRCNN, SRResNet, and Sparse Representation. While the SRGAN results do not have the best Peak Signal to Noise Ratio (PSNR) or Structural Similarity (SSIM) metrics, they are visually the most similar to the original HR images, as portrayed by the Mean Opinion Score (MOS) results., Comment: International Conference on Machine Learning Applications, 6 pages, 5 figures, 2 tables
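For reference, a sketch of the two fidelity metrics named above, PSNR and SSIM, as commonly computed with scikit-image; both can rank a sharper GAN output below a blurrier baseline even when human observers prefer the GAN image, which is why the MOS is reported alongside them. The images here are synthetic stand-ins.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.random.rand(128, 128).astype(np.float32)        # stand-in for an HR slice
sr = np.clip(hr + 0.05 * np.random.randn(128, 128).astype(np.float32), 0, 1)

print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=1.0))
print("SSIM:", structural_similarity(hr, sr, data_range=1.0))
```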
- Published
- 2019
- Full Text
- View/download PDF