135 results for "Ravikumar, Nishant"
Search Results
2. Unsupervised ensemble-based phenotyping enhances discoverability of genes related to left-ventricular morphology
- Author
-
Bonazzola, Rodrigo, Ferrante, Enzo, Ravikumar, Nishant, Xia, Yan, Keavney, Bernard, Plein, Sven, Syeda-Mahmood, Tanveer, and Frangi, Alejandro F.
- Published
- 2024
- Full Text
- View/download PDF
3. Radiomics in the evaluation of ovarian masses — a systematic review
- Author
-
Adusumilli, Pratik, Ravikumar, Nishant, Hall, Geoff, Swift, Sarah, Orsi, Nicolas, and Scarsbrook, Andrew
- Published
- 2023
- Full Text
- View/download PDF
4. Artificial intelligence in ovarian cancer histopathology: a systematic review
- Author
-
Breen, Jack, Allen, Katie, Zucker, Kieran, Adusumilli, Pratik, Scarsbrook, Andrew, Hall, Geoff, Orsi, Nicolas M., and Ravikumar, Nishant
- Published
- 2023
- Full Text
- View/download PDF
5. Compressed sensing using a deep adaptive perceptual generative adversarial network for MRI reconstruction from undersampled K-space data
- Author
-
Wu, Kun, Xia, Yan, Ravikumar, Nishant, and Frangi, Alejandro F.
- Published
- 2024
- Full Text
- View/download PDF
6. Contribution of Shape Features to Intradiscal Pressure and Facets Contact Pressure in L4/L5 FSUs: An In-Silico Study
- Author
-
Kassab-Bachi, Amin, Ravikumar, Nishant, Wilcox, Ruth K., Frangi, Alejandro F., and Taylor, Zeike A.
- Published
- 2023
- Full Text
- View/download PDF
7. RecON: Online learning for sensorless freehand 3D ultrasound reconstruction
- Author
-
Luo, Mingyuan, Yang, Xin, Wang, Hongzhang, Dou, Haoran, Hu, Xindi, Huang, Yuhao, Ravikumar, Nishant, Xu, Songcheng, Zhang, Yuanji, Xiong, Yi, Xue, Wufeng, Frangi, Alejandro F., Ni, Dong, and Sun, Litao
- Published
- 2023
- Full Text
- View/download PDF
8. Virtual high-resolution MR angiography from non-angiographic multi-contrast MRIs: synthetic vascular model populations for in-silico trials
- Author
-
Xia, Yan, Ravikumar, Nishant, Lassila, Toni, and Frangi, Alejandro F.
- Published
- 2023
- Full Text
- View/download PDF
9. High-throughput 3DRA segmentation of brain vasculature and aneurysms using deep learning
- Author
-
Lin, Fengming, Xia, Yan, Song, Shuang, Ravikumar, Nishant, and Frangi, Alejandro F.
- Published
- 2023
- Full Text
- View/download PDF
10. Mitosis domain generalization in histopathology images — The MIDOG challenge
- Author
-
Aubreville, Marc, Stathonikos, Nikolas, Bertram, Christof A., Klopfleisch, Robert, ter Hoeve, Natalie, Ciompi, Francesco, Wilm, Frauke, Marzahl, Christian, Donovan, Taryn A., Maier, Andreas, Breen, Jack, Ravikumar, Nishant, Chung, Youjin, Park, Jinah, Nateghi, Ramin, Pourakpour, Fattaneh, Fick, Rutger H.J., Ben Hadj, Saima, Jahanifar, Mostafa, Shephard, Adam, Dexl, Jakob, Wittenberg, Thomas, Kondo, Satoshi, Lafarge, Maxime W., Koelzer, Viktor H., Liang, Jingtang, Wang, Yubo, Long, Xi, Liu, Jingxin, Razavi, Salar, Khademi, April, Yang, Sen, Wang, Xiyue, Erber, Ramona, Klang, Andrea, Lipnik, Karoline, Bolfa, Pompei, Dark, Michael J., Wasinger, Gabriel, Veta, Mitko, and Breininger, Katharina
- Published
- 2023
- Full Text
- View/download PDF
11. Predicting myocardial infarction through retinal scans and minimal personal information
- Author
-
Diaz-Pinto, Andres, Ravikumar, Nishant, Attar, Rahman, Suinesiaputra, Avan, Zhao, Yitian, Levelt, Eylem, Dall’Armellina, Erica, Lorenzi, Marco, Chen, Qingyu, Keenan, Tiarnan D. L., Agrón, Elvira, Chew, Emily Y., Lu, Zhiyong, Gale, Chris P., Gale, Richard P., Plein, Sven, and Frangi, Alejandro F.
- Published
- 2022
- Full Text
- View/download PDF
12. Multi-centre benchmarking of deep learning models for COVID-19 detection in chest x-rays.
- Author
-
Harkness, Rachael, Frangi, Alejandro F., Zucker, Kieran, and Ravikumar, Nishant
- Published
- 2024
- Full Text
- View/download PDF
13. Correction: Contribution of Shape Features to Intradiscal Pressure and Facets Contact Pressure in L4/L5 FSUs: An In-Silico Study
- Author
-
Kassab-Bachi, Amin, Ravikumar, Nishant, Wilcox, Ruth K., Frangi, Alejandro F., and Taylor, Zeike A.
- Published
- 2023
- Full Text
- View/download PDF
14. A probabilistic framework for statistical shape models and atlas construction: application to neuroimaging
- Author
-
Ravikumar, Nishant, Taylor, Zeike A., and Frangi, Alejandro F.
- Subjects
616.8
- Abstract
Accurate and reliable registration of shapes and multi-dimensional point sets describing the morphology/physiology of anatomical structures is a pre-requisite for constructing statistical shape models (SSMs) and atlases. Such statistical descriptions of variability across populations (regarding shape or other morphological/physiological quantities) are based on homologous correspondences across the multiple samples that comprise the training data. The notion of exact correspondence can be ambiguous when these data contain noise and outliers, missing data, or significant and abnormal variations due to pathology. However, such phenomena are common in medical image-derived data, due, for example, to inconsistencies in image quality and acquisition protocols, presence of motion artefacts, differences in pre-processing steps, and inherent variability across patient populations and demographics. This thesis therefore focuses on formulating a unified probabilistic framework for the registration of shapes and so-called 'generalised point sets', which is robust to the anomalies and variations described. Statistical analysis of shapes across large cohorts demands automatic generation of training sets (image segmentations delineating the structure of interest), as manual and semi-supervised approaches can be prohibitively time-consuming. However, automated segmentation and landmarking of images often result in shapes with high levels of outliers and missing data. Consequently, a robust method for registration and correspondence estimation is required. A probabilistic group-wise registration framework for point-based representations of shapes, based on Student’s t-mixture model (TMM), and a multi-resolution extension to the same (mrTMM), are formulated to this end. The frameworks exploit the inherent robustness of Student’s t-distributions to outliers, which is lacking in existing Gaussian mixture model (GMM)-based approaches.
The registration accuracy of the proposed approaches was quantitatively evaluated and shown to outperform the state-of-the-art, using synthetic and clinical data. A corresponding improvement in the quality of SSMs generated subsequently was also shown, particularly for data sets containing high levels of noise. In general, the proposed approach requires fewer user-specified parameters than existing methods, whilst affording much improved robustness to outliers. Registration of generalised point sets, which combine disparate features such as spatial positions, directional/axial data, and scalar-valued quantities, was studied next. A hybrid mixture model (HMM), combining different types of probability distributions, was formulated to facilitate the joint registration and clustering of multi-dimensional point sets of this nature. Two variants of the HMM were developed for modelling: (1) axial data; and (2) directional data. The former, based on a combination of Student’s t, Watson and Gaussian distributions, was used to register hybrid point sets comprising magnetic resonance diffusion tensor image (DTI)-derived quantities, such as voxel spatial positions (defining a region/structure of interest), associated fibre orientations, and scalar measures reflecting tissue anisotropy. The latter, meanwhile, formulated using a combination of Student’s t and von Mises-Fisher distributions, was used for the registration of shapes represented as hybrid point sets comprising spatial positions and associated surface normal vectors. The Watson-variant of the HMM facilitates statistical analysis and group-wise comparisons of DTI data across patient populations, presented as an exemplar application of the proposed approach. The Fisher-variant of the HMM, on the other hand, was used to register hybrid representations of shapes, providing substantial improvements over point-based registration approaches in terms of anatomical validity in the estimated correspondences.
- Published
- 2017
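The robustness argument in the thesis abstract above, that a Student's t-mixture down-weights outliers where a Gaussian mixture cannot, comes from the latent scale variable in the EM algorithm. A minimal pure-Python sketch (function name, dimensionality, and toy values are illustrative, not code from the thesis):

```python
def student_t_scale(d2, nu, dim=3):
    # EM latent scale for a Student's t component: u = (nu + dim) / (nu + d2),
    # where d2 is the squared Mahalanobis distance of a point from the
    # component mean. The M-step mean update weights each point by u, so
    # distant outliers contribute almost nothing; a Gaussian component
    # effectively fixes u = 1 for every point.
    return (nu + dim) / (nu + d2)

inlier_w = student_t_scale(d2=1.0, nu=3.0)     # close point: weight ~1.5
outlier_w = student_t_scale(d2=100.0, nu=3.0)  # far outlier: weight ~0.06
```

Smaller degrees-of-freedom `nu` give heavier tails and more aggressive down-weighting, which is why the t-mixture tolerates the outlier levels described in the abstract.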
15. Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images
- Author
-
Zhong, Xia, Amrehn, Mario, Ravikumar, Nishant, Chen, Shuqing, Strobel, Norbert, Birkhold, Annette, Kowarschik, Markus, Fahrig, Rebecca, and Maier, Andreas
- Published
- 2021
- Full Text
- View/download PDF
16. Registration of vascular structures using a hybrid mixture model
- Author
-
Bayer, Siming, Zhai, Zhiwei, Strumia, Maddalena, Tong, Xiaoguang, Gao, Ying, Staring, Marius, Stoel, Berend, Fahrig, Rebecca, Nabavi, Arya, Maier, Andreas, and Ravikumar, Nishant
- Published
- 2019
- Full Text
- View/download PDF
17. Generalised coherent point drift for group-wise multi-dimensional analysis of diffusion brain MRI data
- Author
-
Ravikumar, Nishant, Gooya, Ali, Beltrachini, Leandro, Frangi, Alejandro F., and Taylor, Zeike A.
- Published
- 2019
- Full Text
- View/download PDF
18. Beyond images: an integrative multi-modal approach to chest x-ray report generation.
- Author
-
Aksoy, Nurbanu, Sharoff, Serge, Baser, Selcuk, Ravikumar, Nishant, and Frangi, Alejandro F.
- Published
- 2024
- Full Text
- View/download PDF
19. Concurrent Left Ventricular Myocardial Diffuse Fibrosis and Left Atrial Dysfunction Strongly Predict Incident Heart Failure
- Author
-
Wong, Mark Y.Z., Vargas, Jose D., Naderi, Hafiz, Sanghvi, Mihir M., Raisi-Estabragh, Zahra, Suinesiaputra, Avan, Bonazzola, Rodrigo, Attar, Rahman, Ravikumar, Nishant, Hann, Evan, Neubauer, Stefan, Piechnik, Stefan K., Frangi, Alejandro F., Petersen, Steffen E., and Aung, Nay
- Published
- 2024
- Full Text
- View/download PDF
20. Group-wise similarity registration of point sets using Student’s t-mixture model for statistical shape models
- Author
-
Ravikumar, Nishant, Gooya, Ali, Çimen, Serkan, Frangi, Alejandro F., and Taylor, Zeike A.
- Published
- 2018
- Full Text
- View/download PDF
21. Chapter 16 - Deep learning fundamentals
- Author
-
Ravikumar, Nishant, Zakeri, Arezoo, Xia, Yan, and Frangi, Alejandro F.
- Published
- 2024
- Full Text
- View/download PDF
22. Chapter 17 - Deep learning for vision and representation learning
- Author
-
Zakeri, Arezoo, Xia, Yan, Ravikumar, Nishant, and Frangi, Alejandro F.
- Published
- 2024
- Full Text
- View/download PDF
23. Mapping Ensembles of Trees to Sparse, Interpretable Multilayer Perceptron Networks
- Author
-
Rodríguez-Salas, Dalia, Mürschberger, Nina, Ravikumar, Nishant, Seuret, Mathias, and Maier, Andreas
- Published
- 2020
- Full Text
- View/download PDF
24. Hemodynamics of thrombus formation in intracranial aneurysms: An in silico observational study.
- Author
-
Liu, Qiongyao, Sarrami-Foroushani, Ali, Wang, Yongxing, MacRaild, Michael, Kelly, Christopher, Lin, Fengming, Xia, Yan, Song, Shuang, Ravikumar, Nishant, Patankar, Tufail, Taylor, Zeike A., Lassila, Toni, and Frangi, Alejandro F.
- Subjects
INTRACRANIAL aneurysms, THROMBOSIS, HYPERTENSION, HEMODYNAMICS, MULTISCALE modeling
- Abstract
How prevalent is spontaneous thrombosis in a population containing all sizes of intracranial aneurysms? How can we calibrate computational models of thrombosis based on published data? How does spontaneous thrombosis differ in normo- and hypertensive subjects? We address the first question through a thorough analysis of published datasets that provide spontaneous thrombosis rates across different aneurysm characteristics. This analysis provides data for a subgroup of the general population of aneurysms, namely, those of large and giant size (>10 mm). Based on these observed spontaneous thrombosis rates, our computational modeling platform enables the first in silico observational study of spontaneous thrombosis prevalence across a broader set of aneurysm phenotypes. We generate 109 virtual patients and use a novel approach to calibrate two trigger thresholds: residence time and shear rate, thus addressing the second question. We then address the third question by utilizing this calibrated model to provide new insight into the effects of hypertension on spontaneous thrombosis. We demonstrate how a mechanistic thrombosis model calibrated on an intracranial aneurysm cohort can help estimate spontaneous thrombosis prevalence in a broader aneurysm population. This study is enabled through a fully automatic multi-scale modeling pipeline. We use the clinical spontaneous thrombosis data as an indirect population-level validation of a complex computational modeling framework. Furthermore, our framework allows exploration of the influence of hypertension in spontaneous thrombosis. This lays the foundation for in silico clinical trials of cerebrovascular devices in high-risk populations, e.g., assessing the performance of flow diverters in aneurysms for hypertensive patients.
- Published
- 2023
- Full Text
- View/download PDF
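The two-threshold calibration described in entry 24 can be sketched as a grid search that matches predicted thrombosis prevalence to an observed rate. Every number below (patient indices, threshold grids, target rate) is invented for illustration; the study calibrates against published spontaneous thrombosis data:

```python
# Each virtual patient is summarised by two hemodynamic indices:
# (residence time, shear rate). Values are made up for this sketch.
patients = [(12.0, 5.0), (3.0, 40.0), (25.0, 2.0), (8.0, 15.0), (30.0, 1.0)]
target_prevalence = 0.4  # illustrative observed spontaneous thrombosis rate

def prevalence(rt_thresh, sr_thresh):
    # Flag thrombosis when residence time is high AND shear rate is low,
    # mirroring the two trigger thresholds named in the abstract.
    hits = sum(1 for rt, sr in patients if rt > rt_thresh and sr < sr_thresh)
    return hits / len(patients)

# Pick the threshold pair whose predicted prevalence best matches the
# observed rate.
best = min(
    ((rt, sr) for rt in range(0, 35, 5) for sr in range(0, 45, 5)),
    key=lambda t: abs(prevalence(*t) - target_prevalence),
)
```

The real pipeline works on simulated hemodynamic fields rather than two scalars per patient, but the calibration logic, choosing thresholds so model output reproduces population-level rates, has this shape.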
25. An Automated Method for Artificial Intelligence Assisted Diagnosis of Active Aortitis Using Radiomic Analysis of FDG PET-CT Images
- Author
-
Duff, Lisa M, Scarsbrook, Andrew F, Ravikumar, Nishant, Frood, Russell, Van Praagh, Gijs D, Mackie, Sarah L, Bailey, Marc A, Tarkin, Jason M, Mason, Justin C, Van Der Geest, Kornelis SM, Slart, Riemer HJA, Morgan, Ann W, and Tsoumpas, Charalampos
- Subjects
machine learning, ROC Curve, Fluorodeoxyglucose F18, radiomics, Positron Emission Tomography Computed Tomography, Humans, convolutional neural network, positron emission tomography/computed tomography, Radiopharmaceuticals, Molecular Biology, Biochemistry, aortitis
- Abstract
The aim of this study was to develop and validate an automated pipeline that could assist the diagnosis of active aortitis using radiomic imaging biomarkers derived from [18F]-Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography (FDG PET-CT) images. The aorta was automatically segmented by a convolutional neural network (CNN) on FDG PET-CT of aortitis and control patients. The FDG PET-CT dataset was split into training (43 aortitis:21 control), test (12 aortitis:5 control) and validation (24 aortitis:14 control) cohorts. Radiomic features (RFs), including SUV metrics, were extracted from the segmented data and harmonized. Three radiomic fingerprints were constructed: (A) RFs with high diagnostic utility after removing highly correlated RFs; (B) principal component analysis (PCA); (C) Random Forest intrinsic feature selection. Diagnostic utility was evaluated with accuracy and area under the receiver operating characteristic curve (AUC). Several RFs and fingerprints had high AUC values (AUC > 0.8), confirmed by balanced accuracy, across the training, test and external validation datasets. The good diagnostic performance achieved across several multi-centre datasets suggests that a radiomic pipeline can be generalizable. These findings could be used to build an automated clinical decision tool to facilitate objective and standardized assessment regardless of observer experience.
- Published
- 2023
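Fingerprint A in entry 25, dropping radiomic features that are highly correlated with features already kept, can be sketched in pure Python. Feature names and values are made up for this sketch, and a real pipeline would compute correlations over patient cohorts:

```python
import math

def pearson(x, y):
    # Plain Pearson correlation between two equal-length feature vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    # Greedy fingerprint-"A"-style filter: keep a feature only if it is
    # not highly correlated with any feature already kept.
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

toy = {
    "suv_max": [1.0, 2.0, 3.0, 4.0],
    "suv_mean": [1.1, 2.1, 3.0, 4.2],  # nearly a copy of suv_max: dropped
    "entropy": [4.0, 1.0, 3.5, 0.5],
}
```

In the study, diagnostic-utility ranking also feeds the selection; the sketch shows only the decorrelation step.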
26. Unsupervised ensemble-based phenotyping helps enhance the discoverability of genes related to heart morphology
- Author
-
Bonazzola, Rodrigo, Ferrante, Enzo, Ravikumar, Nishant, Xia, Yan, Keavney, Bernard, Plein, Sven, Syeda-Mahmood, Tanveer, and Frangi, Alejandro F
- Subjects
Genomics (q-bio.GN), FOS: Computer and information sciences, Computer Science - Machine Learning, FOS: Biological sciences, Quantitative Biology - Genomics, Machine Learning (cs.LG)
- Abstract
Recent genome-wide association studies (GWAS) have been successful in identifying associations between genetic variants and simple cardiac parameters derived from cardiac magnetic resonance (CMR) images. However, the emergence of big databases that link genetic data to CMR facilitates investigation of more nuanced patterns of shape variability. Here, we propose a new framework for gene discovery entitled Unsupervised Phenotype Ensembles (UPE). UPE builds a redundant yet highly expressive representation by pooling a set of phenotypes learned in an unsupervised manner, using deep learning models trained with different hyperparameters. These phenotypes are then analyzed via GWAS, retaining only highly confident and stable associations across the ensemble. We apply our approach to the UK Biobank database to extract left-ventricular (LV) geometric features from image-derived three-dimensional meshes. We demonstrate that our approach greatly improves the discoverability of genes influencing LV shape, identifying 11 loci with study-wide significance and 8 with suggestive significance. We argue that our approach would enable more extensive discovery of gene associations with image-derived phenotypes for other organs or image modalities.
- Published
- 2023
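The ensemble-stability filter in entry 26, retaining only GWAS associations that recur across ensemble members, can be sketched as follows. Locus names and the support threshold are illustrative, not the paper's exact criterion:

```python
from collections import Counter

def stable_loci(ensemble_hits, min_support=0.8):
    # ensemble_hits: one set of significant loci per ensemble member's
    # GWAS. Keep a locus only if at least min_support of the members
    # report it, discarding associations that are not stable.
    counts = Counter(locus for hits in ensemble_hits for locus in hits)
    needed = min_support * len(ensemble_hits)
    return {locus for locus, c in counts.items() if c >= needed}

# Five hypothetical ensemble members (deep models with different
# hyperparameters) and the loci each one flags as significant.
runs = [
    {"TTN", "PLN", "GOSR2"},
    {"TTN", "PLN"},
    {"TTN", "PLN", "CHRM2"},
    {"TTN", "GOSR2"},
    {"TTN", "PLN"},
]
```

Pooling many unsupervised phenotypes inflates the number of tests, so this cross-member agreement step is what keeps the retained associations "highly confident and stable".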
27. A Generative Shape Compositional Framework: Towards Representative Populations of Virtual Heart Chimaeras
- Author
-
Dou, Haoran, Virtanen, Seppo, Ravikumar, Nishant, and Frangi, Alejandro F.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
Generating virtual populations of anatomy that capture sufficient variability while remaining plausible is essential for conducting in-silico trials of medical devices. However, not all anatomical shapes of interest are always available for each individual in a population; missing or partially-overlapping anatomical information is common across individuals. We introduce a generative shape model for complex anatomical structures that is learnable from unpaired datasets. The proposed generative model can synthesise complete complex shape assemblies, coined virtual chimaeras (as opposed to natural human chimaeras). We applied this framework to build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures. Specifically, we propose a generative shape compositional framework which comprises two components: a part-aware generative shape model which captures the variability in shape observed for each structure of interest in the training population; and a spatial composition network which assembles/composes the structures synthesised by the former into multi-part shape assemblies (viz. virtual chimaeras). We also propose a novel self-supervised learning scheme that enables the spatial composition network to be trained with partially overlapping data and weak labels. We trained and validated our approach using shapes of cardiac structures derived from cardiac magnetic resonance images available in the UK Biobank. Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity. This demonstrates the superiority of the proposed approach, as the synthesised cardiac virtual populations are more plausible and capture a greater degree of variability in shape than those generated by the PCA-based shape model.
- Published
- 2022
28. Agent with Tangent-based Formulation and Anatomical Perception for Standard Plane Localization in 3D Ultrasound
- Author
-
Zou, Yuxin, Dou, Haoran, Huang, Yuhao, Yang, Xin, Qian, Jikuan, Zhen, Chaojiong, Ji, Xiaodan, Ravikumar, Nishant, Chen, Guoqiang, Huang, Weijun, Frangi, Alejandro F., and Ni, Dong
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
Standard plane (SP) localization is essential in routine clinical ultrasound (US) diagnosis. Compared to 2D US, 3D US can acquire multiple view planes in one scan and provide complete anatomy with the addition of the coronal plane. However, manually navigating SPs in 3D US is laborious and biased due to orientation variability and the huge search space. In this study, we introduce a novel reinforcement learning (RL) framework for automatic SP localization in 3D US. Our contribution is three-fold. First, we formulate SP localization in 3D US as a tangent-point-based problem in RL to restructure the action space and significantly reduce the search space. Second, we design an auxiliary task learning strategy to enhance the model's ability to recognize subtle differences between non-SPs and SPs during plane search. Finally, we propose a spatial-anatomical reward to effectively guide learning trajectories by exploiting spatial and anatomical information simultaneously. We explore the efficacy of our approach on localizing four SPs on uterus and fetal brain datasets. The experiments indicate that our approach achieves high localization accuracy as well as robust performance. Accepted by MICCAI 2022.
- Published
- 2022
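The tangent-point formulation in entry 28 reduces plane search to moving a single 3D point: a plane can be identified with the point where it touches a sphere centred at the volume origin. A minimal sketch under that reading (not the paper's code; variable names are illustrative):

```python
import math

def plane_from_tangent_point(p):
    # The plane tangent to an origin-centred sphere at point p has unit
    # normal p/|p| and offset |p| from the origin. An RL agent therefore
    # only needs actions that move one 3D point, which shrinks the
    # action/search space compared with free rotation plus translation.
    norm = math.sqrt(sum(c * c for c in p))
    normal = tuple(c / norm for c in p)
    return normal, norm

# A tangent point 40 voxels up the z-axis encodes the axial plane z = 40.
normal, offset = plane_from_tangent_point((0.0, 0.0, 40.0))
```

Any plane not through the origin corresponds to exactly one tangent point, so the parameterisation is unambiguous.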
29. Localizing the Recurrent Laryngeal Nerve via Ultrasound with a Bayesian Shape Framework
- Author
-
Dou, Haoran, Han, Luyi, He, Yushuang, Xu, Jun, Ravikumar, Nishant, Mann, Ritse, Frangi, Alejandro F., Yap, Pew-Thian, and Huang, Yunzhi
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Tumor infiltration of the recurrent laryngeal nerve (RLN) is a contraindication for robotic thyroidectomy and can be difficult to detect via standard laryngoscopy. Ultrasound (US) is a viable alternative for RLN detection due to its safety and ability to provide real-time feedback. However, the tininess of the RLN, with a diameter typically less than 3 mm, poses significant challenges to accurate localization of the RLN. In this work, we propose a knowledge-driven framework for RLN localization, mimicking the standard approach surgeons take to identify the RLN according to its surrounding organs. We construct a prior anatomical model based on the inherent relative spatial relationships between organs. Through Bayesian shape alignment (BSA), we obtain the candidate coordinates of the center of a region of interest (ROI) that encloses the RLN. The ROI allows a decreased field of view for determining the refined centroid of the RLN using a dual-path identification network, based on multi-scale semantic information. Experimental results indicate that the proposed method achieves superior hit rates and substantially smaller distance errors compared with state-of-the-art methods. Early accepted by MICCAI 2022.
- Published
- 2022
30. Chapter 17 - Image imputation in cardiac MRI and quality assessment
- Author
-
Xia, Yan, Ravikumar, Nishant, and Frangi, Alejandro F.
- Published
- 2022
- Full Text
- View/download PDF
31. Contributors
- Author
-
Aja-Fernández, Santiago, Alberola-López, Carlos, Altmann, Andre, Bano, Sophia, Christlein, Vincent, Cong, Shan, Conjeti, Sailesh, Cootes, Tim, Curiale, Ariel H., de Bruijne, Marleen, Demirci, Stefanie, de Vos, Bob D., Duncan, James S., Frangi, Alejandro F., Graham, Simon, Heinrich, Mattias, Išgum, Ivana, Lorenzi, Marco, Lu, Cheng, Madabhushi, Anant, Maier, Andreas, Martin, Melissa, Moreau, Thomas, Mullan, Sean, Oguz, Ipek, Paragios, Nikos, Petersen, Jens, Prince, Jerry L., Rajpoot, Nasir, Ramos-Llordén, Gabriel, Ravikumar, Nishant, Schnabel, Julia, Shen, Li, Shinohara, Russell T., Sokooti, Hessam, Sonka, Milan, Sotiras, Aristeidis, Sporring, Jon, Staib, Lawrence H., Staring, Marius, Stoyanov, Danail, Unberath, Mathias, Vegas Sánchez-Ferrero, Gonzalo, Wassermann, Demian, Xia, Yan, Yushkevich, Paul A., Zakeri, Arezoo, Zhang, Honghai, Zhang, Lichun, and Zhang, Miaomiao
- Published
- 2024
- Full Text
- View/download PDF
32. Flip Learning: Erase to Segment
- Author
-
Huang, Yuhao, Yang, Xin, Zou, Yuxin, Chen, Chaoyu, Wang, Jian, Dou, Haoran, Ravikumar, Nishant, Frangi, Alejandro F, Zhou, Jianqiao, and Ni, Dong
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Computer Science - Multiagent Systems, Machine Learning (cs.LG), Multiagent Systems (cs.MA)
- Abstract
Nodule segmentation from breast ultrasound images is challenging yet essential for diagnosis. Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation. Unlike existing weakly-supervised approaches, in this study we propose a novel and general WSS framework called Flip Learning, which needs only a box annotation. Specifically, the target in the labelled box is erased gradually to flip the classification tag, and the erased region is finally taken as the segmentation result. Our contribution is three-fold. First, our proposed approach erases at the superpixel level using a multi-agent reinforcement learning framework to exploit prior boundary knowledge and accelerate the learning process. Second, we design two rewards, a classification score and an intensity distribution reward, to avoid under- and over-segmentation, respectively. Third, we adopt a coarse-to-fine learning strategy to reduce residual errors and improve segmentation performance. Extensively validated on a large dataset, our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning. Accepted by MICCAI 2021.
- Published
- 2021
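The erase-to-flip idea in entry 32 can be caricatured with a greedy loop standing in for the multi-agent RL policy: keep erasing superpixels until the classifier's tag flips, then report the erased region as the segmentation. A toy sketch with an invented saliency-sum classifier (not the paper's method):

```python
from collections import namedtuple

SP = namedtuple("SP", "id saliency")

def flip_learning_segment(superpixels, classify):
    # Greedy stand-in for the RL erasing agent: remove the most
    # nodule-like superpixel until classify() no longer reports a
    # nodule; the erased superpixels form the segmentation.
    remaining = set(superpixels)
    erased = set()
    while remaining and classify(remaining):
        sp = max(remaining, key=lambda s: s.saliency)
        remaining.discard(sp)
        erased.add(sp)
    return erased

# Four superpixels inside the annotation box, with made-up saliencies.
sps = [SP(0, 0.9), SP(1, 0.8), SP(2, 0.1), SP(3, 0.05)]

def is_nodule(remaining):
    # Toy classifier: the image still "contains a nodule" while enough
    # salient mass remains un-erased.
    return sum(s.saliency for s in remaining) > 0.5
```

The real framework learns the erase policy with rewards (classification score, intensity distribution) instead of this fixed greedy rule.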
33. Fed-Sim: Federated Simulation for Medical Imaging
- Author
-
Li, Daiqing, Kar, Amlan, Ravikumar, Nishant, Frangi, Alejandro F, and Fidler, Sanja
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
Labelling data is expensive and time-consuming, especially for domains such as medical imaging that contain volumetric imaging data and require expert knowledge. Exploiting a larger pool of labeled data available across multiple centers, as in federated learning, has also seen limited success, since current deep learning approaches do not generalize well to images acquired with scanners from different manufacturers. We aim to address these problems in a common, learning-based image simulation framework which we refer to as Federated Simulation. We introduce a physics-driven generative approach that consists of two learnable neural modules: 1) a module that synthesizes 3D cardiac shapes along with their materials, and 2) a CT simulator that renders these into realistic 3D CT volumes, with annotations. Since the model of geometry and material is disentangled from the imaging sensor, it can effectively be trained across multiple medical centers. We show that our data synthesis framework improves downstream segmentation performance on several datasets. Project page: https://nv-tlabs.github.io/fed-sim/. MICCAI 2020 (early accept).
- Published
- 2020
34. Partially Conditioned Generative Adversarial Networks
- Author
-
Ibarrola, Francisco J., Ravikumar, Nishant, and Frangi, Alejandro F.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, 68T07, Statistics - Machine Learning, Machine Learning (stat.ML), Machine Learning (cs.LG)
- Abstract
Generative models are undoubtedly a hot topic in artificial intelligence, the most common type being the Generative Adversarial Network (GAN). These architectures let one synthesise artificial datasets by implicitly modelling the underlying probability distribution of a real-world training dataset. With the introduction of Conditional GANs and their variants, these methods were extended to generating samples conditioned on ancillary information available for each sample within the dataset. From a practical standpoint, however, one might desire to generate data conditioned on partial information; that is, only a subset of the ancillary conditioning variables might be of interest when synthesising data. In this work, we argue that standard Conditional GANs are not suitable for such a task and propose a new adversarial network architecture and training strategy to deal with the ensuing problems. Experiments illustrating the value of the proposed approach in digit and face image synthesis under partial conditioning information are presented, showing that the proposed method can effectively outperform the standard approach under these circumstances.
- Published
- 2020
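One common way to feed partial conditioning information to a generator is to replace unspecified conditioning variables with a mask token and append an availability flag per variable, so the network can tell "unknown" apart from a genuine zero. This is an assumed construction for illustration, not necessarily the architecture proposed in entry 34:

```python
def partial_condition(values, specified, mask_token=0.0):
    # Build a generator input from partially specified conditions:
    # unspecified entries become mask_token, and a 0/1 indicator per
    # variable records which conditions were actually given.
    cond = [v if s else mask_token for v, s in zip(values, specified)]
    flags = [1.0 if s else 0.0 for s in specified]
    return cond + flags

# Condition only on the first of two ancillary variables.
vec = partial_condition([3.0, 7.0], [True, False])
```

The paper argues that standard Conditional GANs handle this situation poorly, which is exactly the regime where the indicator channel (or a dedicated architecture, as proposed there) becomes necessary.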
35. A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging
- Author
-
Xiong, Zhaohan, Xia, Qing, Hu, Zhiqiang, Huang, Ning, Bian, Cheng, Zheng, Yefeng, Vesal, Sulaiman, Ravikumar, Nishant, Maier, Andreas, Yang, Xin, Heng, Pheng-Ann, Ni, Dong, Li, Caizi, Tong, Qianqian, Si, Weixin, Puybareau, Elodie, Khoudli, Younes, Geraud, Thierry, Chen, Chen, Bai, Wenjia, Rueckert, Daniel, Xu, Lingchao, Zhuang, Xiahai, Luo, Xinzhe, Jia, Shuman, Sermesant, Maxime, Liu, Yashu, Wang, Kuanquan, Borra, Davide, Masci, Alessandro, Corsi, Cristiana, de Vente, Coen, Veta, Mitko, Karim, Rashed, Preetha, Chandrakanth Jayachandran, Engelhardt, Sandy, Qiao, Menyun, Wang, Yuanyuan, Tao, Qian, Nunez-Garcia, Marta, Camara, Oscar, Savioli, Nicolo, Lamata, Pablo, and Zhao, Jichao
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Statistics - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Machine Learning (stat.ML), Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
Segmentation of cardiac images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) widely used for visualizing diseased cardiac structures, is a crucial first step for clinical diagnosis and treatment. However, direct segmentation of LGE-MRIs is challenging due to their attenuated contrast. Since most clinical studies have relied on manual and labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the "2018 Left Atrium Segmentation Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup and hyper-parameter analyses, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state-of-the-art. In particular, our analysis demonstrated that double, sequentially used CNNs, in which a first CNN performs automatic region-of-interest localization and a second performs refined regional segmentation, achieved far superior results to traditional methods and pipelines containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for cardiac LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field.
- Published
- 2020
36. COPD Classification in CT Images Using a 3D Convolutional Neural Network
- Author
-
Ahmed, Jalil, Vesal, Sulaiman, Durlak, Felix, Kaergel, Rainer, Ravikumar, Nishant, Remy-Jardin, Martine, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,respiratory tract diseases - Abstract
Chronic obstructive pulmonary disease (COPD) is a lung disease that is not fully reversible and one of the leading causes of morbidity and mortality in the world. Early detection and diagnosis of COPD can increase the survival rate and reduce the risk of COPD progression in patients. Currently, the primary examination tool to diagnose COPD is spirometry. However, computed tomography (CT) is used for detecting symptoms and sub-type classification of COPD. Using different imaging modalities is a difficult and tedious task even for physicians and is subject to inter- and intra-observer variation. Hence, developing methods that can automatically classify COPD versus healthy patients is of great interest. In this paper, we propose a 3D deep learning approach to classify COPD and emphysema using volume-wise annotations only. We also demonstrate the impact of transfer learning on the classification of emphysema using knowledge transfer from a pre-trained COPD classification model.
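As context for the spirometry baseline mentioned in the abstract, a sketch of the commonly quoted GOLD-style rule (FEV1/FVC below 0.7 indicates airflow obstruction, with severity graded by FEV1 as a percentage of the predicted value). The thresholds here are general textbook values, not taken from the paper:

```python
def copd_stage(fev1, fvc, fev1_predicted):
    """Rough GOLD-style staging from spirometry values (litres).
    Obstruction if the FEV1/FVC ratio is below 0.7; severity is then
    graded by FEV1 as a fraction of the predicted value."""
    if fev1 / fvc >= 0.7:
        return "no obstruction"
    pct = 100.0 * fev1 / fev1_predicted
    if pct >= 80:
        return "GOLD 1 (mild)"
    if pct >= 50:
        return "GOLD 2 (moderate)"
    if pct >= 30:
        return "GOLD 3 (severe)"
    return "GOLD 4 (very severe)"

# 1.8/3.0 = 0.6 -> obstruction; 1.8/3.2 = 56% of predicted -> moderate
print(copd_stage(fev1=1.8, fvc=3.0, fev1_predicted=3.2))
```

The paper's point is precisely that such scalar rules miss the regional, image-level signs that a 3D CNN can pick up from CT.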
- Published
- 2020
37. Analyzing an Imitation Learning Network for Fundus Image Registration Using a Divide-and-Conquer Approach
- Author
-
Bayer, Siming, Zhong, Xia, Fu, Weilin, Ravikumar, Nishant, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Machine Learning (cs.LG) - Abstract
Comparison of microvascular circulation on fundoscopic images is a non-invasive clinical indication for the diagnosis and monitoring of diseases, such as diabetes and hypertension. The differences between intra-patient images can be assessed quantitatively by registering serial acquisitions. Due to the variability of the images (i.e. contrast, luminosity) and the anatomical changes of the retina, the registration of fundus images remains a challenging task. Recently, several deep learning approaches have been proposed to register fundus images in an end-to-end fashion, achieving remarkable results. However, the results are difficult to interpret and analyze. In this work, we propose an imitation learning framework for the registration of 2D color funduscopic images for a wide range of applications such as disease monitoring, image stitching and super-resolution. We follow a divide-and-conquer approach to improve the interpretability of the proposed network, and analyze both the influence of the input image and the hyperparameters on the registration result. The results show that the proposed registration network reduces the initial target registration error by up to 95%.
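The reported error reduction is measured as target registration error (TRE), the mean distance between corresponding landmarks. A minimal sketch with invented landmark coordinates, chosen so the reduction works out to the paper's 95% figure:

```python
import math

def mean_tre(moving_pts, fixed_pts):
    """Mean target registration error: average Euclidean distance
    between corresponding landmark pairs."""
    return sum(math.dist(p, q) for p, q in zip(moving_pts, fixed_pts)) / len(moving_pts)

# hypothetical retinal landmarks (pixels); not data from the study
fixed  = [(10.0, 10.0), (40.0, 12.0), (25.0, 30.0)]
before = [(16.0, 10.0), (40.0, 20.0), (25.0, 40.0)]   # initial misalignment
after  = [(10.3, 10.0), (40.0, 12.4), (25.0, 30.5)]   # after registration
reduction = 1.0 - mean_tre(after, fixed) / mean_tre(before, fixed)
print(f"TRE reduced by {100 * reduction:.0f}%")  # TRE reduced by 95%
```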
- Published
- 2019
38. Coronary Artery Plaque Characterization from CCTA Scans using Deep Learning and Radiomics
- Author
-
Denzinger, Felix, Wels, Michael, Ravikumar, Nishant, Breininger, Katharina, Reidelshöfer, Anika, Eckert, Joachim, Sühling, Michael, Schmermund, Axel, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Statistics - Machine Learning ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Machine Learning (stat.ML) ,Electrical Engineering and Systems Science - Image and Video Processing ,Machine Learning (cs.LG) - Abstract
Assessing coronary artery plaque segments in coronary CT angiography scans is an important task to improve patient management and clinical outcomes, as it can help to decide whether invasive investigation and treatment are necessary. In this work, we present three machine learning approaches capable of performing this task. The first approach is based on radiomics, where a plaque segmentation is used to calculate various shape-, intensity- and texture-based features under different image transformations. A second approach is based on deep learning and relies on centerline extraction as its sole prerequisite. In the third approach, we fuse the deep learning approach with radiomic features. On our data the methods reached similar scores to simulated fractional flow reserve (FFR) measurements, which, in contrast to our methods, require an exact segmentation of the whole coronary tree and often time-consuming manual interaction. In the literature, the performance of simulated FFR reaches an AUC between 0.79 and 0.93 for predicting an abnormal invasive FFR that demands revascularization. The radiomics approach achieves an AUC of 0.86, the deep learning approach 0.84 and the combined method 0.88 for predicting the revascularization decision directly. While all three proposed methods can be computed within seconds, the FFR simulation typically takes several minutes. Provided representative training data in sufficient quantities, we believe that the presented methods can be used to create systems for fully automatic non-invasive risk assessment for a variety of adverse cardiac events., International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2019
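The AUC values quoted above can be computed without plotting any curve, via the rank-sum identity: the AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. The scores below are hypothetical, not from the study:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the rank-sum identity: fraction of positive/negative
    pairs in which the positive case scores higher (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical revascularization-decision scores
pos = [0.9, 0.8, 0.75, 0.3]   # segments that needed revascularization
neg = [0.6, 0.4, 0.2, 0.1]    # segments that did not
print(round(auc(pos, neg), 3))  # 0.875
```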
- Published
- 2019
39. The Pitfalls of Using Open Data to Develop Deep Learning Solutions for COVID-19 Detection in Chest X-Rays.
- Author
-
Harkness, Rachael, Hall, Geoff, Frangi, Alejandro F., Ravikumar, Nishant, and Zucker, Kieran
- Abstract
Since the emergence of COVID-19, deep learning models have been developed to identify COVID-19 from chest X-rays. With little to no direct access to hospital data, the AI community relies heavily on public data comprising numerous data sources. Model performance results have been exceptional when training and testing on open-source data, surpassing the reported capabilities of AI in pneumonia detection prior to the COVID-19 outbreak. In this study, impactful models are trained on widely used open-source data and tested on an external test set and a hospital dataset, for the task of classifying chest X-rays into one of three classes: COVID-19, non-COVID pneumonia and no-pneumonia. Classification performance of the models investigated is evaluated through ROC curves, confusion matrices and standard classification metrics. Explainability modules are implemented to explore the image features most important to classification. Data analysis and model evaluations show that the popular open-source dataset COVIDx is not representative of the real clinical problem and that results from testing on it are inflated. Dependence on open-source data can leave models vulnerable to bias and confounding variables, requiring careful analysis to develop clinically useful/viable AI tools for COVID-19 detection in chest X-rays. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. A Divide-and-Conquer Approach towards Understanding Deep Networks
- Author
-
Fu, Weilin, Breininger, Katharina, Schaffert, Roman, Ravikumar, Nishant, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,Machine Learning (cs.LG) - Abstract
Deep neural networks have achieved tremendous success in various fields including medical image segmentation. However, they have long been criticized for being black boxes, in that interpreting, understanding and correcting architectures is difficult as there is no general theory for deep neural network design. Previously, precision learning was proposed to fuse deep architectures and traditional approaches. Deep networks constructed in this way benefit from the original known operator, have fewer parameters, and improved interpretability. However, they do not yield state-of-the-art performance in all applications. In this paper, we propose to analyze deep networks using known operators, by adopting a divide-and-conquer strategy to replace network components, whilst retaining network performance. The task of retinal vessel segmentation is investigated for this purpose. We start with a high-performance U-Net and show by step-by-step conversion that we are able to divide the network into modules of known operators. The results indicate that a combination of a trainable guided filter and a trainable version of the Frangi filter yields a performance at the level of U-Net (AUC 0.974 vs. 0.972) with a tremendous reduction in parameters (111,536 vs. 9,575). In addition, the trained layers can be mapped back into their original algorithmic interpretation and analyzed using standard tools of signal processing., This paper is accepted in MICCAI 2019
- Published
- 2019
41. Determination of Forming Limits in Sheet Metal Forming Using Deep Learning
- Author
-
Jaremenko, Christian, Ravikumar, Nishant, Affronti, Emanuela, Merklein, Marion, and Maier, Andreas
- Subjects
lcsh:QH201-278.5 ,lcsh:T ,pattern recognition ,Technische Fakultät ,sheet metal forming ,deep learning ,lcsh:Technology ,Article ,machine learning ,forming limit curve ,lcsh:TA1-2040 ,lcsh:Descriptive and experimental mechanics ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,lcsh:Engineering (General). Civil engineering (General) ,lcsh:Microscopy ,lcsh:TK1-9971 ,ddc:600 ,lcsh:QC120-168.85 - Abstract
The forming limit curve (FLC) is used to model the onset of sheet metal instability during forming processes, e.g. in the area of finite element analysis, and is usually determined by evaluating strain distributions derived with optical measurement systems during Nakajima tests. Current methods comprise the standardized DIN EN ISO 12004-2 or time-dependent approaches that heuristically limit the evaluation area to a fraction of the available information and show weaknesses in the context of brittle materials without a pronounced necking phase. To address these limitations, supervised and unsupervised pattern recognition methods were introduced recently. However, these approaches are still dependent on prior knowledge, time, and localization information. This study overcomes these limitations by adopting a Siamese convolutional neural network (CNN) as a feature extractor. Suitable features are automatically learned using the extreme cases of the homogeneous and inhomogeneous forming phases in a supervised setup. Using robust Student's t mixture models, the learned features are clustered into three distributions in an unsupervised manner that cover the complete forming process. Due to the location and time independence of the method, the knowledge learned from specimens formed up until fracture can be transferred onto other forming processes that were prematurely stopped and assessed using metallographic examinations, enabling probabilistic cluster membership assignments for each frame of the forming sequence. The generalization of the method to unseen materials is evaluated in multiple experiments, and additionally tested on an aluminum alloy AA5182, which is characterized by Portevin-Le Chatelier effects.
- Published
- 2019
42. Maximum Likelihood Estimation of Head Motion using Epipolar Consistency
- Author
-
Preuhs, Alexander, Ravikumar, Nishant, Manhart, Michael, Stimpel, Bernhard, Hoppe, Elisabeth, Syben, Christopher, Kowarschik, Markus, and Maier, Andreas
- Subjects
FOS: Physical sciences ,Medical Physics (physics.med-ph) ,Physics - Medical Physics - Abstract
Open gantry C-arm systems placed within the interventional room enable 3-D imaging and guidance for stroke therapy without patient transfer. This can result in drastically reduced time-to-therapy; however, due to the interventional setting, the data acquisition is comparatively slow. Thus, involuntary patient motion needs to be estimated and compensated to achieve high image quality. Patient motion results in a misalignment of the geometry and the acquired image data. Consistency measures can be used to restore the correct mapping and compensate for the motion. They describe constraints on an idealized imaging process, which makes them also sensitive to beam hardening, scatter, truncation or overexposure. We propose a probabilistic approach based on the Student's t-distribution to model image artifacts that affect the consistency measure but do not originate from motion., Bilderverarbeitung fuer die Medizin (BVM) 2019
- Published
- 2018
43. The Pitfalls of Using Open Data to Develop Deep Learning Solutions for COVID-19 Detection in Chest X-Rays.
- Author
-
Harkness, Rachael, Hall, Geoff, Frangi, Alejandro F., Ravikumar, Nishant, and Zucker, Kieran
- Subjects
DATA science ,DEEP learning ,HIGH performance computing ,X-rays ,COVID-19 ,CHEST X rays ,MATHEMATICAL models ,RESPIRATORY infections ,CONFERENCES & conventions ,DIAGNOSTIC imaging ,THEORY ,POLYMERASE chain reaction ,HOSPITAL radiological services - Abstract
Since the emergence of COVID-19, deep learning models have been developed to identify COVID-19 from chest X-rays. With little to no direct access to hospital data, the AI community relies heavily on public data comprising numerous data sources. Model performance results have been exceptional when training and testing on open-source data, surpassing the reported capabilities of AI in pneumonia detection prior to the COVID-19 outbreak. In this study, impactful models are trained on widely used open-source data and tested on an external test set and a hospital dataset, for the task of classifying chest X-rays into one of three classes: COVID-19, non-COVID pneumonia and no-pneumonia. Classification performance of the models investigated is evaluated through ROC curves, confusion matrices and standard classification metrics. Explainability modules are implemented to explore the image features most important to classification. Data analysis and model evaluations show that the popular open-source dataset COVIDx is not representative of the real clinical problem and that results from testing on it are inflated. Dependence on open-source data can leave models vulnerable to bias and confounding variables, requiring careful analysis to develop clinically useful/viable AI tools for COVID-19 detection in chest X-rays. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimization for Multi-Modal Cardiac Image Segmentation.
- Author
-
Vesal, Sulaiman, Gu, Mingxuan, Kosti, Ronak, Maier, Andreas, and Ravikumar, Nishant
- Subjects
IMAGE segmentation ,CARDIAC imaging ,ENTROPY ,DATA distribution ,DEEP learning ,MAGNETIC resonance imaging - Abstract
Deep learning models are sensitive to domain shift phenomena. A model trained on images from one domain cannot generalise well when tested on images from a different domain, despite capturing similar anatomical structures. It is mainly because the data distribution between the two domains is different. Moreover, creating annotation for every new modality is a tedious and time-consuming task, which also suffers from high inter- and intra-observer variability. Unsupervised domain adaptation (UDA) methods intend to reduce the gap between source and target domains by leveraging source domain labelled data to generate labels for the target domain. However, current state-of-the-art (SOTA) UDA methods demonstrate degraded performance when there is insufficient data in source and target domains. In this paper, we present a novel UDA method for multi-modal cardiac image segmentation. The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces. The paper introduces an end-to-end framework that integrates: a) entropy minimization, b) output feature space alignment and c) a novel point-cloud shape adaptation based on the latent features learned by the segmentation model. We validated our method on two cardiac datasets by adapting from the annotated source domain, bSSFP-MRI (balanced Steady-State Free Precession MRI), to the unannotated target domain, LGE-MRI (late gadolinium enhanced MRI), for the multi-sequence dataset; and from MRI (source) to CT (target) for the cross-modality dataset. The results highlighted that by enforcing adversarial learning in different parts of the network, the proposed method delivered promising performance, compared to other SOTA methods. [ABSTRACT FROM AUTHOR]
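Of the three adaptation ingredients listed in the abstract, entropy minimization is the simplest to sketch: it penalizes uncertain (high-entropy) class distributions predicted on unlabelled target-domain images. A toy per-pixel illustration with made-up logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a per-pixel class distribution; minimizing it
    pushes target-domain predictions towards confident outputs."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = softmax([5.0, 0.0, 0.0])   # sharp prediction -> low entropy
uncertain = softmax([1.0, 1.0, 1.0])   # uniform prediction -> max entropy (ln 3)
print(entropy(confident) < entropy(uncertain))  # True
```

In the full method this entropy term is one loss among several, combined with adversarial alignment of output features and point-cloud shapes.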
- Published
- 2021
- Full Text
- View/download PDF
45. Spatio-Temporal Multi-Task Learning for Cardiac MRI Left Ventricle Quantification.
- Author
-
Vesal, Sulaiman, Gu, Mingxuan, Maier, Andreas, and Ravikumar, Nishant
- Subjects
CARDIAC magnetic resonance imaging ,DEEP learning ,CARDIOVASCULAR disease diagnosis ,HEART ventricles ,HEART beat ,PREDICATE calculus - Abstract
Quantitative assessment of cardiac left ventricle (LV) morphology is essential to assess cardiac function and improve the diagnosis of different cardiovascular diseases. In current clinical practice, LV quantification depends on the measurement of myocardial shape indices, which is usually achieved by manual contouring of the endo- and epicardial borders. However, this process is subject to inter- and intra-observer variability, and it is a time-consuming and tedious task. In this article, we propose a spatio-temporal multi-task learning approach to obtain a complete set of measurements quantifying cardiac LV morphology and regional wall thickness (RWT), and additionally to detect the cardiac phase cycle (systole and diastole), for a given 3D Cine-magnetic resonance (MR) image sequence. We first segment cardiac LVs using an encoder-decoder network and then introduce a multitask framework to regress 11 LV indices and classify the cardiac phase, as parallel tasks during model optimization. The proposed deep learning model is based on 3D spatio-temporal convolutions, which extract spatial and temporal features from MR images. We demonstrate the efficacy of the proposed method using cine-MR sequences of 145 subjects and comparing the performance with other state-of-the-art quantification methods. The proposed method obtained high prediction accuracy, with an average mean absolute error (MAE) of 129 mm$^2$, 1.23 mm and 1.76 mm, and Pearson correlation coefficients (PCC) of 96.4%, 87.2% and 97.5%, for the LV and myocardium (Myo) cavity regions, the 6 RWTs and the 3 LV dimensions respectively, and an error rate of 9.0% for phase classification. The experimental results highlight the robustness of the proposed method, despite varying degrees of cardiac morphology, image appearance, and low contrast in the cardiac MR sequences. [ABSTRACT FROM AUTHOR]
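The MAE and PCC figures above are standard regression metrics; a minimal sketch on invented wall-thickness values (not the paper's data):

```python
import math

def mae(pred, true):
    """Mean absolute error between predictions and reference values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical regional wall thickness predictions vs. manual reference (mm)
pred = [8.1, 9.7, 11.2, 7.9, 10.4]
ref  = [8.0, 10.0, 11.0, 8.2, 10.1]
print(round(mae(pred, ref), 3), round(pearson(pred, ref), 3))
```

MAE captures absolute deviation in the index's own units, while PCC captures how well the predictions track the reference across subjects; the paper reports both for each group of indices.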
- Published
- 2021
- Full Text
- View/download PDF
46. Contributors
- Author
-
Acosta, Oscar, Arridge, Simon, Barateau, Anais, Barbano, Riccardo, Bert, Julien, Burgos, Ninon, Chen, Hu, Chen, Kevin T., Chen, Xiaoran, Cheng, Li, Choi, Jae Hyuk, Chourak, Hilda, Curran, Walter J., Jr., de Crevoisier, Renaud, Deshpande, Srijay, Dewey, Blake E., Dowling, Jason, Drobnjak, Ivana, Ehrhardt, Jan, Eschweiler, Dennis, Frangi, Alejandro F., Graham, Mark, Greer, Peter, Han, Shuo, He, Yufan, Iglesias, Juan Eugenio, Jenkinson, Mark, Jin, Bangti, Ke, Wenchi, Kervrann, Charles, Konukoglu, Ender, Kovacheva, Violeta, Ladefoged, Claes N., Laube, Ina, Lei, Yang, Leo, Andrea, Li, Bowen, Li, Huiqi, Liu, Tian, Liu, Yihao, Mancini, Matteo, Minhas, Fayyaz, Nečasová, Tereza, Nie, Dong, Nunes, Jean-Claude, O'Connor, Laura, Oksuz, Ilkay, Olin, Anders B., Prince, Jerry L., Qiu, Richard L.J., Rajpoot, Nasir, Raniga, Parnesh, Ravikumar, Nishant, Remedios, Samuel W., Ruusuvuori, Pekka, Sarrut, David, Stegmaier, Johannes, Svoboda, David, Tanno, Ryutaro, Tsaftaris, Sotirios A., Ulman, Vladimír, Valvano, Gabriele, Wang, Tonghe, Wen, Xuyun, Wiesner, David, Wilms, Matthias, Xia, Yan, Yang, Xiaofeng, Zaharchuk, Greg, Zhang, Hui, Zhang, Yi, Zhao, Can, Zhao, He, and Zuo, Lianrui
- Published
- 2022
- Full Text
- View/download PDF
47. Action Learning for 3D Point Cloud Based Organ Segmentation
- Author
-
Zhong, Xia, Amrehn, Mario, Ravikumar, Nishant, Chen, Shuqing, Strobel, Norbert, Birkhold, Annette, Kowarschik, Markus, Fahrig, Rebecca, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition - Abstract
We propose a novel point cloud based 3D organ segmentation pipeline utilizing deep Q-learning. In order to preserve shape properties, the learning process is guided using a statistical shape model. The trained agent directly predicts piece-wise linear transformations for all vertices in each iteration. This mapping between the observed image features and the ideal transformation for object outline estimation is learned during training. To this end, we introduce aperture features that extract gray values by sampling the 3D volume within the cone centered around the associated vertex and its normal vector. Our approach is also capable of estimating a hierarchical pyramid of non-rigid deformations for multi-resolution meshes. In the application phase, we use a marginal approach to gradually estimate affine as well as non-rigid transformations. We performed extensive evaluations to highlight the robust performance of our approach on a variety of challenge data as well as clinical data. Our method has a run time ranging from 0.3 to 2.7 seconds to segment each organ. We further show that the proposed method can be applied to different organs, X-ray based modalities, and scanning protocols without the need for transfer learning. As we learn actions, even unseen reference meshes can be processed, as demonstrated in an example with the Visible Human. From this we conclude that our method is robust, and we believe that it can be successfully applied to many more applications, in particular, in the interventional imaging space.
- Published
- 2018
48. Comparative Analysis of Unsupervised Algorithms for Breast MRI Lesion Segmentation
- Author
-
Vesal, Sulaiman, Ravikumar, Nishant, Ellman, Stephan, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Accurate segmentation of breast lesions is a crucial step in evaluating the characteristics of tumors. However, this is a challenging task, since breast lesions have sophisticated shape, topological structure, and variation in the intensity distribution. In this paper, we evaluated the performance of three unsupervised algorithms for the task of breast magnetic resonance imaging (MRI) lesion segmentation, namely, Gaussian Mixture Model clustering, K-means clustering and a marker-controlled Watershed transformation based method. All methods were applied on breast MRI slices following selection of regions of interest (ROIs) by an expert radiologist and evaluated on 106 subjects' images, which include 59 malignant and 47 benign lesions. Segmentation accuracy was evaluated by comparing our results with ground truth masks, using the Dice similarity coefficient (DSC), Jaccard index (JI), Hausdorff distance and precision-recall metrics. The results indicate that the marker-controlled Watershed transformation outperformed all other algorithms investigated., 6 pages, submitted to Bildverarbeitung in der Medizin 2018
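Of the three algorithms compared, K-means is the easiest to sketch. A toy 1-D version on made-up pixel intensities (real breast MRI segmentation operates on 2-D/3-D data with more careful initialization):

```python
def kmeans_1d(values, k, iters=20):
    """Plain two-step k-means on scalar intensities: assign each value to
    the nearest centre, then move each centre to the mean of its cluster.
    Centres are initialized evenly across the intensity range."""
    lo, hi = min(values), max(values)
    centres = [lo + i * (hi - lo) / max(1, k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        # keep a centre in place if its cluster emptied out
        centres = [sum(c) / len(c) if c else centres[i] for i, c in enumerate(clusters)]
    return centres

# toy "intensities": dark background vs. bright lesion pixels
pixels = [10, 12, 11, 13, 9, 200, 210, 205, 198]
print([round(c) for c in kmeans_1d(pixels, 2)])  # [11, 203]
```

Assigning each pixel to its nearest final centre yields a crude intensity-based segmentation, which is why such methods struggle with the heterogeneous lesions described above.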
- Published
- 2018
49. Semi-Automatic Algorithm for Breast MRI Lesion Segmentation Using Marker-Controlled Watershed Transformation
- Author
-
Vesal, Sulaiman, Diaz-Pinto, Andres, Ravikumar, Nishant, Ellmann, Stephan, Davari, Amirabbas, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Magnetic resonance imaging (MRI) is an effective imaging modality for identifying and localizing breast lesions in women. Accurate and precise lesion segmentation using a computer-aided-diagnosis (CAD) system, is a crucial step in evaluating tumor volume and in the quantification of tumor characteristics. However, this is a challenging task, since breast lesions have sophisticated shape, topological structure, and high variance in their intensity distribution across patients. In this paper, we propose a novel marker-controlled watershed transformation-based approach, which uses the brightest pixels in a region of interest (determined by experts) as markers to overcome this challenge, and accurately segment lesions in breast MRI. The proposed approach was evaluated on 106 lesions, which includes 64 malignant and 42 benign cases. Segmentation results were quantified by comparison with ground truth labels, using the Dice similarity coefficient (DSC) and Jaccard index (JI) metrics. The proposed method achieved an average Dice coefficient of 0.7808 ± 0.1729 and Jaccard index of 0.6704 ± 0.2167. These results illustrate that the proposed method shows promise for future work related to the segmentation and classification of benign and malignant breast lesions.
- Published
- 2017
50. Frangi-Net: A Neural Network Approach to Vessel Segmentation
- Author
-
Fu, Weilin, Breininger, Katharina, Würfl, Tobias, Ravikumar, Nishant, Schaffert, Roman, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition - Abstract
In this paper, we reformulate the conventional 2-D Frangi vesselness measure into a pre-weighted neural network ("Frangi-Net"), and illustrate that the Frangi-Net is equivalent to the original Frangi filter. Furthermore, we show that, as a neural network, Frangi-Net is trainable. We evaluate the proposed method on a set of 45 high resolution fundus images. After fine-tuning, we observe both qualitative and quantitative improvements in the segmentation quality compared to the original Frangi measure, with an increase of up to 17% in F1 score.
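The Frangi vesselness measure that the network reformulates can be written per pixel from the eigenvalues of the image Hessian. A sketch of the classical 2-D filter (not the trainable network) with the usual beta = 0.5 default; the value of c is an assumed scale parameter:

```python
import math

def vesselness_2d(hxx, hxy, hyy, beta=0.5, c=15.0):
    """Classical 2-D Frangi vesselness from the Hessian entries at one pixel.
    Eigenvalues of the symmetric 2x2 Hessian in closed form; a bright tubular
    structure has |l1| small and l2 strongly negative."""
    tmp = math.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    l1, l2 = (hxx + hyy + tmp) / 2, (hxx + hyy - tmp) / 2
    if abs(l1) > abs(l2):
        l1, l2 = l2, l1               # order so that |l1| <= |l2|
    if l2 >= 0:
        return 0.0                    # not a bright-on-dark structure
    rb = abs(l1) / abs(l2)            # blobness: 0 for lines, 1 for blobs
    s = math.sqrt(l1 ** 2 + l2 ** 2)  # second-order structureness
    return math.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - math.exp(-s ** 2 / (2 * c ** 2)))

# a line-like Hessian (one strong negative curvature) scores above a blob-like one
print(vesselness_2d(0.0, 0.0, -40.0) > vesselness_2d(-40.0, 0.0, -40.0))  # True
```

Frangi-Net's insight is that each of these steps (Gaussian derivatives, eigenvalue combination, exponentials) maps onto network layers, making the filter's weights trainable.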
- Published
- 2017