27 results for "Vesal, Sulaiman"
Search Results
2. Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): an international, paired, non-inferiority, confirmatory study
- Author
-
Saha, Anindo, Bosma, Joeran S., Twilt, Jasper J., van Ginneken, Bram, Noordman, Constant R., Slootweg, Ivan, Roest, Christian, Fransen, Stefan J., Sunoqrot, Mohammed R.S., Bathen, Tone F., Rouw, Dennis, Immerzeel, Jos, Geerdink, Jeroen, van Run, Chris, Groeneveld, Miriam, Meakin, James, Karagöz, Ahmet, Bône, Alexandre, Routier, Alexandre, Marcoux, Arnaud, Abi-Nader, Clément, Li, Cynthia Xinran, Feng, Dagan, Alis, Deniz, Karaarslan, Ercan, Ahn, Euijoon, Nicolas, François, Sonn, Geoffrey A., Bhattacharya, Indrani, Kim, Jinman, Shi, Jun, Jahanandish, Hassan, An, Hong, Kan, Hongyu, Oksuz, Ilkay, Qiao, Liang, Rohé, Marc-Michel, Yergin, Mert, Khadra, Mohamed, Şeker, Mustafa E., Kartal, Mustafa S., Debs, Noëlie, Fan, Richard E., Saunders, Sara, Soerensen, Simon J.C., Moroianu, Stefania, Vesal, Sulaiman, Yuan, Yuan, Malakoti-Fard, Afsoun, Mačiūnien, Agnė, Kawashima, Akira, de Sousa Machadov, Ana M.M. de M.G., Moreira, Ana Sofia L., Ponsiglione, Andrea, Rappaport, Annelies, Stanzione, Arnaldo, Ciuvasovas, Arturas, Turkbey, Baris, de Keyzer, Bart, Pedersen, Bodil G., Eijlers, Bram, Chen, Christine, Riccardo, Ciabattoni, Courrech Staal, Ewout F.W., Jäderling, Fredrik, Langkilde, Fredrik, Aringhieri, Giacomo, Brembilla, Giorgio, Son, Hannah, Vanderlelij, Hans, Raat, Henricus P.J., Pikūnienė, Ingrida, Macova, Iva, Schoots, Ivo, Caglic, Iztok, Zawaideh, Jeries P., Wallström, Jonas, Bittencourt, Leonardo K., Khurram, Misbah, Choi, Moon H., Takahashi, Naoki, Tan, Nelly, Franco, Paolo N., Gutierrez, Patricia A., Thimansson, Per Erik, Hanus, Pieter, Puech, Philippe, Rau, Philipp R., de Visschere, Pieter, Guillaume, Ramette, Cuocolo, Renato, Falcão, Ricardo O., van Stiphout, Rogier S.A., Girometti, Rossano, Briediene, Ruta, Grigienė, Rūta, Gitau, Samuel, Withey, Samuel, Ghai, Sangeet, Penzkofer, Tobias, Barrett, Tristan, Tammisetti, Varaha S., Løgager, Vibeke B., Černý, Vladimír, Venderink, Wulphert, Law, Yan M., Lee, Young J., Bjartell, Anders, Padhani, Anwar R., Bonekamp, David, 
Villeirs, Geert, Salomon, Georg, Giannarini, Gianluca, Kalpathy-Cramer, Jayashree, Barentsz, Jelle, Maier-Hein, Klaus H., Rusu, Mirabela, Obuchowski, Nancy A., Rouvière, Olivier, van den Bergh, Roderick, Panebianco, Valeria, Kasivisvanathan, Veeru, Yakar, Derya, Elschot, Mattijs, Veltman, Jeroen, Fütterer, Jurgen J., de Rooij, Maarten, and Huisman, Henkjan
- Published
- 2024
- Full Text
- View/download PDF
3. RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate
- Author
-
Shao, Wei, Vesal, Sulaiman, Soerensen, Simon J.C., Bhattacharya, Indrani, Golestani, Negar, Yamashita, Rikiya, Kunder, Christian A., Fan, Richard E., Ghanouni, Pejman, Brooks, James D., Sonn, Geoffrey A., and Rusu, Mirabela
- Published
- 2024
- Full Text
- View/download PDF
4. Prediction and Mapping of Intraprostatic Tumor Extent with Artificial Intelligence
- Author
-
Priester, Alan, Fan, Richard E., Shubert, Joshua, Rusu, Mirabela, Vesal, Sulaiman, Shao, Wei, Khandwala, Yash Samir, Marks, Leonard S., Natarajan, Shyam, and Sonn, Geoffrey A.
- Published
- 2023
- Full Text
- View/download PDF
5. The Association of Tissue Change and Treatment Success During High-intensity Focused Ultrasound Focal Therapy for Prostate Cancer
- Author
-
Khandwala, Yash S., Soerensen, Simon John Christoph, Morisetty, Shravan, Ghanouni, Pejman, Fan, Richard E., Vesal, Sulaiman, Rusu, Mirabela, and Sonn, Geoffrey A.
- Published
- 2023
- Full Text
- View/download PDF
6. Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study
- Author
-
Vesal, Sulaiman, Gayo, Iani, Bhattacharya, Indrani, Natarajan, Shyam, Marks, Leonard S., Barratt, Dean C, Fan, Richard E., Hu, Yipeng, Sonn, Geoffrey A., and Rusu, Mirabela
- Published
- 2022
- Full Text
- View/download PDF
7. An Effective and Fast Hybrid Framework for Color Image Retrieval
- Author
-
Walia, Ekta, Vesal, Sulaiman, and Pal, Aman
- Published
- 2014
- Full Text
- View/download PDF
8. A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging
- Author
-
Xiong, Zhaohan, Xia, Qing, Hu, Zhiqiang, Huang, Ning, Bian, Cheng, Zheng, Yefeng, Vesal, Sulaiman, Ravikumar, Nishant, Maier, Andreas, Yang, Xin, Heng, Pheng-Ann, Ni, Dong, Li, Caizi, Tong, Qianqian, Si, Weixin, Puybareau, Elodie, Khoudli, Younes, Geraud, Thierry, Chen, Chen, Bai, Wenjia, Rueckert, Daniel, Xu, Lingchao, Zhuang, Xiahai, Luo, Xinzhe, Jia, Shuman, Sermesant, Maxime, Liu, Yashu, Wang, Kuanquan, Borra, Davide, Masci, Alessandro, Corsi, Cristiana, de Vente, Coen, Veta, Mitko, Karim, Rashed, Preetha, Chandrakanth Jayachandran, Engelhardt, Sandy, Qiao, Menyun, Wang, Yuanyuan, Tao, Qian, Nunez-Garcia, Marta, Camara, Oscar, Savioli, Nicolo, Lamata, Pablo, and Zhao, Jichao
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Statistics - Machine Learning ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Machine Learning (stat.ML) ,Electrical Engineering and Systems Science - Image and Video Processing ,Machine Learning (cs.LG)
- Abstract
Segmentation of cardiac images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI), which is widely used for visualizing diseased cardiac structures, is a crucial first step for clinical diagnosis and treatment. However, direct segmentation of LGE-MRIs is challenging due to their attenuated contrast. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the "2018 Left Atrium Segmentation Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup analysis and hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that two sequentially used CNNs, in which a first CNN performs automatic region-of-interest localization and a second CNN performs refined regional segmentation, achieved far better results than traditional methods and pipelines containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for cardiac LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field.
- Published
- 2020
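The Dice score used to rank submissions in the abstract above can be computed directly from a pair of binary masks. A minimal NumPy sketch (the toy arrays and the small smoothing epsilon are illustrative):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with a small epsilon for empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D example: two 16-pixel squares overlapping in a 9-pixel region,
# so Dice = 2 * 9 / (16 + 16) = 0.5625.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(round(dice_score(a, b), 4))  # 0.5625
```

The same formula applies unchanged to 3D volumes, since the sums run over all voxels regardless of array rank.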
9. COPD Classification in CT Images Using a 3D Convolutional Neural Network
- Author
-
Ahmed, Jalil, Vesal, Sulaiman, Durlak, Felix, Kaergel, Rainer, Ravikumar, Nishant, Remy-Jardin, Martine, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Image and Video Processing (eess.IV) ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing ,respiratory tract diseases
- Abstract
Chronic obstructive pulmonary disease (COPD) is a lung disease that is not fully reversible and one of the leading causes of morbidity and mortality in the world. Early detection and diagnosis of COPD can increase the survival rate and reduce the risk of COPD progression in patients. Currently, the primary examination tool to diagnose COPD is spirometry. However, computed tomography (CT) is used for detecting symptoms and sub-type classification of COPD. Using different imaging modalities is a difficult and tedious task even for physicians and is subject to inter- and intra-observer variations. Hence, developing methods that can automatically classify COPD versus healthy patients is of great interest. In this paper, we propose a 3D deep learning approach to classify COPD and emphysema using volume-wise annotations only. We also demonstrate the impact of transfer learning on the classification of emphysema using knowledge transfer from a pre-trained COPD classification model.
- Published
- 2020
10. Deep Learning-based Denoising of Mammographic Images using Physics-driven Data Augmentation
- Author
-
Eckert, Dominik, Vesal, Sulaiman, Ritschl, Ludwig, Kappler, Steffen, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science::Computer Vision and Pattern Recognition ,Image and Video Processing (eess.IV) ,Physics::Medical Physics ,FOS: Electrical engineering, electronic engineering, information engineering ,Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Mammography uses low-energy X-rays to screen the human breast and is relied on by radiologists to detect breast cancer. Typically, radiologists require a mammogram with impeccable image quality for an accurate diagnosis. In this study, we propose a deep learning method based on Convolutional Neural Networks (CNNs) for mammogram denoising to improve the image quality. We first enhance the noise level and employ the Anscombe Transformation (AT) to transform Poisson noise to white Gaussian noise. With this data augmentation, a deep residual network is trained to learn the noise map of the noisy images. We show that the proposed method can remove not only simulated but also real noise. Furthermore, we compare our results with state-of-the-art denoising methods, such as BM3D and DnCNN. In an early investigation, we achieved qualitatively better mammogram denoising results. Accepted at BVM 2020.
- Published
- 2019
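The Anscombe Transformation mentioned in the abstract above stabilizes Poisson noise to approximately unit-variance white Gaussian noise via A(x) = 2·sqrt(x + 3/8). A minimal NumPy sketch (the photon-count level and sample size are illustrative, and the simple algebraic inverse shown differs slightly from the exact unbiased inverse used in practice):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: Poisson -> approx. N(mu, 1)."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (the exact unbiased inverse is slightly different)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
lam = 50.0                                  # assumed mean photon count per pixel
counts = rng.poisson(lam, size=100_000)     # simulated Poisson "pixel" values
stabilized = anscombe(counts)
# After the transform the noise variance is approximately 1, independent of lam,
# which is what lets a Gaussian-noise denoiser handle Poisson-corrupted images.
print(round(stabilized.var(), 2))
```

Denoising then happens in the stabilized domain, followed by the inverse transform back to photon counts.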
11. PD50-01 AI VS. UROLOGISTS: A COMPARATIVE ANALYSIS FOR PROSTATE CANCER DETECTION ON TRANSRECTAL B-MODE ULTRASOUND.
- Author
-
Vesal, Sulaiman, Bhattacharya, Indrani, Jahanandish, Hassan, Choi, Moonhyung, Zhou, Steve Ran, Kornberg, Zachary, Sommer, Elijah Richard, Fan, Richard E., Rusu, Mirabela, and Sonn, Geoffrey A.
- Subjects
ENDORECTAL ultrasonography ,PROSTATE cancer ,EARLY detection of cancer ,UROLOGISTS ,ARTIFICIAL intelligence
- Published
- 2024
- Full Text
- View/download PDF
12. PD27-03 A DEEP LEARNING MODEL FOR AUTOMATED PROSTATE CANCER DETECTION ON MICRO-ULTRASOUND.
- Author
-
Zhou, Steve R., Zhang, Lichun, Choi, Moon Hyung, Vesal, Sulaiman, Fan, Richard E., Sonn, Geoffrey, and Rusu, Mirabela
- Subjects
DEEP learning ,PROSTATE cancer ,EARLY detection of cancer ,MAGNETIC resonance imaging
- Published
- 2024
- Full Text
- View/download PDF
13. MP31-18 ARTIFICIAL INTELLIGENCE-ASSISTED PROSTATE CANCER DETECTION ON B-MODE TRANSRECTAL ULTRASOUND IMAGES.
- Author
-
Bhattacharya, Indrani, Vesal, Sulaiman, Jahanandish, Hassan, Choi, Moonhyung, Zhou, Steve, Kornberg, Zachary, Sommer, Elijah Richard, Fan, Richard E., Brooks, James D., Rusu, Mirabela, and Sonn, Geoffrey A.
- Subjects
ENDORECTAL ultrasonography ,ULTRASONIC imaging ,EARLY detection of cancer ,ARTIFICIAL neural networks ,PROSTATE cancer ,CONVOLUTIONAL neural networks
- Published
- 2024
- Full Text
- View/download PDF
14. MP19-17 INTEGRATING MR AND ULTRASOUND IMAGES FOR AI-BASED PROSTATE CANCER DETECTION IN TRANSRECTAL ULTRASOUND IMAGES: A COMPARATIVE ASSESSMENT WITH CLINICIANS.
- Author
-
Jahanandish, Hassan, Vesal, Sulaiman, Bhattacharya, Indrani, Kornberg, Zachary, Zhou, Steve Ran, Sommer, Elijah Richard, Choi, Moon Hyung, Fan, Richard E., Rusu, Mirabela, and Sonn, Geoffrey A.
- Subjects
ENDORECTAL ultrasonography ,ULTRASONIC imaging ,MAGNETIC resonance imaging ,ARTIFICIAL intelligence ,EARLY detection of cancer
- Published
- 2024
- Full Text
- View/download PDF
15. A review of artificial intelligence in prostate cancer detection on imaging.
- Author
-
Bhattacharya, Indrani, Khandwala, Yash S., Vesal, Sulaiman, Shao, Wei, Yang, Qianye, Soerensen, Simon J.C., Fan, Richard E., Ghanouni, Pejman, Kunder, Christian A., Brooks, James D., Hu, Yipeng, Rusu, Mirabela, and Sonn, Geoffrey A.
- Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. A multi-stage fully convolutional network for cardiac MRI segmentation
- Author
-
Vesal, Sulaiman, Maier, Andreas, and Ravikumar, Nishant
- Published
- 2019
- Full Text
- View/download PDF
17. Dilated Residual U-NET for Multi-organ Segmentation in Thoracic CT
- Author
-
Vesal, Sulaiman, Ravikumar, Nishant, and Maier, Andreas
- Published
- 2019
- Full Text
- View/download PDF
18. Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimization for Multi-Modal Cardiac Image Segmentation.
- Author
-
Vesal, Sulaiman, Gu, Mingxuan, Kosti, Ronak, Maier, Andreas, and Ravikumar, Nishant
- Subjects
- *IMAGE segmentation , *CARDIAC imaging , *ENTROPY , *DATA distribution , *DEEP learning , *MAGNETIC resonance imaging
- Abstract
Deep learning models are sensitive to domain shift. A model trained on images from one domain cannot generalise well when tested on images from a different domain, despite capturing similar anatomical structures, mainly because the data distribution between the two domains is different. Moreover, creating annotations for every new modality is a tedious and time-consuming task, which also suffers from high inter- and intra-observer variability. Unsupervised domain adaptation (UDA) methods aim to reduce the gap between source and target domains by leveraging source-domain labelled data to generate labels for the target domain. However, current state-of-the-art (SOTA) UDA methods demonstrate degraded performance when there is insufficient data in the source and target domains. In this paper, we present a novel UDA method for multi-modal cardiac image segmentation. The proposed method is based on adversarial learning and adapts network features between source and target domains in different spaces. The paper introduces an end-to-end framework that integrates: a) entropy minimization, b) output feature-space alignment, and c) a novel point-cloud shape adaptation based on the latent features learned by the segmentation model. We validated our method on two cardiac datasets by adapting from the annotated source domain, bSSFP-MRI (balanced Steady-State Free Precession MRI), to the unannotated target domain, LGE-MRI (late gadolinium-enhanced MRI), for the multi-sequence dataset; and from MRI (source) to CT (target) for the cross-modality dataset. The results highlight that by enforcing adversarial learning in different parts of the network, the proposed method delivers promising performance compared to other SOTA methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
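The entropy-minimization term named in the abstract above penalizes uncertain predictions on unlabelled target-domain images, pushing the segmentation network toward confident outputs. A minimal NumPy sketch of such a loss (array shapes and names are illustrative, not the paper's implementation):

```python
import numpy as np

def entropy_loss(probs, eps=1e-12):
    """Mean per-pixel Shannon entropy of softmax outputs (class channel last).

    Minimizing this on target-domain predictions encourages the model to
    commit to a class at every pixel, even without target labels."""
    h = -np.sum(probs * np.log(probs + eps), axis=-1)
    return h.mean()

# Confident predictions give low entropy; uniform predictions give the maximum.
confident = np.array([[0.99, 0.01], [0.98, 0.02]])  # two "pixels", two classes
uniform = np.full((2, 2), 0.5)
print(entropy_loss(confident) < entropy_loss(uniform))  # True
```

In the adversarial setup the paper describes, a discriminator would additionally operate on such entropy (and feature) maps; the loss above is only the minimization component.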
19. Spatio-Temporal Multi-Task Learning for Cardiac MRI Left Ventricle Quantification.
- Author
-
Vesal, Sulaiman, Gu, Mingxuan, Maier, Andreas, and Ravikumar, Nishant
- Subjects
CARDIAC magnetic resonance imaging ,DEEP learning ,CARDIOVASCULAR disease diagnosis ,HEART ventricles ,HEART beat ,PREDICATE calculus
- Abstract
Quantitative assessment of cardiac left ventricle (LV) morphology is essential to assess cardiac function and improve the diagnosis of different cardiovascular diseases. In current clinical practice, LV quantification depends on the measurement of myocardial shape indices, which is usually achieved by manual contouring of the endo- and epicardial borders. However, this process is subject to inter- and intra-observer variability, and it is a time-consuming and tedious task. In this article, we propose a spatio-temporal multi-task learning approach to obtain a complete set of measurements quantifying cardiac LV morphology and regional wall thickness (RWT), while additionally detecting the cardiac phase (systole and diastole) for a given 3D cine magnetic resonance (MR) image sequence. We first segment the cardiac LV using an encoder-decoder network and then introduce a multi-task framework to regress 11 LV indices and classify the cardiac phase as parallel tasks during model optimization. The proposed deep learning model is based on 3D spatio-temporal convolutions, which extract spatial and temporal features from MR images. We demonstrate the efficacy of the proposed method using cine-MR sequences of 145 subjects and compare its performance with other state-of-the-art quantification methods. The proposed method obtained high prediction accuracy, with average mean absolute errors (MAE) of 129 mm², 1.23 mm, and 1.76 mm, and Pearson correlation coefficients (PCC) of 96.4%, 87.2%, and 97.5% for the LV and myocardium (Myo) cavity regions, the 6 RWTs, and the 3 LV dimensions, respectively, as well as an error rate of 9.0% for phase classification. The experimental results highlight the robustness of the proposed method, despite varying degrees of cardiac morphology, image appearance, and low contrast in the cardiac MR sequences. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
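The multi-task objective sketched in the abstract above combines a segmentation loss, a regression loss over the 11 LV indices, and a phase-classification loss optimized in parallel. A minimal NumPy sketch under assumed loss choices (soft Dice, MAE, cross-entropy) and illustrative equal weights, not the paper's exact formulation:

```python
import numpy as np

def multitask_loss(seg_pred, seg_gt, idx_pred, idx_gt, phase_logits, phase_gt,
                   w_seg=1.0, w_reg=1.0, w_cls=1.0):
    """Weighted sum of a soft-Dice segmentation loss, an L1 (MAE) regression
    loss on the LV indices, and a cross-entropy phase-classification loss.
    Loss choices and weights are illustrative assumptions."""
    inter = (seg_pred * seg_gt).sum()
    dice = (2.0 * inter + 1e-8) / (seg_pred.sum() + seg_gt.sum() + 1e-8)
    seg_loss = 1.0 - dice                                  # soft Dice loss
    reg_loss = np.abs(idx_pred - idx_gt).mean()            # MAE over 11 indices
    p = np.exp(phase_logits - phase_logits.max())          # stable softmax
    p /= p.sum()
    cls_loss = -np.log(p[phase_gt] + 1e-12)                # cross-entropy
    return w_seg * seg_loss + w_reg * reg_loss + w_cls * cls_loss

# Perfect predictions drive every term toward zero.
seg = np.ones((4, 4))
idx = np.zeros(11)
logits = np.array([10.0, -10.0])                           # strongly favors phase 0
print(multitask_loss(seg, seg, idx, idx, logits, 0) < 1e-6)  # True
```

In practice each term would be backpropagated through a shared encoder, which is what makes the tasks regularize one another.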
20. Comparative Analysis of Unsupervised Algorithms for Breast MRI Lesion Segmentation
- Author
-
Vesal, Sulaiman, Ravikumar, Nishant, Ellman, Stephan, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition
- Abstract
Accurate segmentation of breast lesions is a crucial step in evaluating the characteristics of tumors. However, this is a challenging task, since breast lesions have sophisticated shapes and topological structures, and vary in intensity distribution. In this paper, we evaluated the performance of three unsupervised algorithms for the task of breast magnetic resonance imaging (MRI) lesion segmentation, namely Gaussian Mixture Model clustering, K-means clustering, and a marker-controlled watershed transformation based method. All methods were applied to breast MRI slices following selection of regions of interest (ROIs) by an expert radiologist and evaluated on 106 subjects' images, which include 59 malignant and 47 benign lesions. Segmentation accuracy was evaluated by comparing our results with ground-truth masks, using the Dice similarity coefficient (DSC), Jaccard index (JI), Hausdorff distance, and precision-recall metrics. The results indicate that the marker-controlled watershed transformation outperformed all other algorithms investigated. 6 pages; submitted to Bildverarbeitung in der Medizin 2018.
- Published
- 2018
21. Semi-Automatic Algorithm for Breast MRI Lesion Segmentation Using Marker-Controlled Watershed Transformation
- Author
-
Vesal, Sulaiman, Diaz-Pinto, Andres, Ravikumar, Nishant, Ellmann, Stephan, Davari, Amirabbas, and Maier, Andreas
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition
- Abstract
Magnetic resonance imaging (MRI) is an effective imaging modality for identifying and localizing breast lesions in women. Accurate and precise lesion segmentation using a computer-aided diagnosis (CAD) system is a crucial step in evaluating tumor volume and in the quantification of tumor characteristics. However, this is a challenging task, since breast lesions have sophisticated shapes, topological structures, and high variance in their intensity distribution across patients. In this paper, we propose a novel marker-controlled watershed transformation-based approach, which uses the brightest pixels in a region of interest (determined by experts) as markers to overcome this challenge and accurately segment lesions in breast MRI. The proposed approach was evaluated on 106 lesions, which include 64 malignant and 42 benign cases. Segmentation results were quantified by comparison with ground-truth labels, using the Dice similarity coefficient (DSC) and Jaccard index (JI) metrics. The proposed method achieved an average Dice coefficient of 0.7808 ± 0.1729 and a Jaccard index of 0.6704 ± 0.2167. These results illustrate that the proposed method shows promise for future work related to the segmentation and classification of benign and malignant breast lesions.
- Published
- 2017
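Marker-controlled watershed, used by the two breast-MRI papers above, floods a topographic surface outward from seed markers so that each marker grows into its own region. A minimal pure-Python sketch with 4-connectivity (a toy stand-in: in practice the flooding is run on a gradient or inverted-intensity image, and the paper seeds on the brightest ROI pixels):

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Minimal marker-controlled watershed by priority flooding (4-connectivity).

    `markers` holds positive integer labels at seed pixels and 0 elsewhere;
    pixels are claimed in order of increasing intensity from the seeds, so
    low-intensity basins fill before high-intensity ridges are crossed."""
    labels = markers.copy()
    heap = []
    for (r, c), lab in np.ndenumerate(markers):
        if lab:
            heapq.heappush(heap, (image[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not labels[nr, nc]):
                labels[nr, nc] = labels[r, c]        # inherit the flooding label
                heapq.heappush(heap, (image[nr, nc], nr, nc))
    return labels

# Two flat basins separated by a bright ridge; one seed per basin.
img = np.array([[0, 0, 5, 0, 0],
                [0, 0, 5, 0, 0],
                [0, 0, 5, 0, 0]])
seeds = np.zeros_like(img)
seeds[1, 0] = 1
seeds[1, 4] = 2
out = marker_watershed(img, seeds)
```

Each basin ends up with its seed's label, with the ridge assigned to whichever flood reaches it first; production implementations (e.g. in image-processing libraries) add watershed lines and arbitrary connectivity on top of this same priority-flood idea.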
22. Fully Automated 3D Cardiac MRI Localisation and Segmentation Using Deep Neural Networks.
- Author
-
Vesal, Sulaiman, Maier, Andreas, and Ravikumar, Nishant
- Subjects
CARDIAC magnetic resonance imaging ,NEURAL circuitry ,MAGNETIC resonance imaging ,CARDIOVASCULAR diseases ,CARDIAC imaging
- Abstract
Cardiac magnetic resonance (CMR) imaging is widely used for morphological assessment and diagnosis of various cardiovascular diseases. Deep learning approaches based on 3D fully convolutional networks (FCNs) have improved state-of-the-art segmentation performance in CMR images. However, previous methods have employed several pre-processing steps and have focused primarily on segmenting low-resolution images. A crucial step in any automatic segmentation approach is to first localize the cardiac structure of interest within the MRI volume, to reduce false positives and computational complexity. In this paper, we propose two strategies for localizing and segmenting the heart ventricles and myocardium, termed multi-stage and end-to-end, using a 3D convolutional neural network. Our method consists of an encoder-decoder network that is first trained to predict a coarse localized density map of the target structure at a low resolution. Subsequently, a second, similar network employs this coarse density map to crop the image at a higher resolution and, consequently, segment the target structure. For the latter, the same two-stage architecture is trained end-to-end. The 3D U-Net with some architectural changes (referred to as 3D DR-UNet) was used as the base architecture in this framework for both the multi-stage and end-to-end strategies. Moreover, we investigate whether the incorporation of coarse features improves the segmentation. We evaluate the two proposed segmentation strategies on two cardiac MRI datasets, namely the Automatic Cardiac Segmentation Challenge (ACDC) STACOM 2017 and the Left Atrium Segmentation Challenge (LASC) STACOM 2018. Extensive experiments and comparisons with other state-of-the-art methods indicate that the proposed multi-stage framework consistently outperforms the rest in terms of several segmentation metrics. 
The experimental results highlight the robustness of the proposed approach, and its ability to generate accurate high-resolution segmentations, despite the presence of varying degrees of pathology-induced changes to cardiac morphology and image appearance, low contrast, and noise in the CMR volumes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
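The coarse-to-fine hand-off described in the abstract above, where a first network's low-resolution density map determines the region the second network segments at full resolution, can be sketched as a bounding-box crop. A minimal NumPy sketch (function name, threshold, and margin are illustrative assumptions, not the paper's values):

```python
import numpy as np

def crop_from_coarse_map(volume, coarse_map, threshold=0.5, margin=2):
    """Stage 1 -> Stage 2 hand-off: threshold a coarse localisation map,
    take the bounding box of the detected region (plus a safety margin),
    and crop the full-resolution volume for refined segmentation."""
    mask = coarse_map >= threshold
    if not mask.any():
        return volume  # nothing localised; fall back to the full volume
    bounds = []
    for axis in range(volume.ndim):
        other = tuple(a for a in range(volume.ndim) if a != axis)
        hit = np.where(mask.any(axis=other))[0]   # occupied indices on this axis
        lo = max(int(hit[0]) - margin, 0)
        hi = min(int(hit[-1]) + margin + 1, volume.shape[axis])
        bounds.append(slice(lo, hi))
    return volume[tuple(bounds)]

# A 10^3 volume with a small detected blob: the crop shrinks the input
# the second network must process.
vol = np.arange(1000, dtype=float).reshape(10, 10, 10)
cm = np.zeros((10, 10, 10))
cm[4:6, 4:6, 4:6] = 1.0
print(crop_from_coarse_map(vol, cm).shape)  # (6, 6, 6)
```

This is what reduces false positives and compute in the second stage: the refinement network never sees voxels far from the localized structure.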
23. Implementation of machine learning into clinical breast MRI: Potential for objective and accurate decision-making in suspicious breast masses.
- Author
-
Ellmann, Stephan, Wenkel, Evelyn, Dietzel, Matthias, Bielowski, Christian, Vesal, Sulaiman, Maier, Andreas, Hammon, Matthias, Janka, Rolf, Fasching, Peter A., Beckmann, Matthias W., Schulz Wendtland, Rüdiger, Uder, Michael, and Bäuerle, Tobias
- Subjects
MACHINE learning ,INTRACLASS correlation ,SUPPORT vector machines ,KERNEL functions ,WEB-based user interfaces
- Abstract
We investigated whether the integration of machine learning (ML) into MRI interpretation can provide accurate decision rules for the management of suspicious breast masses. A total of 173 consecutive patients with suspicious breast masses upon complementary assessment (BI-RADS IV/V: n = 100/76) received standardized breast MRI prior to histological verification. MRI findings were independently assessed by two observers (R1/R2: 5 years of experience/no experience in breast MRI) using six (semi-)quantitative imaging parameters. Interobserver variability was studied by ICC (intraclass correlation coefficient). A polynomial kernel function support vector machine was trained to differentiate between benign and malignant lesions based on the six imaging parameters and patient age. Ten-fold cross-validation was applied to prevent overfitting. Overall diagnostic accuracy and decision rules (rule-out criteria) to accurately exclude malignancy were evaluated. Results were integrated into a web application and published online. Malignant lesions were present in 107 patients (60.8%). Imaging features showed excellent interobserver variability (ICC: 0.81–0.98) with variable diagnostic accuracy (AUC: 0.65–0.82). Overall performance of the ML algorithm was high (AUC = 90.1%; BI-RADS IV: AUC = 91.6%). The ML algorithm provided decision rules to accurately rule-out malignancy with a false negative rate <1% in 31.3% of the BI-RADS IV cases. Thus, integration of ML into MRI interpretation can provide objective and accurate decision rules for the management of suspicious breast masses, and could help to reduce the number of potentially unnecessary biopsies. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
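A rule-out decision rule of the kind reported above (excluding malignancy with a false-negative rate below 1% in a fraction of cases) can be derived from validation scores by choosing the largest score threshold that keeps the false-negative rate under the target. A minimal NumPy sketch on synthetic data (not the paper's SVM, its data, or its actual rule-derivation procedure):

```python
import numpy as np

def rule_out_threshold(scores, labels, max_fnr=0.01):
    """Largest threshold t such that calling every case with score < t benign
    keeps the false-negative rate among malignant cases (labels == 1) below
    max_fnr. Returns (t, fraction of cases ruled out); t is None if no
    threshold satisfies the constraint."""
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]
    n_mal = labels.sum()
    best_t, best_frac = None, 0.0
    for i in range(len(scores)):
        fn = labels[: i + 1].sum()          # malignant cases below the cut
        if fn / n_mal >= max_fnr:
            break
        best_t = scores[i]
        best_frac = (i + 1) / len(scores)
    return best_t, best_frac

# Synthetic validation set: benign cases score low, malignant cases high.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.uniform(0.0, 0.6, 200),   # 200 benign
                         rng.uniform(0.4, 1.0, 100)])  # 100 malignant
labels = np.concatenate([np.zeros(200, int), np.ones(100, int)])
t, frac = rule_out_threshold(scores, labels)
```

The fraction ruled out is exactly the quantity the abstract reports (31.3% of BI-RADS IV cases in their cohort); here it depends entirely on the synthetic score distributions.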
24. Deep learning based denoising of mammographic x-ray images: an investigation of loss functions and their detail-preserving properties.
- Author
-
Eckert, Dominik, Ritschl, Ludwig, Herbst, Magdalena, Wicklein, Julia, Vesal, Sulaiman, Kappler, Steffen, Maier, Andreas, and Stober, Sebastian
- Published
- 2021
- Full Text
- View/download PDF
25. Cardiac segmentation on late gadolinium enhancement MRI: A benchmark study from multi-sequence cardiac MR segmentation challenge.
- Author
-
Zhuang, Xiahai, Xu, Jiahang, Luo, Xinzhe, Chen, Chen, Ouyang, Cheng, Rueckert, Daniel, Campello, Victor M., Lekadir, Karim, Vesal, Sulaiman, RaviKumar, Nishant, Liu, Yashu, Luo, Gongning, Chen, Jingkun, Li, Hongwei, Ly, Buntheng, Sermesant, Maxime, Roth, Holger, Zhu, Wentao, Wang, Jiexiang, and Ding, Xinghao
- Subjects
- *ARTIFICIAL neural networks , *CARDIAC magnetic resonance imaging , *MAGNETIC resonance imaging , *GADOLINIUM , *MYOCARDIAL infarction
- Abstract
• Present the methodologies and evaluation results for the cardiac segmentation algorithms selected from the submissions to the MS-CMRSeg challenge, in conjunction with MICCAI 2019. • Provide a fair and intuitive comparison between the supervised methods and UDA algorithms for cardiac segmentation. • Provide datasets and evaluation tools for ongoing development of MS-CMR-based cardiac segmentation algorithms. Accurate computing, analysis and modeling of the ventricles and myocardium from medical images are important, especially in the diagnosis and treatment management of patients suffering from myocardial infarction (MI). Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) provides an important protocol to visualize MI. However, compared with the other sequences, LGE CMR images with gold-standard labels are particularly limited. This paper presents selected results from the Multi-Sequence Cardiac MR (MS-CMR) Segmentation challenge, in conjunction with MICCAI 2019. The challenge offered a dataset of paired MS-CMR images, including auxiliary CMR sequences as well as LGE CMR, from 45 patients with cardiomyopathy. The aim was to develop new algorithms, as well as to benchmark existing ones, for LGE CMR segmentation focusing on the myocardial wall of the left ventricle and the blood cavity of the two ventricles. In addition, the paired MS-CMR images could enable algorithms to combine complementary information from the other sequences for the ventricle segmentation of LGE CMR. Nine representative works were selected for evaluation and comparison, among which three are unsupervised domain adaptation (UDA) methods and the other six are supervised. The results showed that the average performance of the nine methods was comparable to the inter-observer variations. In particular, the top-ranking algorithms from both the supervised and UDA methods could generate reliable and robust segmentation results. 
The success of these methods was mainly attributed to the inclusion of the auxiliary sequences from the MS-CMR images, which provide important label information for the training of deep neural networks. The challenge continues as an ongoing resource, and the gold standard segmentation as well as the MS-CMR images of both the training and test data are available upon registration via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mscmrseg/). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging.
- Author
-
Xiong, Zhaohan, Xia, Qing, Hu, Zhiqiang, Huang, Ning, Bian, Cheng, Zheng, Yefeng, Vesal, Sulaiman, Ravikumar, Nishant, Maier, Andreas, Yang, Xin, Heng, Pheng-Ann, Ni, Dong, Li, Caizi, Tong, Qianqian, Si, Weixin, Puybareau, Elodie, Khoudli, Younes, Géraud, Thierry, Chen, Chen, and Bai, Wenjia
- Subjects
- *LEFT heart atrium , *CARDIAC magnetic resonance imaging , *CONVOLUTIONAL neural networks , *IMAGE segmentation , *GADOLINIUM , *FLUOROSCOPY , *ATRIAL fibrillation
- Abstract
• A benchmark study of a global segmentation challenge conducted on the largest atrial LGE-MRI dataset. • Performed rigorous subgroup analysis and hyper-parameter tuning experiments. • U-Net achieved better performance compared to others. • 2D and 3D CNN methods had comparable accuracies. • The double, sequentially used CNNs achieved superior results. Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step for ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup analysis and hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that two sequentially used CNNs, in which a first CNN performs automatic region-of-interest localization and a second CNN performs refined regional segmentation, achieved superior results to traditional methods and machine learning approaches containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field. Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities, having an impact on the wider medical imaging community. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
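The headline metrics in the abstract above (Dice score of 93.2%, mean surface-to-surface distance of 0.7 mm) can be sketched in a few lines. The following is a minimal illustration, not the challenge's official evaluation code; it assumes binary 3D masks and a `spacing` tuple giving voxel size in mm.

```python
import numpy as np
from scipy import ndimage

def dice_score(a, b):
    """Dice overlap: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface-to-surface distance (in mm if spacing is in mm)."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: the mask minus its morphological erosion.
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Distance from each voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    # Average distances in both directions for a symmetric measure.
    return np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]]).mean()
```

Dice rewards volumetric overlap while the surface distance penalizes boundary errors, which is why challenge papers typically report both.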
27. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning.
- Author
-
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, and Heinrich MP
- Subjects
- Humans, Algorithms, Brain diagnostic imaging, Abdomen diagnostic imaging, Image Processing, Computer-Assisted methods, Deep Learning, Abdominal Cavity
- Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
- Published
- 2023
- Full Text
- View/download PDF
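One of the plausibility metrics the Learn2Reg abstract alludes to is based on the Jacobian determinant of the predicted deformation: non-positive determinants mark folding, i.e. physically implausible warps. A minimal 2D sketch, assuming a dense displacement field `disp` of shape `(2, H, W)` in voxel units; this is an illustrative reimplementation, not the challenge's evaluation code.

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Per-pixel determinant of the Jacobian of phi(x) = x + u(x).

    disp: array of shape (2, H, W); disp[0] is displacement along axis 0 (y),
    disp[1] along axis 1 (x). Values <= 0 indicate folding of the grid.
    """
    duy_dy, duy_dx = np.gradient(disp[0])  # gradients along (y, x)
    dux_dy, dux_dx = np.gradient(disp[1])
    # det(I + grad u) for the 2x2 Jacobian at every pixel.
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

def folding_fraction(disp):
    """Fraction of pixels where the deformation folds (det <= 0)."""
    return float(np.mean(jacobian_determinant_2d(disp) <= 0.0))
```

An identity transform (zero displacement) yields a determinant of 1 everywhere, and a uniform 10% dilation yields 1.21; summary statistics of the (log-)determinant complement pure accuracy metrics such as label overlap.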