44 results for "Astaraki, Mehdi"
Search Results
2. Overall survival prediction for high-grade glioma patients using mathematical modeling of tumor cell infiltration
- Author
- Häger, Wille, Toma-Daşu, Iuliana, Astaraki, Mehdi, and Lazzeroni, Marta
- Abstract
Purpose: This study aimed at applying a mathematical framework for the prediction of high-grade glioma (HGG) cell invasion into normal tissues for guiding the clinical target delineation, and at investigating the possibility of using tumor infiltration maps for patient overall survival (OS) prediction. Material & methods: A model describing tumor infiltration into normal tissue was applied to 93 HGG cases. Tumor infiltration maps and corresponding isocontours with different cell densities were produced. ROC curves were used to seek correlations between the patient OS and the volume encompassed by a particular isocontour. Area-Under-the-Curve (AUC) values were used to determine the isocontour having the highest predictive ability. The optimal cut-off volume, having the highest sensitivity and specificity, for each isocontour was used to divide the patients into two groups for a Kaplan-Meier survival analysis. Results: The highest AUC value was obtained for the isocontours of cell densities 1000 cells/mm³ and 2000 cells/mm³, equal to 0.77 (p < 0.05). Correlation with the GTV yielded an AUC of 0.73 (p < 0.05). The Kaplan-Meier survival analysis using the 1000 cells/mm³ isocontour and the ROC optimal cut-off volume for patient group selection rendered a hazard ratio (HR) of 2.7 (p < 0.05), while the GTV rendered an HR of 1.6 (p < 0.05). Conclusion: The simulated tumor cell invasion is a stronger predictor of overall survival than the segmented GTV, indicating the importance of using mathematical models for cell invasion to assist in the definition of the target for HGG patients.
- Published
- 2023
- Full Text
- View/download PDF
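As an illustration of the ROC-based cut-off selection and survival-group comparison described in the record above, a minimal Python sketch is given below. It is not the authors' implementation: the arrays are synthetic placeholders standing in for the 93 HGG cases, scikit-learn and lifelines are assumed to be available, and the Youden index is used as one reasonable way to pick the optimal cut-off volume.

```python
# Illustrative sketch: ROC-optimal volume cut-off, then a hazard ratio for the
# resulting two-group split via a Cox model. All data below are synthetic.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=3.0, sigma=0.5, size=93)   # isocontour volumes (placeholder units)
survival = rng.exponential(scale=15.0, size=93)         # overall survival (months)
event = rng.integers(0, 2, size=93)                     # 1 = death observed

# ROC analysis of the volume against a binary short/long-survivor label.
short_survivor = (survival < np.median(survival)).astype(int)
auc = roc_auc_score(short_survivor, volumes)
fpr, tpr, thresholds = roc_curve(short_survivor, volumes)
cutoff = thresholds[np.argmax(tpr - fpr)]               # Youden-index optimum

# Cox regression on the dichotomized volume yields the hazard ratio for the split.
df = pd.DataFrame({"T": survival, "E": event,
                   "high_volume": (volumes >= cutoff).astype(int)})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
hazard_ratio = float(np.exp(cph.params_["high_volume"]))
print(f"AUC={auc:.2f}, cutoff={cutoff:.1f}, HR={hazard_ratio:.2f}")
```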
3. Overall survival prediction for high-grade glioma patients using mathematical modeling of tumor cell infiltration
- Author
- Häger, Wille, Toma-Dașu, Iuliana, Astaraki, Mehdi, and Lazzeroni, Marta
- Abstract
Purpose: This study aimed at applying a mathematical framework for the prediction of high-grade glioma (HGG) cell invasion into normal tissues for guiding the clinical target delineation, and at investigating the possibility of using tumor infiltration maps for patient overall survival (OS) prediction. Material & methods: A model describing tumor infiltration into normal tissue was applied to 93 HGG cases. Tumor infiltration maps and corresponding isocontours with different cell densities were produced. ROC curves were used to seek correlations between the patient OS and the volume encompassed by a particular isocontour. Area-Under-the-Curve (AUC) values were used to determine the isocontour having the highest predictive ability. The optimal cut-off volume, having the highest sensitivity and specificity, for each isocontour was used to divide the patients into two groups for a Kaplan-Meier survival analysis. Results: The highest AUC value was obtained for the isocontours of cell densities 1000 cells/mm³ and 2000 cells/mm³, equal to 0.77 (p < 0.05). Correlation with the GTV yielded an AUC of 0.73 (p < 0.05). The Kaplan-Meier survival analysis using the 1000 cells/mm³ isocontour and the ROC optimal cut-off volume for patient group selection rendered a hazard ratio (HR) of 2.7 (p < 0.05), while the GTV rendered an HR of 1.6 (p < 0.05). Conclusion: The simulated tumor cell invasion is a stronger predictor of overall survival than the segmented GTV, indicating the importance of using mathematical models for cell invasion to assist in the definition of the target for HGG patients.
- Published
- 2023
- Full Text
- View/download PDF
4. AutoPaint: A Self-Inpainting Method for Unsupervised Anomaly Detection
- Author
- Astaraki, Mehdi, De Benetti, Francesca, Yeganeh, Yousef, Toma-Dasu, Iuliana, Smedby, Örjan, Wang, Chunliang, Navab, Nassir, and Wendler, Thomas
- Abstract
Robust and accurate detection and segmentation of heterogeneous tumors appearing in different anatomical organs with supervised methods require large-scale labeled datasets covering all possible types of diseases. Due to the unavailability of such rich datasets and the high cost of annotations, unsupervised anomaly detection (UAD) methods have been developed, aiming to detect pathologies as deviations from normality by utilizing unlabeled healthy image data. However, developed UAD models are often trained with an incomplete distribution of healthy anatomies and have difficulties in preserving anatomical constraints. This work first proposes a robust inpainting model to learn the details of healthy anatomies and reconstruct high-resolution images while preserving anatomical constraints. Second, we propose an autoinpainting pipeline to automatically detect tumors, replace their appearance with the learned healthy anatomies, and, based on that, segment the tumoral volumes in a purely unsupervised fashion. Three imaging datasets, including PET, CT, and PET-CT scans of lung tumors and head and neck tumors, are studied as benchmarks for evaluation. Experimental results demonstrate the significant superiority of the proposed method over a wide range of state-of-the-art UAD methods. Moreover, the unsupervised method we propose produces comparable results to a robust supervised segmentation method when applied to multimodal images., Comment: 41 pages, 15 figures, follow-up paper to conference abstract at yearly meeting of German Nuclear Medicine in 2022
- Published
- 2023
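A schematic of the self-inpainting idea from the AutoPaint record above, assuming a hypothetical `inpaint_model` callable that fills a masked patch with healthy-looking intensities. The sliding-window residual map and largest-connected-component thresholding are illustrative simplifications, not the authors' pipeline.

```python
import numpy as np
from scipy import ndimage

def autoinpaint_anomaly_map(volume, inpaint_model, patch=32, stride=16, threshold=0.3):
    """Schematic self-inpainting anomaly detection (illustrative, slow, not the paper's code).

    `inpaint_model` is a hypothetical callable taking (masked_volume, mask) and returning
    a volume whose masked region has been filled with healthy-looking intensities.
    Large reconstruction residuals are interpreted as candidate tumor voxels."""
    residual = np.zeros_like(volume, dtype=np.float32)
    counts = np.zeros_like(volume, dtype=np.float32)
    for z in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for x in range(0, volume.shape[2] - patch + 1, stride):
                sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
                mask = np.zeros(volume.shape, dtype=bool)
                mask[sl] = True
                masked = np.where(mask, 0.0, volume)
                healthy = inpaint_model(masked, mask)          # hypothetical model call
                residual[sl] += np.abs(volume[sl] - healthy[sl])
                counts[sl] += 1.0
    anomaly = residual / np.maximum(counts, 1.0)
    # Threshold and keep the largest connected component as the tumor candidate.
    labels, n = ndimage.label(anomaly > threshold)
    if n == 0:
        return anomaly, np.zeros(volume.shape, dtype=bool)
    sizes = ndimage.sum(np.ones_like(labels), labels, index=range(1, n + 1))
    return anomaly, labels == (int(np.argmax(sizes)) + 1)
```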
5. SegRap2023: A Benchmark of Organs-at-Risk and Gross Tumor Volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma
- Author
- Luo, Xiangde, Fu, Jia, Zhong, Yunxin, Liu, Shuolin, Han, Bing, Astaraki, Mehdi, Bendazzoli, Simone, Toma-Dasu, Iuliana, Ye, Yiwen, Chen, Ziyang, Xia, Yong, Su, Yanzhou, Ye, Jin, He, Junjun, Xing, Zhaohu, Wang, Hongqiu, Zhu, Lei, Yang, Kaixiang, Fang, Xin, Wang, Zhiwei, Lee, Chan Woong, Park, Sang Joon, Chun, Jaehee, Ulrich, Constantin, Maier-Hein, Klaus H., Ndipenoch, Nchongmaje, Miron, Alina, Li, Yongmin, Zhang, Yimeng, Chen, Yu, Bai, Lu, Huang, Jinlong, An, Chengyang, Wang, Lisheng, Huang, Kaiwen, Gu, Yunqi, Zhou, Tao, Zhou, Mu, Zhang, Shichuan, Liao, Wenjun, Wang, Guotai, and Zhang, Shaoting
- Abstract
Radiation therapy is a primary and effective NasoPharyngeal Carcinoma (NPC) treatment strategy. The precise delineation of Gross Tumor Volumes (GTVs) and Organs-At-Risk (OARs) is crucial in radiation treatment, directly impacting patient prognosis. Previously, the delineation of GTVs and OARs was performed by experienced radiation oncologists. Recently, deep learning has achieved promising results in many medical image segmentation tasks. However, for NPC OARs and GTVs segmentation, few public datasets are available for model development and evaluation. To alleviate this problem, the SegRap2023 challenge was organized in conjunction with MICCAI2023 and presented a large-scale benchmark for OAR and GTV segmentation with 400 Computed Tomography (CT) scans from 200 NPC patients, each with a pair of pre-aligned non-contrast and contrast-enhanced CT scans. The challenge's goal was to segment 45 OARs and 2 GTVs from the paired CT scans. In this paper, we detail the challenge and analyze the solutions of all participants. The average Dice similarity coefficient scores for all submissions ranged from 76.68% to 86.70%, and 70.42% to 73.44% for OARs and GTVs, respectively. We conclude that the segmentation of large-size OARs is well-addressed, and more efforts are needed for GTVs and small-size or thin-structure OARs. The benchmark will remain publicly available here: https://segrap2023.grand-challenge.org, Comment: A challenge report of SegRap2023 (organized in conjunction with MICCAI2023)
- Published
- 2023
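The Dice similarity coefficient used to score SegRap2023 submissions can be computed as in the sketch below; the exact challenge ranking protocol (aggregation across structures, any additional metrics) is not reproduced here.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks, as a percentage."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 100.0 * (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Per-case scores would typically be averaged over the 45 OARs and 2 GTVs, e.g.:
# mean_oar_dice = np.mean([dice_coefficient(p, r) for p, r in zip(oar_preds, oar_refs)])
```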
6. Fully Automatic Segmentation of Gross Target Volume and Organs-at-Risk for Radiotherapy Planning of Nasopharyngeal Carcinoma
- Author
- Astaraki, Mehdi, Bendazzoli, Simone, and Toma-Dasu, Iuliana
- Abstract
Target segmentation in CT images of the Head & Neck (H&N) region is challenging due to the low contrast between adjacent soft tissues. The SegRap 2023 challenge has been focused on benchmarking segmentation algorithms for Nasopharyngeal Carcinoma (NPC), which would be employed as auto-contouring tools for radiation treatment planning purposes. We propose a fully-automatic framework and develop two models for a) segmentation of 45 Organs at Risk (OARs) and b) segmentation of two Gross Tumor Volumes (GTVs). To this end, we preprocess the image volumes by harmonizing the intensity distributions and then automatically cropping the volumes around the target regions. The preprocessed volumes were employed to train a standard 3D U-Net model for each task, separately. Our method took second place for each of the tasks in the validation phase of the challenge. The proposed framework is available at https://github.com/Astarakee/segrap2023, Comment: 9 pages, 5 figures, 3 tables, MICCAI SegRap challenge contribution
- Published
- 2023
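A minimal sketch of the preprocessing described above (intensity harmonization followed by cropping around the target region). The clip window, normalization, and margin are assumptions for illustration, not the paper's values; the subsequent 3D U-Net training is omitted.

```python
import numpy as np

def harmonize_and_crop(ct: np.ndarray, roi_mask: np.ndarray,
                       clip=(-500.0, 1000.0), margin=16):
    """Illustrative preprocessing: clip and z-score the CT intensities, then crop
    the volume around a coarse target region given by `roi_mask`. Clip window and
    margin are placeholder values."""
    vol = np.clip(ct.astype(np.float32), *clip)
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)              # z-score harmonization
    coords = np.argwhere(roi_mask)                             # coarse target region
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, vol.shape)
    return vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```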
7. Advanced machine learning methods for oncological image analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the p
- Published
- 2022
8. Advanced machine learning methods for oncological image analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the p
- Published
- 2022
9. Advanced machine learning methods for oncological image analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the p
- Published
- 2022
10. CTV Delineation for High-Grade Gliomas : Is There Agreement With Tumor Cell Invasion Models?
- Author
- Häger, Wille, Lazzeroni, Marta, Astaraki, Mehdi, and Toma-Daşu, Iuliana
- Abstract
Purpose: High-grade glioma (HGG) is a common form of malignant primary brain cancer with poor prognosis. The diffusive nature of HGGs implies that tumor cell invasion of normal tissue extends several centimeters away from the visible gross tumor volume (GTV). The standard methodology for clinical target volume (CTV) delineation is to apply a 2- to 3-cm margin around the GTV. However, tumor recurrence is extremely frequent. The purpose of this paper was to introduce a framework and computational model for the prediction of normal tissue HGG cell invasion and to investigate the agreement of the conventional CTV delineation with respect to the predicted tumor invasion. Methods and Materials: A model for HGG cell diffusion and proliferation was implemented and used to assess the tumor invasion patterns for 112 cases of HGGs. Normal brain structures and tissues as well as the GTVs visible on diagnostic images were delineated using automated methods. The volumes encompassed by different tumor cell concentration isolines calculated using the model for invasion were compared with the conventionally delineated CTVs, and the differences were analyzed. The 3-dimensional Hausdorff distance between the CTV and the volumes encompassed by various isolines was also calculated. Results: In 50% of cases, the CTV failed to encompass regions containing tumor cell concentrations of 614 cells/mm³ or greater. In 84% of cases, the lowest cell concentration completely encompassed by the CTV was ≥1 cell/mm³. In the remaining 16%, the CTV overextended into normal tissue. The Hausdorff distance was on average comparable to the CTV margin. Conclusions: The standard methodology for CTV delineation appears to be inconsistent with HGG invasion patterns in terms of size and shape. Tumor invasion modeling could therefore be useful in assisting in the CTV delineation for HGGs.
- Published
- 2022
- Full Text
- View/download PDF
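Invasion models of this kind are commonly written as a Fisher-KPP type diffusion-proliferation equation. The sketch below shows one explicit finite-difference update of such a model; it is a generic formulation under simplifying assumptions (isotropic diffusion, periodic boundaries), not necessarily the exact model or solver used in the paper.

```python
import numpy as np

def fisher_kpp_step(c, D, rho, dt, dx, c_max=1.0):
    """One explicit finite-difference step of a Fisher-KPP type diffusion-proliferation
    model: dc/dt = D * laplacian(c) + rho * c * (1 - c / c_max).
    Generic illustration only; in practice D typically differs between white and grey
    matter, and the boundary handling here (periodic, via np.roll) is a simplification."""
    lap = (
        np.roll(c, 1, 0) + np.roll(c, -1, 0) +
        np.roll(c, 1, 1) + np.roll(c, -1, 1) +
        np.roll(c, 1, 2) + np.roll(c, -1, 2) - 6.0 * c
    ) / dx**2
    return c + dt * (D * lap + rho * c * (1.0 - c / c_max))

# Isocontours such as "614 cells/mm^3" would then be extracted as level sets of the
# simulated cell-density map after a given number of steps.
```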
11. Advanced Machine Learning Methods for Oncological Image Analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the pr, Cancer is a global health challenge estimated to be responsible for around 10 million deaths worldwide in 2020 alone. Advances in medical image acquisition and hardware development over the past three decades have paved the way for modern medical imaging systems whose resolution makes it possible to capture information about the anatomy, physiology, function, and metabolism of tumors. Medical image analysis has therefore taken on a more significant role in clinicians' daily routines in oncology, including screening, diagnostics, treatment follow-up, and non-invasive evaluation of disease prognosis. Healthcare's need for medical images has led to an enormous volume of medical images now being stored at all modern hospitals. Given the important role medical image data plays in today's healthcare, and the amount of manual work required to analyze the data generated every day, the development of digital tools to automatically or semi-automatically analyze the image data has always attracted great interest. A range of machine learning tools has therefore been developed for the analysis of oncological data, to take on clinicians' repetitive everyday tasks. This thesis aims to contribute to the field of oncological image analysis by proposing new ways to quantify tumor characteristics from medical image data.
Specifically, this thesis is based on six articles, of which the first two focus on presenting new methods for tumor segmentation, while the remaining four aim to develop quantitative biomarkers for cancer diagnosis and prognosis. The main aim of Study I has been to develop a deep learning pipeline that captures the appearance of lung pathologies (including lung tumors) and to integrate it with deep neural networks for segmentation, using the output of the first network to improve segmentation quality. The proposed pipeline was tested on several datasets, and numerical analyses show a
- Published
- 2022
12. Prior-aware autoencoders for lung pathology segmentation
- Author
- Astaraki, Mehdi, Smedby, Örjan, and Wang, Chunliang
- Abstract
Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneities in size, shape, location, and texture, on one side, and their visual similarity with respect to surrounding tissues, on the other side, make it challenging to perform reliable automatic lesion segmentation. To leverage segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model to learn the distribution of healthy lung regions and reconstruct pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions that represent prior information regarding the shape and location of pathologies are then integrated into a segmentation network to guide the attention of the model toward more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, including pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesions, on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all the cases with significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding the lung pathologies, and integrating such knowledge into a prior segmentation network leads to more accurate delineations.
- Published
- 2022
- Full Text
- View/download PDF
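The prior-guidance idea above can be sketched in PyTorch as follows: a frozen normal-appearance autoencoder reconstructs a pathology-free image, and the reconstruction residual is appended as an extra input channel to a segmentation backbone. `autoencoder` and `segmentation_net` are hypothetical modules; the actual integration in the paper may differ.

```python
import torch
import torch.nn as nn

class PriorGuidedSegmenter(nn.Module):
    """Schematic of prior guidance: a pretrained, frozen normal-appearance autoencoder
    reconstructs a pathology-free image; the absolute residual is concatenated with the
    input as a rough prior before a 2-channel segmentation backbone."""

    def __init__(self, autoencoder: nn.Module, segmentation_net: nn.Module):
        super().__init__()
        self.autoencoder = autoencoder.eval()
        for p in self.autoencoder.parameters():
            p.requires_grad_(False)
        self.segmentation_net = segmentation_net   # assumed to accept 2-channel input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            healthy = self.autoencoder(x)          # pathology-free reconstruction
        prior = (x - healthy).abs()                # rough prior: where pathology may be
        return self.segmentation_net(torch.cat([x, prior], dim=1))
```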
13. Advanced Machine Learning Methods for Oncological Image Analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the pr, Cancer is a global health challenge estimated to be responsible for around 10 million deaths worldwide in 2020 alone. Advances in medical image acquisition and hardware development over the past three decades have paved the way for modern medical imaging systems whose resolution makes it possible to capture information about the anatomy, physiology, function, and metabolism of tumors. Medical image analysis has therefore taken on a more significant role in clinicians' daily routines in oncology, including screening, diagnostics, treatment follow-up, and non-invasive evaluation of disease prognosis. Healthcare's need for medical images has led to an enormous volume of medical images now being stored at all modern hospitals. Given the important role medical image data plays in today's healthcare, and the amount of manual work required to analyze the data generated every day, the development of digital tools to automatically or semi-automatically analyze the image data has always attracted great interest. A range of machine learning tools has therefore been developed for the analysis of oncological data, to take on clinicians' repetitive everyday tasks. This thesis aims to contribute to the field of oncological image analysis by proposing new ways to quantify tumor characteristics from medical image data.
Specifically, this thesis is based on six articles, of which the first two focus on presenting new methods for tumor segmentation, while the remaining four aim to develop quantitative biomarkers for cancer diagnosis and prognosis. The main aim of Study I has been to develop a deep learning pipeline that captures the appearance of lung pathologies (including lung tumors) and to integrate it with deep neural networks for segmentation, using the output of the first network to improve segmentation quality. The proposed pipeline was tested on several datasets, and numerical analyses show a
- Published
- 2022
14. Spherical Convolutional Neural Networks for Survival Rate Prediction in Cancer Patients
- Author
- Sinzinger, Fabian, Astaraki, Mehdi, Smedby, Örjan, and Moreno, Rodrigo
- Abstract
Objective: Survival Rate Prediction (SRP) is a valuable tool to assist in the clinical diagnosis and treatment planning of lung cancer patients. In recent years, deep learning (DL) based methods have shown great potential in medical image processing in general and SRP in particular. This study proposes a fully-automated method for SRP from computed tomography (CT) images, which combines an automatic segmentation of the tumor and a DL-based method for extracting rotational-invariant features. Methods: In the first stage, the tumor is segmented from the CT image of the lungs. Here, we use a deep-learning-based method that entails a variational autoencoder to provide more information to a U-Net segmentation model. Next, the 3D volumetric image of the tumor is projected onto 2D spherical maps. These spherical maps serve as inputs for a spherical convolutional neural network that approximates the log risk for a generalized Cox proportional hazard model. Results: The proposed method is compared with 17 baseline methods that combine different feature sets and prediction models using three publicly-available datasets: Lung1 (n=422), Lung3 (n=89), and H&N1 (n=136). We observed C-index scores comparable to the best-performing baseline methods in a 5-fold cross-validation on Lung1 (0.59 ± 0.03 vs. 0.62 ± 0.04). In comparison, it slightly outperforms all methods in inter-dataset evaluation (0.64 vs. 0.63). The best-performing method from the first experiment reduced its performance to 0.61 and 0.62 for Lung3 and H&N1, respectively. Discussion: The experiments suggest that the performance of spherical features is comparable with previous approaches, but they generalize better when applied to unseen datasets. That might imply that orientation-independent shape features are relevant for SRP. The performance of the proposed method was very similar, using manual and automatic segmentation methods. This makes the proposed model useful in cases where expert annotations a
- Published
- 2022
- Full Text
- View/download PDF
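Training a network to approximate the log risk of a Cox proportional hazards model, as described above, is typically done with the negative Cox partial log-likelihood as the loss. The PyTorch sketch below shows one common Breslow-style formulation; the spherical convolution layers themselves are not shown, and the exact loss used in the paper may differ.

```python
import torch

def cox_partial_nll(log_risk: torch.Tensor, time: torch.Tensor,
                    event: torch.Tensor) -> torch.Tensor:
    """Negative Cox partial log-likelihood (Breslow handling of ties).
    log_risk: network output, shape [N]; time: follow-up times, shape [N];
    event: 1 where death was observed, 0 where censored."""
    order = torch.argsort(time, descending=True)    # risk sets become cumulative sums
    log_risk = log_risk[order]
    event = event[order].to(log_risk.dtype)
    log_cumsum = torch.logcumsumexp(log_risk, dim=0)
    return -torch.sum((log_risk - log_cumsum) * event) / event.sum().clamp(min=1.0)
```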
15. CTV delineation for high-grade gliomas : Is there agreement with tumor cell invasion models?
- Author
- Hager, W., Toma-Dasu, I., Lazzeroni, M., and Astaraki, Mehdi
- Published
- 2022
16. CTV Delineation for High-Grade Gliomas : Is There Agreement With Tumor Cell Invasion Models?
- Author
- Häger, Wille, Lazzeroni, Marta, Astaraki, Mehdi, and Toma-Dasu, Iuliana
- Abstract
Purpose: High-grade glioma (HGG) is a common form of malignant primary brain cancer with poor prognosis. The diffusive nature of HGGs implies that tumor cell invasion of normal tissue extends several centimeters away from the visible gross tumor volume (GTV). The standard methodology for clinical target volume (CTV) delineation is to apply a 2- to 3-cm margin around the GTV. However, tumor recurrence is extremely frequent. The purpose of this paper was to introduce a framework and computational model for the prediction of normal tissue HGG cell invasion and to investigate the agreement of the conventional CTV delineation with respect to the predicted tumor invasion. Methods and Materials: A model for HGG cell diffusion and proliferation was implemented and used to assess the tumor invasion patterns for 112 cases of HGGs. Normal brain structures and tissues as well as the GTVs visible on diagnostic images were delineated using automated methods. The volumes encompassed by different tumor cell concentration isolines calculated using the model for invasion were compared with the conventionally delineated CTVs, and the differences were analyzed. The 3-dimensional Hausdorff distance between the CTV and the volumes encompassed by various isolines was also calculated. Results: In 50% of cases, the CTV failed to encompass regions containing tumor cell concentrations of 614 cells/mm³ or greater. In 84% of cases, the lowest cell concentration completely encompassed by the CTV was ≥1 cell/mm³. In the remaining 16%, the CTV overextended into normal tissue. The Hausdorff distance was on average comparable to the CTV margin. Conclusions: The standard methodology for CTV delineation appears to be inconsistent with HGG invasion patterns in terms of size and shape. Tumor invasion modeling could therefore be useful in assisting in the CTV delineation for HGGs.
- Published
- 2022
- Full Text
- View/download PDF
17. Advanced machine learning methods for oncological image analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the p
- Published
- 2022
18. PriorNet: lesion segmentation in PET-CT including prior tumor appearance information
- Author
- Bendazzoli, Simone and Astaraki, Mehdi
- Abstract
Tumor segmentation in PET-CT images is challenging due to the dual nature of the acquired information: low metabolic information in CT and low spatial resolution in PET. The U-Net architecture is the most common and widely recognized approach when developing a fully automatic image segmentation method in the medical field. We proposed a two-step approach, aiming to refine and improve the segmentation performance for tumoral lesions in PET-CT. The first step generates a prior tumor appearance map from the PET-CT volumes, regarded as prior tumor information. The second step, consisting of a standard U-Net, receives the prior tumor appearance map and PET-CT images to generate the lesion mask. We evaluated the method on the 1014 cases available for the AutoPET 2022 challenge, and the results showed an average Dice score of 0.701 on the positive cases.
- Published
- 2022
19. Advanced Machine Learning Methods for Oncological Image Analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the pr, Cancer is a global health challenge estimated to be responsible for around 10 million deaths worldwide in 2020 alone. Advances in medical image acquisition and hardware development over the past three decades have paved the way for modern medical imaging systems whose resolution makes it possible to capture information about the anatomy, physiology, function, and metabolism of tumors. Medical image analysis has therefore taken on a more significant role in clinicians' daily routines in oncology, including screening, diagnostics, treatment follow-up, and non-invasive evaluation of disease prognosis. Healthcare's need for medical images has led to an enormous volume of medical images now being stored at all modern hospitals. Given the important role medical image data plays in today's healthcare, and the amount of manual work required to analyze the data generated every day, the development of digital tools to automatically or semi-automatically analyze the image data has always attracted great interest. A range of machine learning tools has therefore been developed for the analysis of oncological data, to take on clinicians' repetitive everyday tasks. This thesis aims to contribute to the field of oncological image analysis by proposing new ways to quantify tumor characteristics from medical image data.
Specifically, this thesis is based on six articles, of which the first two focus on presenting new methods for tumor segmentation, while the remaining four aim to develop quantitative biomarkers for cancer diagnosis and prognosis. The main aim of Study I has been to develop a deep learning pipeline that captures the appearance of lung pathologies (including lung tumors) and to integrate it with deep neural networks for segmentation, using the output of the first network to improve segmentation quality. The proposed pipeline was tested on several datasets, and numerical analyses show a
- Published
- 2022
20. Advanced Machine Learning Methods for Oncological Image Analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the pr, Cancer is a global health challenge estimated to be responsible for around 10 million deaths worldwide in 2020 alone. Advances in medical image acquisition and hardware development over the past three decades have paved the way for modern medical imaging systems whose resolution makes it possible to capture information about the anatomy, physiology, function, and metabolism of tumors. Medical image analysis has therefore taken on a more significant role in clinicians' daily routines in oncology, including screening, diagnostics, treatment follow-up, and non-invasive evaluation of disease prognosis. Healthcare's need for medical images has led to an enormous volume of medical images now being stored at all modern hospitals. Given the important role medical image data plays in today's healthcare, and the amount of manual work required to analyze the data generated every day, the development of digital tools to automatically or semi-automatically analyze the image data has always attracted great interest. A range of machine learning tools has therefore been developed for the analysis of oncological data, to take on clinicians' repetitive everyday tasks. This thesis aims to contribute to the field of oncological image analysis by proposing new ways to quantify tumor characteristics from medical image data.
Specifically, this thesis is based on six articles, of which the first two focus on presenting new methods for tumor segmentation, while the remaining four aim to develop quantitative biomarkers for cancer diagnosis and prognosis. The main aim of Study I has been to develop a deep learning pipeline that captures the appearance of lung pathologies (including lung tumors) and to integrate it with deep neural networks for segmentation, using the output of the first network to improve segmentation quality. The proposed pipeline was tested on several datasets, and numerical analyses show a
- Published
- 2022
21. Advanced Machine Learning Methods for Oncological Image Analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the pr, Cancer is a global health challenge estimated to be responsible for around 10 million deaths worldwide in 2020 alone. Advances in medical image acquisition and hardware development over the past three decades have paved the way for modern medical imaging systems whose resolution makes it possible to capture information about the anatomy, physiology, function, and metabolism of tumors. Medical image analysis has therefore taken on a more significant role in clinicians' daily routines in oncology, including screening, diagnostics, treatment follow-up, and non-invasive evaluation of disease prognosis. Healthcare's need for medical images has led to an enormous volume of medical images now being stored at all modern hospitals. Given the important role medical image data plays in today's healthcare, and the amount of manual work required to analyze the data generated every day, the development of digital tools to automatically or semi-automatically analyze the image data has always attracted great interest. A range of machine learning tools has therefore been developed for the analysis of oncological data, to take on clinicians' repetitive everyday tasks. This thesis aims to contribute to the field of oncological image analysis by proposing new ways to quantify tumor characteristics from medical image data.
Specifically, this thesis is based on six articles, of which the first two focus on presenting new methods for tumor segmentation, while the remaining four aim to develop quantitative biomarkers for cancer diagnosis and prognosis. The main aim of Study I has been to develop a deep learning pipeline that captures the appearance of lung pathologies (including lung tumors) and to integrate it with deep neural networks for segmentation, using the output of the first network to improve segmentation quality. The proposed pipeline was tested on several datasets, and numerical analyses show a
- Published
- 2022
22. Advanced machine learning methods for oncological image analysis
- Author
- Astaraki, Mehdi
- Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to leverage the segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the p
- Published
- 2022
23. A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images
- Author
- Astaraki, Mehdi, Yang, Guang, Zakko, Yousuf, Toma-Dasu, Iuliana, Smedby, Örjan, and Wang, Chunliang
- Abstract
Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given the access to the same amount of training data. In this study, we try to compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature based radiomics pipelines for pulmonary nodule malignancy prediction on an open database that consists of 1297 manually delineated lung nodules. Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet were employed to identify lung nodule malignancy as well. In addition to the baseline implementations, we also investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region and the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction. Results: The best baseline conventional radiomics model, deep learning model, and deep-feature based radiomics model achieved AUROC values (mean ± standard deviations) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively, through 5-fold cross-validation analyses. However, after trying out several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively. We achieved the best prediction accuracy from the hybrid feature set (AUROC: 0.938 ± 0.010). Conclusion: The end-to-end dee
- Published
- 2021
- Full Text
- View/download PDF
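A minimal sketch of the hybrid-feature experiment described above: handcrafted radiomics features and deep features are concatenated and evaluated with 5-fold cross-validated AUROC. The feature matrices are synthetic placeholders, and the classifier, feature-selection step, and their settings are assumptions for illustration rather than the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical pre-extracted feature matrices for the 1297 nodules.
rng = np.random.default_rng(0)
radiomics_features = rng.normal(size=(1297, 100))   # handcrafted features
deep_features = rng.normal(size=(1297, 256))        # CNN embeddings
labels = rng.integers(0, 2, size=1297)              # 1 = malignant

hybrid = np.concatenate([radiomics_features, deep_features], axis=1)
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=64),                      # feature selection
                      RandomForestClassifier(n_estimators=500,
                                             class_weight="balanced",   # class balancing
                                             random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, hybrid, labels, cv=cv, scoring="roc_auc")
print(f"hybrid-feature AUROC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```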
24. Diffusion model based target definition for high-grade gliomas : robustness analysis
- Author
- Hager, W., Lazzeroni, M., Astaraki, Mehdi, and Toma-Dasu, I.
- Published
- 2021
25. A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images
- Author
- Astaraki, Mehdi, Yang, Guang, Zakko, Yousuf, Toma-Dasu, Iuliana, Smedby, Örjan, and Wang, Chunliang
- Abstract
Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given the access to the same amount of training data. In this study, we try to compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature based radiomics pipelines for pulmonary nodule malignancy prediction on an open database that consists of 1297 manually delineated lung nodules. Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet were employed to identify lung nodule malignancy as well. In addition to the baseline implementations, we also investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region and the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction. Results: The best baseline conventional radiomics model, deep learning model, and deep-feature based radiomics model achieved AUROC values (mean ± standard deviations) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively through 5-fold cross-validation analyses. However, after trying out several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively. We achieved the best prediction accuracy from the hybrid feature set (AUROC: 0.938 ± 0.010). Conclusion: The end-to-end deep-learni
- Published
- 2021
- Full Text
- View/download PDF
26. Benign-malignant pulmonary nodule classification in low-dose CT with convolutional features
- Author
- Astaraki, Mehdi, Zakko, Yousuf, Toma-Dasu, Iuliana, Smedby, Örjan, and Wang, Chunliang
- Abstract
Purpose: Low-Dose Computed Tomography (LDCT) is the most common imaging modality for lung cancer diagnosis. The presence of nodules in the scans does not necessarily portend lung cancer, as there is an intricate relationship between nodule characteristics and lung cancer. Therefore, benign-malignant pulmonary nodule classification at early detection is a crucial step to improve diagnosis and prolong patient survival. The aim of this study is to propose a method for predicting nodule malignancy based on deep abstract features. Methods: To efficiently capture both intra-nodule heterogeneities and contextual information of the pulmonary nodules, a dual pathway model was developed to integrate the intra-nodule characteristics with contextual attributes. The proposed approach was implemented with both supervised and unsupervised learning schemes. A random forest model was added as a second component on top of the networks to generate the classification results. The discrimination power of the model was evaluated by calculating the Area Under the Receiver Operating Characteristic Curve (AUROC) metric. Results: Experiments on 1297 manually segmented nodules show that the integration of context and target supervised deep features has great potential for accurate prediction, resulting in a discrimination power of 0.936 in terms of AUROC, which outperformed the classification performance of the Kaggle 2017 challenge winner. Conclusion: Empirical results demonstrate that integrating nodule target and context images into a unified network improves the discrimination power, outperforming the conventional single pathway convolutional neural networks., QC 20210720
- Published
- 2021
- Full Text
- View/download PDF
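A minimal sketch of the dual-pathway idea in entry 26, assuming PyTorch and scikit-learn: one small 3D encoder for the nodule (target) patch, another for the surrounding context patch, and a random forest trained on the concatenated embeddings. The layer sizes, patch shapes, and random inputs are placeholders rather than the published architecture.

```python
# Hedged sketch: dual-pathway feature extraction (target + context) feeding a
# random forest classifier. Inputs and network sizes are illustrative only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class SmallEncoder(nn.Module):
    """Tiny 3D CNN mapping a patch to a fixed-length feature vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

target_net, context_net = SmallEncoder(), SmallEncoder()

# Placeholder batch: 16 nodules, a tight target patch and a larger context patch each.
target_patches = torch.randn(16, 1, 32, 32, 32)
context_patches = torch.randn(16, 1, 48, 48, 48)
labels = np.tile([0, 1], 8)                        # placeholder benign/malignant labels

with torch.no_grad():                              # encoders used purely as feature extractors here
    feats = torch.cat([target_net(target_patches),
                       context_net(context_patches)], dim=1)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(feats.numpy(), labels)
print("malignancy probabilities:", rf.predict_proba(feats.numpy())[:3, 1])
```

Keeping the classifier separate from the feature extractors mirrors the abstract's description of a random forest added as a second component on top of the networks.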
27. Benign-malignant pulmonary nodule classification in low-dose CT with convolutional features
- Author
-
Astaraki, Mehdi, Zakko, Yousuf, Toma-Dasu, Iuliana, Smedby, Örjan, and Wang, Chunliang
- Abstract
Purpose: Low-Dose Computed Tomography (LDCT) is the most common imaging modality for lung cancer diagnosis. The presence of nodules in the scans does not necessarily portend lung cancer, as there is an intricate relationship between nodule characteristics and lung cancer. Therefore, benign-malignant pulmonary nodule classification at early detection is a crucial step to improve diagnosis and prolong patient survival. The aim of this study is to propose a method for predicting nodule malignancy based on deep abstract features. Methods: To efficiently capture both intra-nodule heterogeneities and contextual information of the pulmonary nodules, a dual pathway model was developed to integrate the intra-nodule characteristics with contextual attributes. The proposed approach was implemented with both supervised and unsupervised learning schemes. A random forest model was added as a second component on top of the networks to generate the classification results. The discrimination power of the model was evaluated by calculating the Area Under the Receiver Operating Characteristic Curve (AUROC) metric. Results: Experiments on 1297 manually segmented nodules show that the integration of context and target supervised deep features has great potential for accurate prediction, resulting in a discrimination power of 0.936 in terms of AUROC, which outperformed the classification performance of the Kaggle 2017 challenge winner. Conclusion: Empirical results demonstrate that integrating nodule target and context images into a unified network improves the discrimination power, outperforming the conventional single pathway convolutional neural networks.
- Published
- 2021
- Full Text
- View/download PDF
28. Multimodal Brain Tumor Segmentation with Normal Appearance Autoencoder
- Author
-
Astaraki, Mehdi, Wang, Chunliang, Carrizo, Gabriel, Toma-Dasu, Iuliana, and Smedby, Örjan
- Abstract
We propose a hybrid segmentation pipeline based on the autoencoders’ capability of anomaly detection. To this end, we first introduce a new augmentation technique to generate synthetic paired images. Taking advantage of the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where the abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model.
- Published
- 2020
- Full Text
- View/download PDF
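A minimal sketch of the normal appearance autoencoder (NAA) step in entry 28, assuming PyTorch: an autoencoder produces a pseudo-healthy reconstruction, and the residual between the input and the reconstruction flags candidate abnormal regions toward which a segmentation network could then be guided. The tiny architecture and random slice below are placeholders, not the published model, and no training is shown.

```python
# Hedged sketch: pseudo-healthy reconstruction + residual map as a candidate
# abnormality region. Architecture and data are illustrative placeholders.
import torch
import torch.nn as nn

class NormalAppearanceAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

naa = NormalAppearanceAE()               # in the paper this is trained on synthetic paired images
slice_in = torch.rand(1, 1, 128, 128)    # placeholder MR slice containing a "tumor"
with torch.no_grad():
    pseudo_healthy = naa(slice_in)       # reconstruction with the abnormality removed
residual = (slice_in - pseudo_healthy).abs()
candidate_mask = residual > residual.mean() + 2 * residual.std()  # crude candidate region
print("candidate voxels:", int(candidate_mask.sum()))
# The candidate region (or the residual itself) would then guide the segmentation network.
```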
29. Development and evaluation of a 3D annotation software for interactive COVID-19 lesion segmentation in chest CT
- Author
-
Bendazzoli, Simone, Brusini, Irene, Astaraki, Mehdi, Persson, Mats, Yu, Jimmy, Connolly, Bryan, Nyrén, Sven, Strand, Fredrik, Smedby, Örjan, and Wang, Chunliang
- Abstract
QC 20210802
- Published
- 2020
30. Model based target definition for high-grade gliomas : implementation and sensitivity analysis
- Author
-
Hager, W., Lazzeroni, M., Astaraki, Mehdi, and Toma-Dasu, I.
- Abstract
QC 20210630
- Published
- 2020
31. PO-1558 Model based target definition for high-grade gliomas: implementation and sensitivity analysis
- Author
-
Häger, Wille, Lazzeroni, Marta, Astaraki, Mehdi, and Toma-Dasu, Iuliana
- Abstract
QC 20210811
- Published
- 2020
- Full Text
- View/download PDF
32. Normal Appearance Autoencoder for Lung Cancer Detection and Segmentation
- Author
-
Astaraki, Mehdi, Toma-Dasu, Iuliana, Smedby, Örjan, and Wang, Chunliang
- Abstract
One of the major differences between medical doctor training and machine learning is that doctors are trained to recognize normal/healthy anatomy first. Knowing the healthy appearance of anatomical structures helps doctors to make better judgement when some abnormality shows up in an image. In this study, we propose a normal appearance autoencoder (NAA) that removes abnormalities from a diseased image. This autoencoder is semi-automatically trained using another partial convolutional in-paint network that is trained using healthy subjects only. The output of the autoencoder is then fed to a segmentation net in addition to the original input image, i.e. the segmentation net gets both the diseased image and a simulated healthy image where the lesion is artificially removed. By getting access to knowledge of how the abnormal region is supposed to look, we hypothesized that the segmentation network could perform better than just being shown the original slice. We tested the proposed network on the LIDC-IDRI dataset for lung cancer detection and segmentation. The preliminary results show the NAA approach improved segmentation accuracy substantially in comparison with the conventional U-Net architecture.
- Published
- 2019
- Full Text
- View/download PDF
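To illustrate how the NAA output in entry 32 is consumed downstream, here is a minimal sketch, assuming PyTorch, of feeding the segmentation network a two-channel input: the original slice stacked with its simulated healthy counterpart. The stand-in network and random inputs are placeholders, not the published U-Net or in-paint model.

```python
# Hedged sketch: two-channel segmentation input = original slice + pseudo-healthy
# reconstruction. The tiny network below is a stand-in, not the published U-Net.
import torch
import torch.nn as nn

seg_net = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                          # per-pixel lesion logit
)

original = torch.rand(1, 1, 128, 128)             # CT slice with a suspected lesion (placeholder)
pseudo_healthy = torch.rand(1, 1, 128, 128)       # NAA output with the lesion in-painted away (placeholder)

x = torch.cat([original, pseudo_healthy], dim=1)  # network sees "what is" next to "what should be"
with torch.no_grad():
    lesion_prob = torch.sigmoid(seg_net(x))
print("predicted lesion pixels:", int((lesion_prob > 0.5).sum()))
```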
33. Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
- Author
-
Astaraki, Mehdi, Wang, Chunliang, Buizza, Giulia, Toma-Dasu, Iuliana, Lazzeroni, Marta, and Smedby, Örjan
- Abstract
Purpose: To explore prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. Methods: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). Results: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomic = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. Conclusion: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have promising potential for early survival prediction.
- Published
- 2019
- Full Text
- View/download PDF
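A minimal sketch of the concentric-partitioning feature in entry 33, assuming NumPy and SciPy: the tumor mask is split into concentric shells via a distance transform, and the change in mean intensity per shell between two registered scans forms the feature vector. The volumes and mask below are random placeholders, and the shell count is fixed here rather than size-dependent as in the study.

```python
# Hedged sketch: concentric shells from a distance transform + per-shell change
# in mean intensity between two time points. Data are random placeholders.
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(0)
scan_t0 = rng.normal(size=(64, 64, 64))     # baseline scan (placeholder)
scan_t1 = rng.normal(size=(64, 64, 64))     # follow-up scan (placeholder)
tumor_mask = np.zeros(scan_t0.shape, dtype=bool)
tumor_mask[20:44, 20:44, 20:44] = True      # placeholder tumor mask

def shell_delta_features(img_a, img_b, mask, n_shells=5):
    """Change in mean intensity for each concentric shell of the tumor."""
    depth = distance_transform_edt(mask)            # distance of each tumor voxel to the border
    edges = np.linspace(0.0, depth.max(), n_shells + 1)
    deltas = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = mask & (depth > lo) & (depth <= hi)
        deltas.append(img_b[shell].mean() - img_a[shell].mean() if shell.any() else 0.0)
    return np.array(deltas)

features = shell_delta_features(scan_t0, scan_t1, tumor_mask)
print("per-shell intensity changes:", np.round(features, 3))
# In the study, such features (computed for PET and CT separately) feed a
# feature-selection step and a linear SVM for overall-survival prediction.
```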
34. Multimodal brain tumor segmentation with normal appearance autoencoder
- Author
-
Astaraki, Mehdi, Wang, Chunliang, Carrizo, Gabriel, Toma-Dasu, Iuliana, and Smedby, Örjan
- Abstract
We propose a hybrid segmentation pipeline based on the autoencoders’ capability of anomaly detection. To this end, we first introduce a new augmentation technique to generate synthetic paired images. Taking advantage of the paired images, we propose a Normal Appearance Autoencoder (NAA) that is able to remove tumors and thus reconstruct realistic-looking, tumor-free images. After estimating the regions where the abnormalities potentially exist, a segmentation network is guided toward the candidate region. We tested the proposed pipeline on the BraTS 2019 database. The preliminary results indicate that the proposed model improved the segmentation accuracy of brain tumor subregions compared to the U-Net model., QC 20210622
- Published
- 2019
- Full Text
- View/download PDF
35. Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
- Author
-
Astaraki, Mehdi, Wang, Chunliang, Buizza, Giulia, Toma-Dasu, Iuliana, Lazzeroni, Marta, and Smedby, Örjan
- Abstract
QC 20220405
- Published
- 2019
- Full Text
- View/download PDF
36. OC-0406 Early survival prediction in non-small cell lung cancer with PET/CT size aware longitudinal pattern
- Author
-
Astaraki, Mehdi, Wang, Chunliang, Buizza, Giulia, Toma-Dasu, Iuliana, Lazzeroni, Marta, and Smedby, Örjan
- Abstract
QC 20220503
- Published
- 2019
- Full Text
- View/download PDF
37. Normal appearance autoencoder for lung cancer detection and segmentation
- Author
-
Astaraki, Mehdi, Toma-Dasu, Iuliana, Smedby, Örjan, and Wang, Chunliang
- Abstract
One of the major differences between medical doctor training and machine learning is that doctors are trained to recognize normal/healthy anatomy first. Knowing the healthy appearance of anatomical structures helps doctors to make better judgement when some abnormality shows up in an image. In this study, we propose a normal appearance autoencoder (NAA) that removes abnormalities from a diseased image. This autoencoder is semi-automatically trained using another partial convolutional in-paint network that is trained using healthy subjects only. The output of the autoencoder is then fed to a segmentation net in addition to the original input image, i.e. the segmentation net gets both the diseased image and a simulated healthy image where the lesion is artificially removed. By getting access to knowledge of how the abnormal region is supposed to look, we hypothesized that the segmentation network could perform better than just being shown the original slice. We tested the proposed network on the LIDC-IDRI dataset for lung cancer detection and segmentation. The preliminary results show the NAA approach improved segmentation accuracy substantially in comparison with the conventional U-Net architecture., QC 20210622
- Published
- 2019
- Full Text
- View/download PDF
38. Evaluation of localized region-based segmentation algorithms for CT-based delineation of organs at risk in radiotherapy
- Author
-
Astaraki, Mehdi, Severgnini, Mara, Milan, Vittorino, Schiattarella, Anna, Ciriello, Francesca, de Denaro, Mario, Beorchia, Aulo, and Aslian, Hossein
- Abstract
QC 20220503
- Published
- 2018
- Full Text
- View/download PDF
39. A modified fast local region based method for image segmentation
- Author
-
Astaraki, Mehdi, Aslian, Hossein, and Hamedi, Mahyar
- Abstract
QC 20210622
- Published
- 2015
- Full Text
- View/download PDF
40. Brain tumor target volume segmentation : local region based approach
- Author
-
Astaraki, Mehdi, and Aslian, Hossein
- Abstract
QC 20210622
- Published
- 2015
41. Unsupervised Tumor Segmentation
- Author
-
Astaraki, Mehdi, De Benetti, Francesca, Yeganeh, Yousef, Toma-Dasu, Iuliana, Smedby, Örjan, Wang, Chunliang, Navab, Nassir, and Wendler, Thomas
- Abstract
QC 20220829
42. Unsupervised Tumor Segmentation
- Author
-
Astaraki, Mehdi, De Benetti, Francesca, Yeganeh, Yousef, Toma-Dasu, Iuliana, Smedby, Örjan, Wang, Chunliang, Navab, Nassir, and Wendler, Thomas
- Abstract
QC 20220829
43. Unsupervised Tumor Segmentation
- Author
-
Astaraki, Mehdi, De Benetti, Francesca, Yeganeh, Yousef, Toma-Dasu, Iuliana, Smedby, Örjan, Wang, Chunliang, Navab, Nassir, and Wendler, Thomas
- Abstract
QC 20220829
44. Unsupervised Tumor Segmentation
- Author
-
Astaraki, Mehdi, De Benetti, Francesca, Yeganeh, Yousef, Toma-Dasu, Iuliana, Smedby, Örjan, Wang, Chunliang, Navab, Nassir, and Wendler, Thomas
- Abstract
QC 20220829