8 results for "Shahriar Faghani"
Search Results
2. Patient-specific Hip Arthroplasty Dislocation Risk Calculator: An Explainable Multimodal Machine Learning–based Approach
- Author
- Bardia Khosravi, Pouria Rouzrokh, Hilal Maradit Kremers, Dirk R. Larson, Quinn J. Johnson, Shahriar Faghani, Walter K. Kremers, Bradley J. Erickson, Rafael J. Sierra, Michael J. Taunton, and Cody C. Wyles
- Subjects
Radiological and Ultrasound Technology, Artificial Intelligence, Radiology, Nuclear Medicine and Imaging, Original Research
- Abstract
PURPOSE: To develop a multimodal machine learning–based pipeline to predict patient-specific risk of dislocation following primary total hip arthroplasty (THA). MATERIALS AND METHODS: This study retrospectively evaluated 17 073 patients who underwent primary THA between 1998 and 2018. A test set of 1718 patients was held out. A hybrid network of EfficientNet-B4 and Swin-B transformer was developed to classify patients according to 5-year dislocation outcomes from preoperative anteroposterior pelvic radiographs and clinical characteristics (demographics, comorbidities, and surgical characteristics). The most informative imaging features extracted by this model were selected and concatenated with the clinical features. These combined features were then used to train a multimodal survival XGBoost model to predict the individualized hazard of dislocation within 5 years. The C index was used to evaluate the multimodal survival model on the test set and to compare it with a model trained on clinical data alone. Shapley additive explanation values were used for model explanation. RESULTS: The study sample had a median age of 65 years (IQR: 18 years; 52.1% [8889] women) and a 5-year dislocation incidence of 2%. On the holdout test set, the clinical-only model achieved a C index of 0.64 (95% CI: 0.60, 0.68). The addition of imaging features boosted multimodal model performance to a C index of 0.74 (95% CI: 0.69, 0.78; P = .02). CONCLUSION: Given its discrimination ability and explainability, this risk calculator is a potentially powerful tool for dislocation risk stratification and THA planning. Keywords: Conventional Radiography, Surgery, Skeletal-Appendicular, Hip, Outcomes Analysis, Supervised Learning, Convolutional Neural Network (CNN), Gradient Boosting Machines (GBM) Supplemental material is available for this article. © RSNA, 2022
- Published
- 2022
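The record above describes concatenating CNN/transformer imaging features with clinical covariates and training a survival XGBoost model evaluated by the C index. Below is a minimal sketch of that final step only, under assumed inputs: the feature matrices, follow-up times, and hyperparameters are random stand-ins, and using the Cox objective with lifelines' concordance index is one plausible way to wire it up, not the study's actual code.

```python
# Sketch: concatenated imaging + clinical features -> XGBoost Cox survival model -> C index.
import numpy as np
import xgboost as xgb
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n_train, n_test = 1000, 200

# Stand-ins for learned imaging features (64-dim) and clinical covariates (10-dim).
X_train = np.hstack([rng.normal(size=(n_train, 64)), rng.normal(size=(n_train, 10))])
X_test = np.hstack([rng.normal(size=(n_test, 64)), rng.normal(size=(n_test, 10))])

# Follow-up time in years and event indicator (1 = dislocation, 0 = censored).
time_train, event_train = rng.uniform(0.1, 5.0, n_train), rng.binomial(1, 0.05, n_train)
time_test, event_test = rng.uniform(0.1, 5.0, n_test), rng.binomial(1, 0.05, n_test)

# xgboost's Cox objective encodes right-censored observations as negative times.
y_train = np.where(event_train == 1, time_train, -time_train)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test)
params = {"objective": "survival:cox", "eta": 0.05, "max_depth": 3}
model = xgb.train(params, dtrain, num_boost_round=200)

# Predictions are relative risk scores; higher risk means shorter expected time to event,
# so negate them when computing the concordance index.
risk = model.predict(dtest)
print("C index:", concordance_index(time_test, -risk, event_test))
```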
3. Mitigating Bias in Radiology Machine Learning: 1. Data Handling
- Author
- Pouria Rouzrokh, Bardia Khosravi, Shahriar Faghani, Mana Moassefi, Diana V. Vera Garcia, Yashbir Singh, Kuan Zhang, Gian Marco Conte, and Bradley J. Erickson
- Subjects
Radiological and Ultrasound Technology, Artificial Intelligence, Radiology, Nuclear Medicine and Imaging, Special Report
- Abstract
Minimizing bias is critical to adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices that can occur during the data handling phase of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. The authors employ an arbitrary and simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. Keywords: Data Handling, Bias, Machine Learning, Deep Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) © RSNA, 2022
- Published
- 2022
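The report above surveys suboptimal data-handling practices across collection, investigation, splitting, and feature engineering. As one concrete illustration of the splitting step, the sketch below shows patient-level splitting with scikit-learn so that images from the same patient never land in both the training and test sets, a common safeguard against leakage-related bias; the DataFrame and column names are hypothetical.

```python
# Sketch: group-aware train/test split keyed on patient_id to prevent patient-level leakage.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "image_id": range(10),
    "patient_id": [0, 0, 1, 1, 2, 2, 3, 3, 4, 4],  # several images per patient
    "label": [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
})

# GroupShuffleSplit keeps all rows sharing a patient_id on the same side of the split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))

assert set(df.loc[train_idx, "patient_id"]).isdisjoint(df.loc[test_idx, "patient_id"])
print("train patients:", sorted(df.loc[train_idx, "patient_id"].unique()))
print("test patients:", sorted(df.loc[test_idx, "patient_id"].unique()))
```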
4. Mitigating Bias in Radiology Machine Learning: 3. Performance Metrics
- Author
- Shahriar Faghani, Bardia Khosravi, Kuan Zhang, Mana Moassefi, Jaidip Manikrao Jagtap, Fred Nugen, Sanaz Vahdati, Shiba P. Kuanar, Seyed Moein Rassoulinejad-Mousavi, Yashbir Singh, Diana V. Vera Garcia, Pouria Rouzrokh, and Bradley J. Erickson
- Subjects
Radiological and Ultrasound Technology, Artificial Intelligence, Radiology, Nuclear Medicine and Imaging, Special Report
- Abstract
The increasing use of machine learning (ML) algorithms in clinical settings raises concerns about bias in ML models. Bias can arise at any step of ML creation, including data handling, model development, and performance evaluation. Potential biases in the ML model can be minimized by implementing these steps correctly. This report focuses on performance evaluation and discusses model fitness, as well as a set of performance evaluation toolboxes: namely, performance metrics, performance interpretation maps, and uncertainty quantification. By discussing the strengths and limitations of each toolbox, our report highlights strategies and considerations to mitigate and detect biases during performance evaluations of radiology artificial intelligence models. Keywords: Segmentation, Diagnosis, Convolutional Neural Network (CNN) © RSNA, 2022
- Published
- 2022
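As a rough illustration of two themes in the report above, metric choice under class imbalance and uncertainty quantification, the sketch below computes several complementary metrics plus a bootstrap confidence interval for AUROC on simulated labels and scores; the prevalence, score model, and bootstrap procedure are illustrative choices, not the report's prescribed recipe.

```python
# Sketch: imbalance-aware metrics and a bootstrap CI for AUROC on simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, recall_score, precision_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.05, 2000)                      # 5% prevalence, as in many radiology tasks
y_score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 2000), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

print("accuracy (hides performance on the rare class):", (y_pred == y_true).mean())
print("sensitivity:", recall_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, zero_division=0))
print("F1:", f1_score(y_true, y_pred, zero_division=0))
print("AUROC:", roc_auc_score(y_true, y_score))

# Bootstrap the AUROC to attach uncertainty to the point estimate.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if y_true[idx].min() == y_true[idx].max():            # skip resamples containing a single class
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
print("AUROC 95% CI:", np.percentile(boot, [2.5, 97.5]))
```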
5. A deep learning algorithm for detecting lytic bone lesions of multiple myeloma on CT
- Author
- Shahriar Faghani, Francis I. Baffour, Michael D. Ringler, Matthew Hamilton-Cave, Pouria Rouzrokh, Mana Moassefi, Bardia Khosravi, and Bradley J. Erickson
- Subjects
Deep Learning, Humans, Radiology, Nuclear Medicine and Imaging, Osteolysis, Multiple Myeloma, Tomography, X-Ray Computed, Algorithms
- Abstract
Whole-body low-dose CT is the recommended initial imaging modality to evaluate bone destruction as a result of multiple myeloma. Accurate interpretation of these scans to detect small lytic bone lesions is time intensive. A functional deep learning (DL) algorithm to detect lytic lesions on CT could improve the value of these scans for myeloma imaging. Our objectives were to develop a DL algorithm and determine its performance at detecting lytic lesions of multiple myeloma. Axial slices (2-mm section thickness) from whole-body low-dose CT scans of subjects with biochemically confirmed plasma cell dyscrasias were included in the study. Data were split into train and test sets at the patient level, targeting a 90%/10% split. Two musculoskeletal radiologists annotated lytic lesions on the images with bounding boxes. Subsequently, we developed a two-step deep learning model comprising bone segmentation followed by lesion detection. A U-Net and a "You Only Look Once" (YOLO) model were used as the bone segmentation and lesion detection algorithms, respectively. Diagnostic performance was determined using the area under the receiver operating characteristic curve (AUROC). Forty whole-body low-dose CTs from 40 subjects yielded 2193 image slices. A total of 5640 lytic lesions were annotated. The two-step model achieved a sensitivity of 91.6% and a specificity of 84.6%. Lesion detection AUROC was 90.4%. We developed a deep learning model that detects lytic bone lesions of multiple myeloma on whole-body low-dose CT with high performance. External validation is required prior to widespread adoption in clinical practice.
- Published
- 2022
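A minimal sketch of how the two-step design described above can be chained together: a segmentation model isolates bone, and a detector then searches for lytic lesions only within the masked image. `bone_segmenter` and `lesion_detector` are hypothetical callables standing in for the trained U-Net and YOLO models; only the wiring between the steps is illustrated.

```python
# Sketch: bone segmentation -> masked CT slice -> lesion detection (dummy models included).
import numpy as np

def two_step_inference(ct_slice: np.ndarray, bone_segmenter, lesion_detector, threshold=0.5):
    """Return candidate lesion boxes restricted to segmented bone."""
    # Step 1: predicted bone probability map -> binary mask.
    bone_mask = (bone_segmenter(ct_slice) >= threshold).astype(ct_slice.dtype)

    # Step 2: suppress non-bone voxels so the detector only sees skeletal structures.
    masked_slice = ct_slice * bone_mask

    # The detector returns (x1, y1, x2, y2, confidence) boxes for suspected lytic lesions.
    return lesion_detector(masked_slice)

# Toy usage with dummy models so the sketch runs end to end.
dummy_seg = lambda img: np.ones_like(img, dtype=float)
dummy_det = lambda img: [(10, 10, 30, 30, 0.9)]
print(two_step_inference(np.zeros((512, 512)), dummy_seg, dummy_det))
```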
6. A Comparison of Three Different Deep Learning-Based Models to Predict the MGMT Promoter Methylation Status in Glioblastoma Using Brain MRI
- Author
- Shahriar Faghani, Bardia Khosravi, Mana Moassefi, Gian Marco Conte, and Bradley J. Erickson
- Subjects
Radiological and Ultrasound Technology, Radiology, Nuclear Medicine and Imaging, Computer Science Applications
- Abstract
Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through analysis of tumor tissue. Considering the complications of tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2-weighted images (T2WI) with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, tumor mask voxels were labeled 1 for methylated and 2 for unmethylated MGMT, with 0 as background. We converted each T2WI into 32 × 32 × 32 patches and trained a 3D V-Net model for tumor segmentation. After inference, we reconstructed the whole brain volume from the patch coordinates. The final prediction of MGMT methylation status was made by majority voting over the predicted voxel values of the largest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction and again used majority voting for the final prediction. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. The whole-brain, slice-wise, and voxel-wise accuracies were 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
- Published
- 2022
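The voxel-wise pipeline above ends with a majority vote over the predicted voxel labels of the largest connected tumor component. The sketch below implements that decision rule on a synthetic predicted volume using SciPy's connected-component labeling; the label convention (0 = background, 1 = methylated, 2 = unmethylated) follows the abstract, while the example volume and tie-breaking rule are illustrative.

```python
# Sketch: patient-level MGMT call by majority vote over the largest connected tumor component.
import numpy as np
from scipy import ndimage

def mgmt_from_voxels(pred_volume: np.ndarray) -> str:
    """pred_volume: 3D array with 0 = background, 1 = methylated, 2 = unmethylated."""
    tumor = pred_volume > 0
    labeled, n_components = ndimage.label(tumor)
    if n_components == 0:
        return "no tumor detected"
    # Sizes of each connected component, ignoring the background label 0.
    sizes = np.bincount(labeled.ravel())[1:]
    largest = labeled == (np.argmax(sizes) + 1)
    voxels = pred_volume[largest]
    # Majority vote between labels 1 and 2 within the largest component (ties -> methylated).
    return "methylated" if (voxels == 1).sum() >= (voxels == 2).sum() else "unmethylated"

# Toy volume: a 4x4x4 tumor blob, mostly labeled 1.
vol = np.zeros((32, 32, 32), dtype=int)
vol[10:14, 10:14, 10:14] = 1
vol[10:12, 10:11, 10:11] = 2
print(mgmt_from_voxels(vol))
```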
7. Mitigating Bias in Radiology Machine Learning: 2. Model Development
- Author
- Kuan Zhang, Bardia Khosravi, Sanaz Vahdati, Shahriar Faghani, Fred Nugen, Seyed Moein Rassoulinejad-Mousavi, Mana Moassefi, Jaidip Manikrao M. Jagtap, Yashbir Singh, Pouria Rouzrokh, and Bradley J. Erickson
- Subjects
Radiological and Ultrasound Technology, Artificial Intelligence, Radiology, Nuclear Medicine and Imaging, Special Report
- Abstract
There are increasing concerns about the bias and fairness of artificial intelligence (AI) models as they are put into clinical practice. Among the steps for implementing machine learning tools into clinical workflow, model development is an important stage where different types of biases can occur. This report focuses on four aspects of model development where such bias may arise: data augmentation, model and loss function, optimizers, and transfer learning. This report emphasizes appropriate considerations and practices that can mitigate biases in radiology AI studies. Keywords: Model, Bias, Machine Learning, Deep Learning, Radiology © RSNA, 2022
- Published
- 2022
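As one example of a mitigation in the "model and loss function" area the report above covers, the sketch below weights a cross-entropy loss by inverse class frequency so a rare positive class is not swamped during training; the class counts, tiny model, and weighting scheme are illustrative choices rather than the report's specific recommendation.

```python
# Sketch: class-weighted cross-entropy to counter class imbalance during model development.
import torch
import torch.nn as nn

# Suppose the training set has 9500 negatives and 500 positives.
class_counts = torch.tensor([9500.0, 500.0])
weights = class_counts.sum() / (len(class_counts) * class_counts)   # inverse-frequency weights

criterion = nn.CrossEntropyLoss(weight=weights)

# Tiny stand-in classifier and batch, just to show the weighted loss in use.
model = nn.Linear(16, 2)
x = torch.randn(8, 16)
y = torch.randint(0, 2, (8,))
loss = criterion(model(x), y)
loss.backward()
print("weighted loss:", loss.item())
```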
8. Development of a deep learning model for the histologic diagnosis of dysplasia in Barrett’s esophagus
- Author
- Shahriar Faghani, D. Chamil Codipilly, David Vogelsang, Mana Moassefi, Pouria Rouzrokh, Bardia Khosravi, Siddharth Agarwal, Lovekirat Dhaliwal, David A. Katzka, Catherine Hagen, Jason Lewis, Cadman L. Leggett, Bradley J. Erickson, and Prasad G. Iyer
- Subjects
Barrett Esophagus, Deep Learning, Hyperplasia, Esophageal Neoplasms, Disease Progression, Gastroenterology, Humans, Radiology, Nuclear Medicine and Imaging, Adenocarcinoma
- Abstract
The risk of progression in Barrett's esophagus (BE) increases with the development of dysplasia. There is a critical need to improve the diagnosis of BE dysplasia, given substantial interobserver disagreement among expert pathologists and overdiagnosis of dysplasia by community pathologists. We developed a deep learning model to predict dysplasia grade on whole-slide imaging. We digitized nondysplastic BE (NDBE), low-grade dysplasia (LGD), and high-grade dysplasia (HGD) histology slides. Two expert pathologists confirmed all histology and digitally annotated areas of dysplasia. Training, validation, and test sets were created by a random 70/20/10 split. We used an ensemble approach combining a "you only look once" (YOLO) model to identify regions of interest and histology class (NDBE, LGD, or HGD), followed by a ResNet101 model pretrained on ImageNet applied to the regions of interest. Diagnostic performance was determined for the whole slide. We included slides from 542 patients (164 NDBE, 226 LGD, and 152 HGD), yielding 8596 bounding boxes in the training set, 1946 bounding boxes in the validation set, and 840 boxes in the test set. With the ensemble model, sensitivity and specificity for LGD were 81.3% and 100%, respectively, and greater than 90% for NDBE and HGD. The overall positive predictive value and sensitivity metric (calculated as the F1 score) was .91 for NDBE, .90 for LGD, and 1.0 for HGD. We successfully trained and validated a deep learning model to accurately identify dysplasia on whole-slide images. This model can potentially help improve the histologic diagnosis of BE dysplasia and the appropriate application of endoscopic therapy.
- Published
- 2022
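A minimal sketch, not the study's code, of the two-stage ensemble described above: a detector proposes regions of interest on a whole-slide image, each region is classified as NDBE, LGD, or HGD by a ResNet101-style CNN, and a slide-level call is derived from the region predictions. The detector stage is reduced here to pre-cropped tensors standing in for YOLO output, and taking the worst grade across regions is one plausible aggregation rule, not necessarily the one used in the study.

```python
# Sketch: classify detector-proposed regions with ResNet101 and aggregate to a slide-level grade.
import torch
import torch.nn as nn
from torchvision.models import resnet101

CLASSES = ["NDBE", "LGD", "HGD"]  # ordered from least to most severe

# ResNet101 backbone with a 3-class head; the study used ImageNet-pretrained weights,
# but weights are left uninitialized here so the sketch runs offline.
classifier = resnet101(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, len(CLASSES))
classifier.eval()

def grade_slide(roi_crops: list) -> str:
    """roi_crops: list of 3x224x224 tensors cropped from detector boxes."""
    if not roi_crops:
        return "NDBE"                        # no suspicious region proposed
    with torch.no_grad():
        logits = classifier(torch.stack(roi_crops))
        region_preds = logits.argmax(dim=1)  # per-region class index
    # Slide-level label: the most severe grade found in any region (assumed aggregation).
    return CLASSES[int(region_preds.max())]

# Toy usage: two random 224x224 RGB crops as if returned by the ROI detector.
print(grade_slide([torch.randn(3, 224, 224) for _ in range(2)]))
```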