Comparing Performances of Predictive Models of Toxicity after Radiotherapy for Breast Cancer Using Different Machine Learning Approaches
- Author
Ubeira-Gabellini, Maria Giulia; Mori, Martina; Palazzo, Gabriele; Cicchetti, Alessandro; Mangili, Paola; Pavarini, Maddalena; Rancati, Tiziana; Fodor, Andrei; del Vecchio, Antonella; Di Muzio, Nadia Gisella; Fiorino, Claudio
- Subjects
PREDICTIVE tests, DATABASE management, DATA analysis, RECEIVER operating characteristic curves, RESEARCH funding, RADIATION injuries, BREAST tumors, LOGISTIC regression analysis, TREATMENT effectiveness, RADIATION dosimetry, DESCRIPTIVE statistics, STATISTICS, MACHINE learning
- Abstract
Simple Summary: Studies comparing the performance of machine learning (ML) methods in building predictive models of toxicity in radiotherapy (RT) are rare. Thanks to the availability of a large cohort (n = 1314) of breast cancer patients homogeneously treated with tangential fields, different ML approaches could be compared. This work shows that more complex models typically achieve higher performance. At the same time, for this test case, feature importance is concentrated in a few variables, and toxicity can be predicted by simpler models with similar performance. The availability of more individually characterizing features (here partially missing) is expected to have a much greater impact than the choice of the best-performing ML/DL approach.

Purpose: Different ML models were compared to predict toxicity after RT in a large cohort (n = 1314).

Methods: The endpoint was RTOG G2/G3 acute toxicity, observed in 204/1314 patients. The dataset, including 25 clinical, anatomical, and dosimetric features, was split into 984 patients for training and 330 for internal testing. The data were standardized; features with a high p-value at univariate logistic regression (LR), as well as features with Spearman ρ > 0.8, were excluded; synthetic minority-class data were generated to compensate for class imbalance. Twelve ML methods were considered. Model optimization and sequential backward selection were run to choose the best models with a parsimonious number of features. Finally, feature importance was derived for every model.

Results: Model performance was compared on the training and test datasets over different metrics: the best-performing model was LightGBM. Logistic regression with three variables (LR3), selected via bootstrapping, showed performance similar to the best-performing models. The test-set AUC was slightly above 0.65 for the best models (highest value: 0.662 with LightGBM).

Conclusions: No single model performed best on all metrics: more complex ML models had better performance; however, models with just three features showed performance comparable to the best models using many (n = 13–19) features. [ABSTRACT FROM AUTHOR]
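The pipeline outlined in the Methods lends itself to a short illustration. Below is a minimal sketch, assuming scikit-learn, imbalanced-learn, SciPy, and LightGBM; synthetic data from make_classification stands in for the (non-public) patient cohort, and the univariate-LR p-value filter, model optimization, backward selection, and the other ten ML methods are omitted for brevity. All variable names are illustrative, not the authors' own.

```python
# Hypothetical sketch of the preprocessing and model-comparison pipeline
# described in the abstract; not the authors' actual implementation.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier

# Stand-in for the cohort: 1314 patients, 25 features, ~204 toxicity events.
X, y = make_classification(n_samples=1314, n_features=25, n_informative=5,
                           weights=[0.845], random_state=0)

# 984 patients for training, 330 for the internal test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=984,
                                          stratify=y, random_state=0)

# Standardize using training-set statistics only, to avoid leakage.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Drop the second feature of every pair with Spearman rho > 0.8.
rho = np.abs(spearmanr(X_tr).correlation)
upper = np.triu(rho, k=1)
keep = [j for j in range(X_tr.shape[1]) if not (upper[:, j] > 0.8).any()]
X_tr, X_te = X_tr[:, keep], X_te[:, keep]

# Synthesize minority-class samples to compensate for class imbalance
# (SMOTE is one common choice; the paper may use a different technique).
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Compare a gradient-boosted model against plain logistic regression on AUC.
for name, model in [("LightGBM", LGBMClassifier(verbose=-1)),
                    ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    model.fit(X_res, y_res)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

Note that the correlation filter, scaler, and oversampler are all fit on the training split only, so the internal test set remains untouched until the final AUC comparison.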
- Published
2024