Machine learning algorithms for outcome prediction in (chemo)radiotherapy: An empirical comparison of classifiers.
- Author
-
Deist, Timo M, Dankers, Frank JWM, Valdes, Gilmer, Wijsman, Robin, Hsu, I-Chow, Oberije, Cary, Lustberg, Tim, van Soest, Johan, Hoebers, Frank, Jochems, Arthur, El Naqa, Issam, Wee, Leonard, Morin, Olivier, Raleigh, David R, Bots, Wouter, Kaanders, Johannes H, Belderbos, José, Kwint, Margriet, Solberg, Timothy, Monshouwer, René, Bussink, Johan, Dekker, Andre, and Lambin, Philippe
- Subjects
Humans, Neoplasms, Prognosis, Area Under Curve, Logistic Models, Decision Trees, Software, Chemoradiotherapy, Machine Learning, Neural Networks, Computer, classification, machine learning, outcome prediction, predictive modeling, radiotherapy, Cancer, Other Physical Sciences, Biomedical Engineering, Oncology and Carcinogenesis, Nuclear Medicine & Medical Imaging
- Abstract
Purpose: Machine learning classification algorithms (classifiers) for prediction of treatment response are becoming more popular in the radiotherapy literature. The general machine learning literature provides evidence in favor of some classifier families (random forest, support vector machine, gradient boosting) in terms of classification performance. The purpose of this study is to compare such classifiers specifically on (chemo)radiotherapy datasets and to estimate their average discriminative performance for radiation treatment outcome prediction.
Methods: We collected 12 datasets (3496 patients) from prior studies on post-(chemo)radiotherapy toxicity, survival, or tumor control, with clinical, dosimetric, or blood biomarker features, from multiple institutions and for different tumor sites, that is, (non-)small-cell lung cancer, head and neck cancer, and meningioma. Six common classification algorithms with built-in feature selection (decision tree, random forest, neural network, support vector machine, elastic net logistic regression, LogitBoost) were applied to each dataset using the popular open-source R package caret. The R code and documentation for the analysis are available online (https://github.com/timodeist/classifier_selection_code). All classifiers were run on each dataset in a 100-repeated nested fivefold cross-validation with hyperparameter tuning. Performance metrics (AUC, calibration slope and intercept, accuracy, Cohen's kappa, and Brier score) were computed. We ranked classifiers by AUC to determine which classifier is likely to also perform well in future studies. We simulated the benefit to potential investigators of selecting a classifier for a new dataset based on our study (preselection based on other datasets), or of estimating the best classifier for a dataset (set-specific selection based on information from the new dataset), compared with uninformed classifier selection (random selection).
Results: Random forest (best in 6/12 datasets) and elastic net logistic regression (best in 4/12 datasets) showed the overall best discrimination, but there was no single best classifier across datasets. Both classifiers had a median AUC rank of 2. Preselection and set-specific selection each yielded a significant average AUC improvement of 0.02 over random selection, with average AUC rank improvements of 0.42 and 0.66, respectively.
Conclusion: Random forest and elastic net logistic regression yield higher discriminative performance in (chemo)radiotherapy outcome and toxicity prediction than the other studied classifiers. One of these two classifiers should therefore be the first choice for investigators building classification models, or serve as a benchmark against which to compare one's own modeling results. Our results also show that an informed preselection of classifiers based on existing datasets can improve discrimination over random selection.
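The nested cross-validation procedure described in the Methods (an inner loop for hyperparameter tuning, an outer loop for unbiased AUC estimation) can be sketched as follows. The study itself used the R package caret; this is a minimal scikit-learn analogue in Python, with an illustrative toy dataset, a single repeat instead of 100, and a random forest standing in for the six classifier families. All names and parameter grids here are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Toy binary-outcome dataset standing in for a (chemo)radiotherapy cohort.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Inner fivefold CV tunes hyperparameters; outer fivefold CV estimates AUC
# on data never seen during tuning, avoiding optimistic bias.
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

tuned_rf = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_features": [2, 3, 5]},  # illustrative grid
    scoring="roc_auc",
    cv=inner_cv,
)

# One AUC per outer fold; the study repeats this 100 times and ranks
# classifiers by their AUC across datasets.
auc_per_fold = cross_val_score(tuned_rf, X, y, scoring="roc_auc", cv=outer_cv)
print(round(auc_per_fold.mean(), 3))
```

Averaging the per-fold AUCs (and, in the study, per-repeat AUCs) gives the discriminative-performance estimate used to rank classifiers.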
- Published
2018