
Statistical quantification of confounding bias in predictive modelling

Authors:
Spisak, Tamas
Publication Year:
2021

Abstract

The lack of non-parametric statistical tests for confounding bias significantly hampers the development of robust, valid and generalizable predictive models in many fields of research. Here I propose the partial and full confounder tests, which, for a given confounder variable, probe the null hypotheses of unconfounded and fully confounded models, respectively. The tests provide strict control of Type I errors and high statistical power, even for non-normally and non-linearly dependent predictions, as often seen in machine learning. Applying the proposed tests to models trained on functional brain connectivity data from the Human Connectome Project and the Autism Brain Imaging Data Exchange dataset reveals confounders that were previously unreported or found to be hard to correct for with state-of-the-art confound-mitigation approaches. The tests, implemented in the package mlconfound (https://mlconfound.readthedocs.io), can aid the assessment and improvement of the generalizability and neurobiological validity of predictive models and, thereby, foster the development of clinically useful machine learning biomarkers.

Comment: 20 pages, 7 figures. The manuscript is associated with the Python package `mlconfound`: https://mlconfound.readthedocs.io. The manuscript repository, including fully reproducible analysis code, is available at https://github.com/pni-lab/mlconfound-manuscript
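As a rough illustration of how the proposed tests might be applied to a fitted model's predictions, the sketch below calls `partial_confound_test` and `full_confound_test` from the `mlconfound` package on synthetic data. The call pattern (passing the target `y`, the predictions `yhat`, and the confounder `c`) follows the package documentation, but the exact signatures and returned fields are assumptions and may differ between package versions; the synthetic data-generating weights are purely illustrative.

```python
# Minimal sketch of confounder testing with mlconfound (assumed API from
# https://mlconfound.readthedocs.io); not the manuscript's analysis code.
import numpy as np
from mlconfound.stats import partial_confound_test, full_confound_test

rng = np.random.default_rng(42)
n = 500

c = rng.normal(size=n)                          # confounder (e.g. age, head motion)
y = 0.5 * c + rng.normal(size=n)                # target, partly driven by the confounder
yhat = 0.4 * y + 0.3 * c + rng.normal(size=n)   # stand-in for model predictions, partly confounded

# Partial confounder test: H0 = the prediction is not confounded by c
# (given y). A small p-value signals confounding bias in the model.
partial_result = partial_confound_test(y, yhat, c)
print(partial_result)

# Full confounder test: H0 = the prediction is fully driven by c and carries
# no information about y beyond the confounder. A small p-value indicates
# that the model captures genuine, confounder-independent signal.
full_result = full_confound_test(y, yhat, c)
print(full_result)
```

In practice, `yhat` would be out-of-sample (e.g. cross-validated) predictions from the trained model rather than a simulated variable, so that the tests assess confounding in the model's actual generalization behaviour.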

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2111.00814
Document Type:
Working Paper