
Numerical uncertainty in analytical pipelines lead to impactful variability in brain networks

Authors :
Gregory Kiar
Yohan Chatelain
Pablo de Oliveira Castro
Eric Petit
Ariel Rokem
Gaël Varoquaux
Alan C. Evans
Bratislav Misic
Tristan Glatard
Affiliations :
Montreal Neurological Institute and Hospital, McGill University, Montréal, Canada
McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Canada
Concordia University, Montréal, Canada
Laboratoire d'Informatique Parallélisme Réseaux Algorithmes Distribués (LI-PaRAD), Université de Versailles Saint-Quentin-en-Yvelines (UVSQ)
Laboratoire Exascale Computing Research, UVSQ - CEA - Intel France
Intel France, Meudon
University of Washington, Seattle
PARIETAL (Modelling brain structure, function and variability based on high-field MRI data), Inria Saclay - Île-de-France, NeuroSpin, CEA, Université Paris-Saclay
Funding :
This research was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) (award no. CGSD3-519497-2018). This work was also supported in part by funding provided by Brain Canada, in partnership with Health Canada, for the Canadian Open Neuroscience Platform initiative.
Source :
PLoS ONE, Vol 16, Iss 11, p e0250755 (2021). ⟨10.1371/journal.pone.0250755⟩
Publication Year :
2021
Publisher :
Public Library of Science, 2021.

Abstract

The analysis of brain-imaging data requires complex and often non-linear transformations to support findings on brain function or pathologies. Yet recent work has shown that variability in the choices made when analyzing data can lead to quantitatively and qualitatively different results, endangering trust in conclusions [1–3]. Even within a given method or analytical technique, numerical instabilities can compromise findings [4–7]. We instrumented a structural-connectome estimation pipeline with Monte Carlo Arithmetic [8, 9], a technique that introduces controlled random noise into floating-point computations, and evaluated the stability of the derived connectomes, their features [10, 11], and the impact on a downstream analysis [12, 13]. The stability of results was highly dependent on which features of the connectomes were evaluated, ranging from perfectly stable (i.e., no observed variability across executions) to highly unstable (i.e., the results contained no trustworthy significant information). While the extreme range and variability in results presented here could severely hamper our understanding of brain organization in brain-imaging studies, they also present an opportunity to increase the reliability of datasets. This paper highlights the potential of leveraging the induced variance in estimates of brain connectivity to reduce bias in networks while increasing the robustness of their applications to the detection or classification of individual differences. It demonstrates that stability evaluations are necessary for understanding the error and bias inherent in scientific computing, and that they should be a component of typical analytical workflows.
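As a rough illustration of the approach described in the abstract (not the authors' actual tooling, which instruments the pipeline's numerical libraries with Monte Carlo Arithmetic), the Python sketch below perturbs a toy connectivity estimate with machine-epsilon-scale relative noise across repeated executions and summarizes per-edge stability as a number of significant digits. The data, function names, and perturbation model are illustrative assumptions.

import numpy as np

EPS = np.finfo(np.float64).eps  # ~2.2e-16: scale of the injected relative noise

def perturb(x, rng):
    # Apply a random relative perturbation of machine-epsilon magnitude,
    # loosely mimicking Monte Carlo Arithmetic's random rounding.
    return x * (1.0 + rng.uniform(-EPS, EPS, size=np.shape(x)))

def noisy_connectome(timeseries, rng):
    # Toy stand-in for a connectome-estimation step: a correlation matrix
    # computed from perturbed inputs, with its output perturbed as well.
    conn = np.corrcoef(perturb(timeseries, rng))
    return perturb(conn, rng)

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((20, 200))  # 20 regions, 200 time points (synthetic)

# Repeat the "pipeline" and measure per-edge variability across executions.
runs = np.stack([noisy_connectome(timeseries, rng) for _ in range(30)])
mean, std = runs.mean(axis=0), runs.std(axis=0)

# Significant digits per edge, s = -log10(std / |mean|): large values indicate
# stable edges, small or non-finite values indicate unstable ones.
with np.errstate(divide="ignore", invalid="ignore"):
    sig = -np.log10(std / np.abs(mean))
print("median significant digits:", np.nanmedian(sig[np.isfinite(sig)]))

In the study itself, the noise is injected inside the floating-point operations of the pipeline rather than around a toy estimate as above, and stability is assessed for the full connectomes, their derived features, and a downstream analysis rather than this single per-edge measure.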

Details

Language :
English
ISSN :
1932-6203
Volume :
16
Issue :
11
Database :
OpenAIRE
Journal :
PLoS ONE
Accession number :
edsair.doi.dedup.....9dad9690d45b34491a1efe6c7e973c13