
Reproducible big data science: A case study in continuous FAIRness.

Authors :
Madduri R
Chard K
D'Arcy M
Jung SC
Rodriguez A
Sulakhe D
Deutsch E
Funk C
Heavner B
Richards M
Shannon P
Glusman G
Price N
Kesselman C
Foster I
Source :
PloS one [PLoS One] 2019 Apr 11; Vol. 14 (4), pp. e0213013. Date of Electronic Publication: 2019 Apr 11 (Print Publication: 2019).
Publication Year :
2019

Abstract

Big biomedical data create exciting opportunities for discovery, but make it difficult to capture analyses and outputs in forms that are findable, accessible, interoperable, and reusable (FAIR). In response, we describe tools that make it easy to capture, and assign identifiers to, data and code throughout the data lifecycle. We illustrate the use of these tools via a case study involving a multi-step analysis that creates an atlas of putative transcription factor binding sites from terabytes of ENCODE DNase I hypersensitive sites sequencing data. We show how the tools automate routine but complex tasks, capture analysis algorithms in understandable and reusable forms, and harness fast networks and powerful cloud computers to process data rapidly, all without sacrificing usability or reproducibility, thus ensuring that big data are not hard-to-(re)use data. We evaluate our approach via a user study, and show that 91% of participants were able to replicate a complex analysis involving considerable data volumes.

Competing Interests: The authors have declared that no competing interests exist.

Details

Language :
English
ISSN :
1932-6203
Volume :
14
Issue :
4
Database :
MEDLINE
Journal :
PloS one
Publication Type :
Academic Journal
Accession number :
30973881
Full Text :
https://doi.org/10.1371/journal.pone.0213013