8 results for "Martone, Maryann E."
Search Results
2. A Standards Organization for Open and FAIR Neuroscience: the International Neuroinformatics Coordinating Facility
- Author
- Abrams, Mathew Birdsall, Bjaalie, Jan G, Das, Samir, Egan, Gary F, Ghosh, Satrajit S, Goscinski, Wojtek J, Grethe, Jeffrey S, Kotaleski, Jeanette Hellgren, Ho, Eric Tatt Wei, Kennedy, David N, Lanyon, Linda J, Leergaard, Trygve B, Mayberg, Helen S, Milanesi, Luciano, Mouček, Roman, Poline, JB, Roy, Prasun K, Strother, Stephen C, Tang, Tong Boon, Tiesinga, Paul, Wachtler, Thomas, Wójcik, Daniel K, and Martone, Maryann E
- Subjects
Networking and Information Technology R&D (NITRD), Neurosciences, Underpinning research, 1.5 Resources and infrastructure (underpinning), Responsible Consumption and Production, Reproducibility of Results, Neuroinformatics, Standards and best practices, FAIR principles, Standards organization, Neuroscience, INCF, INCF endorsement process, Biochemistry and Cell Biology, Neurology & Neurosurgery
- Abstract
There is great need for coordination around standards and best practices in neuroscience to support efforts to make neuroscience a data-centric discipline. Major brain initiatives launched around the world are poised to generate huge stores of neuroscience data. At the same time, neuroscience, like many domains in biomedicine, is confronting the issues of transparency, rigor, and reproducibility. Widely used, validated standards and best practices are key to addressing the challenges in both big and small data science, as they are essential for integrating diverse data and for developing a robust, effective, and sustainable infrastructure to support open and reproducible neuroscience. However, developing community standards and gaining their adoption is difficult. The current landscape is characterized both by a lack of robust, validated standards and a plethora of overlapping, underdeveloped, untested and underutilized standards and best practices. The International Neuroinformatics Coordinating Facility (INCF), an independent organization dedicated to promoting data sharing through the coordination of infrastructure and standards, has recently implemented a formal procedure for evaluating and endorsing community standards and best practices in support of the FAIR principles. By formally serving as a standards organization dedicated to open and FAIR neuroscience, INCF helps evaluate, promulgate, and coordinate standards and best practices across neuroscience. Here, we provide an overview of the process and discuss how neuroscience can benefit from having a dedicated standards body.
- Published
- 2022
3. Promoting FAIR Data Through Community-driven Agile Design: the Open Data Commons for Spinal Cord Injury (odc-sci.org).
- Author
- Torres-Espín, Abel, Almeida, Carlos A, Chou, Austin, Huie, J Russell, Chiu, Michael, Vavrek, Romana, Sacramento, Jeff, Orr, Michael B, Gensel, John C, Grethe, Jeffery S, Martone, Maryann E, Fouad, Karim, Ferguson, Adam R, and STREET-FAIR Workshop Participants
- Subjects
STREET-FAIR Workshop Participants, Humans, Spinal Cord Injuries, Reproducibility of Results, Information Dissemination, Ecosystem, Biomedical Research, Data sharing, FAIR, community repository, data reuse, neurotrauma, spinal cord injury, Spinal Cord Injury, Neurodegenerative, Traumatic Head and Spine Injury, Networking and Information Technology R&D (NITRD), Physical Injury - Accidents and Adverse Effects, Neurosciences, Generic health relevance, Biochemistry and Cell Biology, Neurology & Neurosurgery
- Abstract
The past decade has seen accelerating movement from data protectionism in publishing toward open data sharing to improve reproducibility and translation of biomedical research. Developing data sharing infrastructures to meet these new demands remains a challenge. One model for data sharing involves simply attaching data, irrespective of its type, to publisher websites or general use repositories. However, some argue this creates a 'data dump' that does not promote the goals of making data Findable, Accessible, Interoperable and Reusable (FAIR). Specialized data sharing communities offer an alternative model where data are curated by domain experts to make it both open and FAIR. We report on our experiences developing one such data-sharing ecosystem focusing on 'long-tail' preclinical data, the Open Data Commons for Spinal Cord Injury (odc-sci.org). ODC-SCI was developed with community-based agile design requirements directly pulled from a series of workshops with multiple stakeholders (researchers, consumers, non-profit funders, governmental agencies, journals, and industry members). ODC-SCI focuses on heterogeneous tabular data collected by preclinical researchers including bio-behaviour, histopathology findings and molecular endpoints. This has led to an example of a specialized neurocommons that is well-embraced by the community it aims to serve. In the present paper, we provide a review of the community-based design template and describe the adoption by the community including a high-level review of current data assets, publicly released datasets, and web analytics. Although odc-sci.org is in its late beta stage of development, it represents a successful example of a specialized data commons that may serve as a model for other fields.
- Published
- 2022
4. Improving transparency and scientific rigor in academic publishing
- Author
- Prager, Eric M, Chambers, Karen E, Plotkin, Joshua L, McArthur, David L, Bandrowski, Anita E, Bansal, Nidhi, Martone, Maryann E, Bergstrom, Hadley C, Bespalov, Anton, and Graf, Chris
- Subjects
Biotechnology, Clinical Research, Biomedical Research, Data Accuracy, Editorial Policies, Humans, Peer Review, Research, Publishing, Quality Improvement, Reproducibility of Results, Research Design, Research Personnel, Open Science, peer review, policy, publishing, scientific rigor, transparency
- Abstract
Progress in basic and clinical research is slowed when researchers fail to provide a complete and accurate report of how a study was designed and executed and how the results were analyzed. Publishing rigorous scientific research involves a full description of the methods, materials, procedures, and outcomes. Investigators may fail to provide a complete description of how their study was designed and executed either because they do not know how to report the information accurately or because the mechanisms to facilitate transparent reporting are not in place. Here, we provide an overview of how authors can write manuscripts in a transparent and thorough manner. We introduce a set of reporting criteria that can be used for publishing, including recommendations on reporting the experimental design and statistical approaches. We also discuss how to accurately visualize the results and provide recommendations for peer reviewers to enhance rigor and transparency. Incorporating transparency practices into research manuscripts will significantly improve the reproducibility of the results by independent laboratories. SIGNIFICANCE: Failure to replicate research findings often arises from errors in the experimental design and statistical approaches. By providing a full account of the experimental design, procedures, and statistical approaches, researchers can address the reproducibility crisis and improve the sustainability of research outcomes. In this piece, we discuss the key issues leading to irreproducibility and provide general approaches to improving transparency and rigor in reporting, which could assist in making research more reproducible.
- Published
- 2019
5. Data sharing in psychology.
- Author
- Martone, Maryann E, Garcia-Castro, Alexander, and VandenBos, Gary R
- Subjects
Humans, Reproducibility of Results, Information Dissemination, Psychology, Research, open data, database, data repository, FAIR, Social Psychology, Cognitive Sciences
- Abstract
Routine data sharing, defined here as the publication of the primary data and any supporting materials required to interpret the data acquired as part of a research study, is still in its infancy in psychology, as in many domains. Nevertheless, with increased scrutiny on reproducibility and more funder mandates requiring sharing of data, the issues surrounding data sharing are moving beyond whether data sharing is a benefit or a bane to science, to what data should be shared and how. Here, we present an overview of these issues, specifically focusing on the sharing of so-called "long tail" data, that is, data generated by individual laboratories as part of largely hypothesis-driven research. We draw on experiences in other domains to discuss attitudes toward data sharing, costs and benefits, best practices, and infrastructure. We argue that the publishing of data sets is an integral component of 21st-century scholarship. Moreover, although not all issues around how and what to share have been resolved, a consensus on principles and best practices for effective data sharing and the infrastructure for sharing many types of data are largely in place.
- Published
- 2018
6. The FAIR Guiding Principles for scientific data management and stewardship.
- Author
- Wilkinson, Mark D, Dumontier, Michel, Aalbersberg, I Jsbrand Jan, Appleton, Gabrielle, Axton, Myles, Baak, Arie, Blomberg, Niklas, Boiten, Jan-Willem, da Silva Santos, Luiz Bonino, Bourne, Philip E, Bouwman, Jildau, Brookes, Anthony J, Clark, Tim, Crosas, Mercè, Dillo, Ingrid, Dumon, Olivier, Edmunds, Scott, Evelo, Chris T, Finkers, Richard, Gonzalez-Beltran, Alejandra, Gray, Alasdair JG, Groth, Paul, Goble, Carole, Grethe, Jeffrey S, Heringa, Jaap, 't Hoen, Peter AC, Hooft, Rob, Kuhn, Tobias, Kok, Ruben, Kok, Joost, Lusher, Scott J, Martone, Maryann E, Mons, Albert, Packer, Abel L, Persson, Bengt, Rocca-Serra, Philippe, Roos, Marco, van Schaik, Rene, Sansone, Susanna-Assunta, Schultes, Erik, Sengstag, Thierry, Slater, Ted, Strawn, George, Swertz, Morris A, Thompson, Mark, van der Lei, Johan, van Mulligen, Erik, Velterop, Jan, Waagmeester, Andra, Wittenburg, Peter, Wolstencroft, Katherine, Zhao, Jun, and Mons, Barend
- Subjects
Data Collection, Reproducibility of Results, Research Design, Database Management Systems, Guidelines as Topic, Data Curation
- Abstract
There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders, representing academia, industry, funding agencies, and scholarly publishers, have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles and includes the rationale behind them and some exemplar implementations in the community.
- Published
- 2016
7. Resource Disambiguator for the Web: Extracting Biomedical Resources and Their Citations from the Scientific Literature.
- Author
- Ozyurt, Ibrahim Burak, Grethe, Jeffrey S, Martone, Maryann E, and Bandrowski, Anita E
- Subjects
Humans, Registries, Reproducibility of Results, Computational Biology, Neurosciences, Biomedical Research, Publications, Internet, Software, Information Storage and Retrieval, Databases, Factual, MD Multidisciplinary, General Science & Technology
- Abstract
The NIF Registry, developed and maintained by the Neuroscience Information Framework, is a cooperative project aimed at cataloging research resources, e.g., software tools, databases, and tissue banks, funded largely by governments and available as tools to research scientists. Although originally conceived for neuroscience, the NIF Registry has over the years broadened in scope to include research resources of general relevance to biomedical research; it currently lists over 13,000 research resources. The broadening in scope to biomedical science led us to re-christen the NIF Registry platform as SciCrunch. The NIF/SciCrunch Registry has been cataloging the resource landscape since 2006; as such, it serves as a valuable dataset for tracking the breadth, fate, and utilization of these resources. Our experience shows that research resources like databases are dynamic objects that can change location and scope over time. Although each record is entered manually and human-curated, the current size of the registry requires tools that can aid curation efforts to keep content up to date, including when and where such resources are used. To address this challenge, we have developed an open-source tool suite, collectively termed RDW: Resource Disambiguator for the Web. RDW is designed to help in the upkeep and curation of the registry, as well as in enhancing its content by automated extraction of resource candidates from the literature. The RDW toolkit includes a URL extractor for papers, a resource candidate screener, a resource URL change tracker, and a resource content change tracker. Curators access these tools via a web-based user interface. Several strategies are used to optimize these tools, including supervised and unsupervised learning algorithms as well as statistical text analysis. The complete tool suite is used to enhance and maintain the resource registry and to track the usage of individual resources through an innovative literature citation index honed for research resources. Here we present an overview of the Registry and show how the RDW tools are used in curation and usage tracking.
- Published
- 2016
8. The Resource Identification Initiative: A cultural shift in publishing
- Author
- Bandrowski, Anita, Brush, Matthew, Grethe, Jeffery S, Haendel, Melissa A, Kennedy, David N, Hill, Sean, Hof, Patrick R, Martone, Maryann E, Pols, Maaike, Tan, Serena C, Washington, Nicole, Zudilova‐Seinstra, Elena, and Vasilevsky, Nicole
- Subjects
Biological Sciences, Bioinformatics and Computational Biology, Aetiology, 2.6 Resources and infrastructure (aetiology), Generic health relevance, Animals, Antibodies, Data Accuracy, Data Curation, Databases, Factual, Internet, Models, Animal, Pilot Projects, Publishing, Reproducibility of Results, Software, research resources, Resource Identification Initiative, identifiability, Zoology, Neurosciences, Medical Physiology, Neurology & Neurosurgery
- Abstract
A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to identify the exact resources that are reported or to answer basic questions such as "How did other studies use resource X?" To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in the Methods sections of articles and thereby improve identifiability and scientific reproducibility. The pilot engaged over 25 biomedical journal editors from most major publishers, as well as scientists and funding officials. Authors were asked to include Research Resource Identifiers (RRIDs) in their articles prior to publication for three resource types: antibodies, model organisms, and tools (i.e., software and databases). RRIDs are assigned by an authoritative database for each resource type, for example, a model organism database. To make it easier for authors to obtain RRIDs, resources were aggregated from the appropriate databases and their RRIDs made available in a central Web portal (http://scicrunch.org/resources). RRIDs meet three key criteria: they are machine-readable, free to generate and access, and consistent across publishers and journals. The pilot was launched in February of 2014, and over 300 articles have appeared that report RRIDs. The number of participating journals has expanded from the original 25 to more than 40, with RRIDs appearing in 62 different journals to date. Here we present an overview of the pilot project and its outcomes to date. We show that authors are able to identify resources and are supportive of the goals of the project. Identifiability of the resources post-pilot showed a dramatic improvement for all three resource types, suggesting that the project has had a significant impact on the identifiability of research resources.
- Published
- 2016
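A note on the machine-readability claim in the abstract above: because RRIDs follow a fixed textual syntax (RRID: followed by an authority-specific accession such as AB_ for antibodies or SCR_ for software and databases), they can be harvested from Methods text with a short script. The sketch below is illustrative only; the sample Methods sentence and the helper name extract_rrids are invented for this example, not taken from the publication.

```python
import re

# Minimal sketch of RRID extraction. The pattern assumes the published
# RRID syntax: "RRID:" plus a letter prefix, a separator, and an
# accession (e.g. RRID:AB_2138153, RRID:SCR_003070). Edge cases in the
# full syntax may not be covered.
RRID_PATTERN = re.compile(r"RRID:[A-Za-z]+[_:][A-Za-z0-9_:]+")

def extract_rrids(methods_text: str) -> list[str]:
    """Return every RRID-formatted identifier found in a block of text."""
    return RRID_PATTERN.findall(methods_text)

# Invented sample Methods sentence for illustration.
sample = ("Sections were stained with anti-GFAP (RRID:AB_2138153) "
          "and analyzed in ImageJ (RRID:SCR_003070).")
print(extract_rrids(sample))  # ['RRID:AB_2138153', 'RRID:SCR_003070']
```

This kind of simple, deterministic extraction is exactly what the "machine-readable" criterion buys: aggregators can build a citation index for research resources without parsing free-form catalog numbers.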
Discovery Service for Jio Institute Digital Library