
      The Resource Identification Initiative: A cultural shift in publishing

      research-article


          Abstract

          A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to allow humans and algorithms to identify the exact resources that are reported or answer basic questions such as “What other studies used resource X?” To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in the methods sections of papers and thereby improve identifiability and reproducibility. The pilot engaged over 25 biomedical journal editors from most major publishers, as well as scientists and funding officials. Authors were asked to include Research Resource Identifiers (RRIDs) in their manuscripts prior to publication for three resource types: antibodies, model organisms, and tools (including software and databases). RRIDs represent accession numbers assigned by an authoritative database, e.g., the model organism databases, for each type of resource. To make it easier for authors to obtain RRIDs, resources were aggregated from the appropriate databases and their RRIDs made available in a central web portal (www.scicrunch.org/resources). RRIDs meet three key criteria: they are machine readable, free to generate and access, and are consistent across publishers and journals. The pilot was launched in February of 2014 and over 300 papers have appeared that report RRIDs. The number of journals participating has expanded from the original 25 to more than 40. Here, we present an overview of the pilot project and its outcomes to date. We show that authors are generally accurate in performing the task of identifying resources and supportive of the goals of the project.
We also show that identifiability of the resources pre- and post-pilot showed a dramatic improvement for all three resource types, suggesting that the project has had a significant impact on reproducibility relating to research resources.
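          The abstract's claim that RRIDs are machine readable can be illustrated with a small sketch. The snippet below scans a methods-section string for RRID citations; the exact registry prefixes (e.g., AB for the Antibody Registry, SCR for software tools) and the sample identifiers are assumptions for illustration, not taken from this article.

```python
import re

# Illustrative sketch: RRID citations in a methods section take the form
# "RRID:" followed by a registry-specific accession, e.g. RRID:AB_10013382
# (antibody) or RRID:SCR_003070 (software tool). The pattern below is an
# assumption based on that general shape, not a normative grammar.
RRID_PATTERN = re.compile(r"RRID:\s?([A-Za-z]+)_([A-Za-z0-9:-]+)")

def extract_rrids(methods_text: str) -> list:
    """Return normalized RRID citations found in a methods-section string."""
    return ["RRID:%s_%s" % m.groups() for m in RRID_PATTERN.finditer(methods_text)]

text = ("Sections were stained with anti-GFAP (Dako, RRID:AB_10013382) "
        "and analyzed in ImageJ (RRID:SCR_003070).")
print(extract_rrids(text))  # -> ['RRID:AB_10013382', 'RRID:SCR_003070']
```

Because the identifiers are plain strings with a fixed prefix, a publisher or aggregator can answer "What other studies used resource X?" with a simple text search, which is the machine-readability property the initiative relies on.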

          Related collections

          Most cited references (13)


          Power failure: why small sample size undermines the reliability of neuroscience.

          A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.

            The neuroscience information framework: a data and knowledge environment for neuroscience.

            With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience's Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic, inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov , http://neurogateway.org , and other sites as they come on line.

              The Effects of FreeSurfer Version, Workstation Type, and Macintosh Operating System Version on Anatomical Volume and Cortical Thickness Measurements

              FreeSurfer is a popular software package to measure cortical thickness and volume of neuroanatomical structures. However, little if anything is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). Significant differences were revealed between FreeSurfer version v5.0.0 and the two earlier versions. These differences were on average 8.8±6.6% (range 1.3–64.0%) (volume) and 2.8±1.3% (1.1–7.7%) (cortical thickness). Differences roughly a factor of two smaller were detected between Macintosh and Hewlett-Packard workstations and between OSX 10.5 and OSX 10.6. The observed differences are similar in magnitude to effect sizes reported in accuracy evaluations and neurodegenerative studies. The main conclusion is that, in the context of an ongoing study, users are discouraged from updating to a new major release of either FreeSurfer or the operating system, or from switching to a different type of workstation, without repeating the analysis; the results thus give quantitative support to successive recommendations stated by FreeSurfer developers over the years. Moreover, in view of the large and significant cross-version differences, it is concluded that formal assessment of the accuracy of FreeSurfer is desirable.

                Author and article information

                Journal
                F1000Research (London, UK)
                ISSN: 2046-1402
                29 May 2015
                2015; 4: 134
                Affiliations
                [1 ]Center for Research in Biological Systems, UCSD, La Jolla, CA, 92093, USA
                [2 ]Department of Medical Informatics & Clinical Epidemiology, OHSU, Portland, Oregon, 97239, USA
                [3 ]Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, 01605, USA
                [4 ]Karolinska Institutet, Stockholm, 171 77, Sweden
                [5 ]Leon and Norma Hess Center for Science and Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
                [6 ]Scientific Outreach, Faculty of 1000 Ltd, London, W1T 4LB, UK
                [7 ]John Wiley and Sons, Hoboken, NJ, 07030, USA
                [8 ]Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
                [9 ]Elsevier, Amsterdam, 1043 NX, Netherlands
                [10 ]Mendeley and the Reproducibility Initiative, Mountain View, CA, USA
                [11 ]Division of Vaccine Discovery, La Jolla Institute for Allergy and Immunology, La Jolla, CA, USA
                Author notes

                AB, PRH, DNK, NW, EZS, NV and members of the Resource Identification Initiative (RII) contributed to data gathering. AB and MB were responsible for portal creation. PRH and MEM acted as advocates for the RII. AB and MEM were responsible for repository management for SciCrunch and the Antibody Registry. JSG was the SciCrunch portal architect. PRH implemented use of RRIDs at the Journal of Comparative Neurology. MEM implemented use of RRIDs for Brain and Behaviour. MP implemented use of RRIDs at F1000Research. ST implemented use of RRIDs at the Journal of Comparative Neurology. DNK implemented the use of RRIDs at Neuroinformatics. SH implemented use of RRIDs at Frontiers and brought together stakeholders at preliminary meetings. NW scripted the model organism data. EZS championed the use of RRIDs at Elsevier, developed the App and created Figure 4. NV and AB were responsible for data analysis. AB, MB, JSG, MAH, PRH, MEM and NV prepared the manuscript. Members of the RINL RII wrote blogs and participated in brainstorming meetings for more than a year. All authors have seen and agreed to the final content of the manuscript.

                Competing interests: The authors declared no competing interests.

                Competing interests: I am co-founder of the Reproducibility Initiative, so I'm favorably inclined towards anything that looks to improve reproducibility. I also work for Mendeley, a service for researchers which is engaged in text and data mining of literature. Mendeley is owned by Elsevier, which developed the 'Antibody data from this article' widget shown as an example in this paper.

                Competing interests: No competing interests were disclosed.

                Article
                DOI: 10.12688/f1000research.6555.1
                PMCID: 4648211
                PMID: 26594330
                148d180e-bc3a-49cd-a75d-e5e2bb686bb7
                Copyright: © 2015 Bandrowski A et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History: 26 May 2015
                Funding
                Funded by: NIF
                Award ID: HHSN271200577531C/PHS/HHS/United States
                Funded by: NIDDK
                Award ID: 1U24DK097771-01
                Funded by: Monarch
                Award ID: 5R24OD011883
                This work was supported by: an NIF grant to Martone PI (HHSN271200577531C/PHS HHS/United States); a NIDDK grant to Martone PI (1U24DK097771-01); and a grant from Monarch to Haendel PI (5R24OD011883).
                I confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Research Article
                Articles
                Data Sharing

                Keywords
                resource identifiers, multi-centre initiative, publishing, pre-pilot data, post-pilot data
