      Improving preclinical studies through replications


          Abstract

          The purpose of preclinical research is to inform the development of novel diagnostics or therapeutics, and the results of experiments on animal models of disease often inform the decision to conduct studies in humans. However, a substantial number of clinical trials fail, even when preclinical studies have apparently demonstrated the efficacy of a given intervention. A number of large-scale replication studies are currently trying to identify the factors that influence the robustness of preclinical research. Here, we discuss replications in the context of preclinical research trajectories, and argue that increasing validity should be a priority when selecting experiments to replicate and when performing the replication. We conclude that systematically improving three domains of validity – internal, external and translational – will result in a more efficient allocation of resources, will be more ethical, and will ultimately increase the chances of successful translation.


          Most cited references (64)


          The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research

          Reproducible science requires transparent reporting. The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour, and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here, we introduce ARRIVE 2.0. The guidelines have been updated and information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise and divide the items of the guidelines into 2 sets, the “ARRIVE Essential 10,” which constitutes the minimum requirement, and the “Recommended Set,” which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation. This helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration (E&E) document, which serves (1) to explain the rationale behind each item in the guidelines, (2) to clarify key concepts, and (3) to provide illustrative examples. We aim, through these changes, to help ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.

            Estimating the reproducibility of psychological science

            Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.

              Strong Inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others.


                Author and article information

                Journal
                eLife (eLife Sciences Publications, Ltd)
                ISSN: 2050-084X
                Published: 12 January 2021
                Volume 10: e62101
                Affiliations
                [1] Department of Experimental Neurology, Charité – Universitätsmedizin Berlin, Germany
                [2] BIH QUEST Center for Transforming Biomedical Research, Berlin Institute of Health, Berlin, Germany
                Editor affiliations: eLife, United Kingdom; Beatson Institute, United Kingdom; University of Bern, Switzerland
                Author notes
                [†] These authors contributed equally to this work.

                Author information
                https://orcid.org/0000-0002-7153-2894
                https://orcid.org/0000-0001-9224-1722
                https://orcid.org/0000-0003-0755-6119
                https://orcid.org/0000-0002-8731-3530
                Article
                DOI: 10.7554/eLife.62101
                PMCID: PMC7817176
                PMID: 33432925
                © 2021, Drude et al.

                This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

                History
                Received: 13 August 2020
                Accepted: 12 January 2021
                Funding
                Funded by: Bundesministerium für Bildung und Forschung (FundRef: http://dx.doi.org/10.13039/501100002347)
                Award ID: 01KC1901A
                The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
                Categories
                Feature Article
                Medicine
                Science Forum
                Custom metadata
                Increasing validity should be a priority when deciding which preclinical experiments to replicate and when performing replications.

                Life sciences
                Keywords: replication, preclinical research, validity, reproducibility, translation, science forum
