
      Rigorous Science: a How-To Guide

Editorial
Arturo Casadevall a and Ferric C. Fang b
mBio
American Society for Microbiology


          ABSTRACT

          Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word “rigor” is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education.

          Related collections

Most cited references (12)

          Revised standards for statistical evidence.

Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
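The correspondence described above can be sketched numerically for the simplest case. Assuming a one-sided z-test (an illustrative special case, not the paper's full derivation), the uniformly most powerful Bayesian test rejects when z exceeds sqrt(2 log γ), so the evidence threshold matching a test of size α is γ = exp(z_α²/2), where z_α is the standard normal quantile:

```python
from math import exp
from statistics import NormalDist

def umpbt_evidence_threshold(alpha: float) -> float:
    """Bayes-factor evidence threshold whose UMPBT rejection region
    matches a one-sided z-test of size alpha: gamma = exp(z_alpha^2 / 2)."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha)  # standard normal quantile
    return exp(z_alpha ** 2 / 2.0)

for alpha in (0.05, 0.005, 0.001):
    print(f"alpha = {alpha}: evidence threshold ~ {umpbt_evidence_threshold(alpha):.1f}:1")
```

For α = 0.005 this gives roughly 28:1, and for α = 0.001 roughly 118:1, consistent with the 25-50:1 and 100-200:1 ranges quoted above (the two-sided test yields the upper ends of those ranges).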

            Conjectures and Refutations


              When Quality Beats Quantity: Decision Theory, Drug Discovery, and the Reproducibility Crisis

A striking contrast runs through the last 60 years of biopharmaceutical discovery, research, and development. Huge scientific and technological gains should have increased the quality of academic science and raised industrial R&D efficiency. However, academia faces a "reproducibility crisis"; inflation-adjusted industrial R&D costs per novel drug increased nearly 100 fold between 1950 and 2010; and drugs are more likely to fail in clinical development today than in the 1970s. The contrast is explicable only if powerful headwinds reversed the gains and/or if many "gains" have proved illusory. However, discussions of reproducibility and R&D productivity rarely address this point explicitly. The main objectives of the primary research in this paper are: (a) to provide quantitatively and historically plausible explanations of the contrast; and (b) to identify factors to which R&D efficiency is sensitive. We present a quantitative decision-theoretic model of the R&D process. The model represents therapeutic candidates (e.g., putative drug targets, molecules in a screening library, etc.) within a "measurement space", with candidates' positions determined by their performance on a variety of assays (e.g., binding affinity, toxicity, in vivo efficacy, etc.) whose results correlate to a greater or lesser degree. We apply decision rules to segment the space, and assess the probability of correct R&D decisions. We find that when searching for rare positives (e.g., candidates that will successfully complete clinical development), changes in the predictive validity of screening and disease models that many people working in drug discovery would regard as small and/or unknowable (i.e., a 0.1 absolute change in correlation coefficient between model output and clinical outcomes in man) can offset large (e.g., 10 fold, even 100 fold) changes in models' brute-force efficiency.
We also show how validity and reproducibility correlate across a population of simulated screening and disease models. We hypothesize that screening and disease models with high predictive validity are more likely to yield good answers and good treatments, so tend to render themselves and their diseases academically and commercially redundant. Perhaps there has also been too much enthusiasm for reductionist molecular models which have insufficient predictive validity. Thus we hypothesize that the average predictive validity of the stock of academically and industrially "interesting" screening and disease models has declined over time, with even small falls able to offset large gains in scientific knowledge and brute-force efficiency. The rate of creation of valid screening and disease models may be the major constraint on R&D productivity.
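The core decision-theoretic point above can be illustrated with a toy simulation (a minimal sketch, not the paper's actual model; the function name, sample size, and selection fractions are illustrative assumptions). When true positives are rare, the fraction of real winners among the candidates a screening model selects is highly sensitive to the model's correlation with the clinical outcome:

```python
import random

def screen_ppv(rho: float, n: int = 100_000, top_frac: float = 0.01,
               pos_frac: float = 0.01, seed: int = 0) -> float:
    """Fraction of true positives among candidates selected by a screening
    model whose score correlates with the clinical outcome at level rho."""
    rng = random.Random(seed)
    outcome = [rng.gauss(0, 1) for _ in range(n)]  # latent clinical value
    noise_scale = (1 - rho ** 2) ** 0.5
    # model score = correlated signal + independent noise (bivariate normal)
    score = [rho * y + noise_scale * rng.gauss(0, 1) for y in outcome]
    # true positives: the rare top pos_frac of clinical outcomes
    pos_cut = sorted(outcome, reverse=True)[int(n * pos_frac) - 1]
    # selected for follow-up: the top top_frac of model scores
    sel_cut = sorted(score, reverse=True)[int(n * top_frac) - 1]
    selected = [(s, y) for s, y in zip(score, outcome) if s >= sel_cut]
    hits = sum(1 for _, y in selected if y >= pos_cut)
    return hits / len(selected)

print(screen_ppv(0.3), screen_ppv(0.5))  # higher validity, more hits per pick
```

In this toy setup, raising the model-outcome correlation from 0.3 to 0.5 roughly doubles the hit rate among selected candidates, whereas screening more candidates with the weaker model leaves the hit rate per selection essentially unchanged, which is the sense in which small validity gains can offset large brute-force efficiency gains.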

                Author and article information

                Contributors
Arturo Casadevall, Role: Founding Editor in Chief, mBio
Ferric C. Fang, Role: Editor in Chief, Infection and Immunity
                Journal
mBio
American Society for Microbiology (1752 N St., N.W., Washington, DC)
                2150-7511
                8 November 2016
                Nov-Dec 2016
Volume: 7
Issue: 6
e01902-16
                Affiliations
                [a ]Department of Molecular Microbiology and Immunology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
                [b ]Departments of Laboratory Medicine and Microbiology, University of Washington School of Medicine, Seattle, Washington, USA
                Author notes
Address correspondence to Arturo Casadevall, acasade1@jhu.edu.
                Author information
                http://orcid.org/0000-0002-9402-9167
                Article
mBio01902-16
DOI: 10.1128/mBio.01902-16
PMCID: 5111411
PMID: 27834205
                Copyright © 2016 Casadevall and Fang.

                This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license.

                Page count
                Figures: 1, Tables: 1, Equations: 0, References: 24, Pages: 4, Words: 3294
                Funding
                This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
                Categories
                Editorial
                Custom metadata
                November/December 2016

Life sciences
