
      Why most published research findings are false.

      PLoS Medicine


          Abstract

          There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
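          The framework sketched in this abstract turns on the pre-study odds R of a true relationship, the type I error rate alpha, the power 1 - beta, and a bias term u. As a rough illustration (not code from the essay), the snippet below computes the post-study probability that a claimed finding is true (the positive predictive value) using the commonly cited form of this relationship; the parameter values are illustrative assumptions.

def ppv(R, alpha=0.05, power=0.8, bias=0.0):
    """Post-study probability that a claimed finding is true (PPV).

    Assumes the commonly cited relationship for this framework:
    with no bias, PPV = (1 - beta) * R / (R - beta * R + alpha),
    where R is the pre-study odds of a true relationship, alpha the
    type I error rate, and beta = 1 - power. The bias term u inflates
    the rate at which both true and null relationships end up reported
    as findings.
    """
    beta = 1.0 - power
    u = bias
    true_positives = (power + u * beta) * R        # true relationships claimed as findings
    false_positives = alpha + u * (1.0 - alpha)    # null relationships claimed as findings
    return true_positives / (true_positives + false_positives)

# Illustrative settings: one true relationship per ten tested (R = 0.1).
print(round(ppv(R=0.1, power=0.8, bias=0.0), 3))  # ~0.615: adequately powered, no bias
print(round(ppv(R=0.1, power=0.8, bias=0.2), 3))  # ~0.259: modest bias already dominates
print(round(ppv(R=0.1, power=0.2, bias=0.3), 3))  # ~0.116: underpowered and biased, mostly false

          Under these illustrative assumptions the essay's central claim is easy to reproduce: once pre-study odds are low and bias is non-zero, most claimed findings come from the null relationships rather than the true ones.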


          Most cited references (28)


          Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses.

          The Quality of Reporting of Meta-analyses (QUOROM) conference was convened to address standards for improving the quality of reporting of meta-analyses of clinical randomised controlled trials (RCTs). The QUOROM group consisted of 30 clinical epidemiologists, clinicians, statisticians, editors, and researchers. In conference, the group was asked to identify items they thought should be included in a checklist of standards. Whenever possible, checklist items were guided by research evidence suggesting that failure to adhere to the item proposed could lead to biased results. A modified Delphi technique was used in assessing candidate items. The conference resulted in the QUOROM statement, a checklist, and a flow diagram. The checklist describes our preferred way to present the abstract, introduction, methods, results, and discussion sections of a report of a meta-analysis. It is organised into 21 headings and subheadings regarding searches, selection, validity assessment, data abstraction, study characteristics, and quantitative data synthesis, and in the results with "trial flow", study characteristics, and quantitative data synthesis; research documentation was identified for eight of the 18 items. The flow diagram provides information about both the numbers of RCTs identified, included, and excluded and the reasons for exclusion of trials. We hope this report will generate further thought about ways to improve the quality of reports of meta-analyses of RCTs and that interested readers, reviewers, researchers, and editors will use the QUOROM statement and generate ideas for its improvement.

            The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials.

            To comprehend the results of a randomised controlled trial (RCT), readers must understand its design, conduct, analysis, and interpretation. That goal can be achieved only through total transparency from authors. Despite several decades of educational efforts, the reporting of RCTs needs improvement. Investigators and editors developed the original CONSORT (Consolidated Standards of Reporting Trials) statement to help authors improve reporting by use of a checklist and flow diagram. The revised CONSORT statement presented here incorporates new evidence and addresses some criticisms of the original statement. The checklist items pertain to the content of the Title, Abstract, Introduction, Methods, Results, and Discussion. The revised checklist includes 22 items selected because empirical evidence indicates that not reporting this information is associated with biased estimates of treatment effect, or because the information is essential to judge the reliability or relevance of the findings. We intended the flow diagram to depict the passage of participants through an RCT. The revised flow diagram depicts information from four stages of a trial (enrollment, intervention allocation, follow-up, and analysis). The diagram explicitly shows the number of participants, for each intervention group, included in the primary data analysis. Inclusion of these numbers allows the reader to judge whether the authors have done an intention-to-treat analysis. In sum, the CONSORT statement is intended to improve the reporting of an RCT, enabling readers to understand a trial's conduct and to assess the validity of its results.

              Prediction of cancer outcome with microarrays: a multiple random validation strategy

              General studies of microarray gene-expression profiling have been undertaken to predict cancer outcome. Knowledge of this gene-expression profile or molecular signature should improve treatment of patients by allowing treatment to be tailored to the severity of the disease. We reanalysed data from the seven largest published studies that have attempted to predict prognosis of cancer patients on the basis of DNA microarray analysis. The standard strategy is to identify a molecular signature (ie, the subset of genes most differentially expressed in patients with different outcomes) in a training set of patients and to estimate the proportion of misclassifications with this signature on an independent validation set of patients. We expanded this strategy (based on unique training and validation sets) by using multiple random sets, to study the stability of the molecular signature and the proportion of misclassifications. The list of genes identified as predictors of prognosis was highly unstable; molecular signatures strongly depended on the selection of patients in the training sets. For all but one study, the proportion misclassified decreased as the number of patients in the training set increased. Because of inadequate validation, our chosen studies published overoptimistic results compared with those from our own analyses. Five of the seven studies did not classify patients better than chance. The prognostic value of published microarray results in cancer studies should be considered with caution. We advocate the use of validation by repeated random sampling.
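              As a rough, self-contained sketch of the multiple random validation idea (not the authors' code), the snippet below repeats random training/validation splits on toy data with no true signal: the selected gene list varies from split to split, and the misclassification rate stays near chance. The gene-ranking statistic, nearest-centroid classifier, split sizes, and signature size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def top_genes(X, y, k=50):
    """Rank genes by a simple two-sample t-like statistic and keep the top k."""
    g0, g1 = X[y == 0], X[y == 1]
    diff = g1.mean(axis=0) - g0.mean(axis=0)
    se = np.sqrt(g0.var(axis=0, ddof=1) / len(g0) +
                 g1.var(axis=0, ddof=1) / len(g1)) + 1e-12
    return np.argsort(-np.abs(diff / se))[:k]

def nearest_centroid_error(X_tr, y_tr, X_va, y_va, genes):
    """Classify validation samples by the nearer class centroid (training data only)."""
    c0 = X_tr[y_tr == 0][:, genes].mean(axis=0)
    c1 = X_tr[y_tr == 1][:, genes].mean(axis=0)
    d0 = np.linalg.norm(X_va[:, genes] - c0, axis=1)
    d1 = np.linalg.norm(X_va[:, genes] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return np.mean(pred != y_va)

# Toy data: 100 patients, 2000 genes, outcome unrelated to expression,
# so a sound validation scheme should report roughly 50% misclassification.
X = rng.normal(size=(100, 2000))
y = rng.integers(0, 2, size=100)

signatures, errors = [], []
for _ in range(200):                      # multiple random training/validation splits
    idx = rng.permutation(len(y))
    tr, va = idx[:60], idx[60:]
    genes = top_genes(X[tr], y[tr])
    signatures.append(set(genes.tolist()))
    errors.append(nearest_centroid_error(X[tr], y[tr], X[va], y[va], genes))

overlap = [len(signatures[0] & s) / 50 for s in signatures[1:]]
print(f"mean misclassification: {np.mean(errors):.2f}")                     # close to 0.5 here
print(f"mean signature overlap with first split: {np.mean(overlap):.2f}")   # typically low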

                Author and article information

                Journal: PLoS Medicine
                PMID: 16060722
                PMCID: PMC1182327
                DOI: 10.1371/journal.pmed.0020124
