

The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration


      Abstract

      Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users.

      Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions.

      The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

      Most cited references (179)


      Measuring inconsistency in meta-analyses.


        Quantifying heterogeneity in a meta-analysis.

        The extent of heterogeneity in a meta-analysis partly determines the difficulty in drawing overall conclusions. This extent may be measured by estimating a between-study variance, but interpretation is then specific to a particular treatment effect metric. A test for the existence of heterogeneity exists, but depends on the number of studies in the meta-analysis. We develop measures of the impact of heterogeneity on a meta-analysis, from mathematical criteria, that are independent of the number of studies and the treatment effect metric. We derive and propose three suitable statistics: H is the square root of the χ² heterogeneity statistic divided by its degrees of freedom; R is the ratio of the standard error of the underlying mean from a random effects meta-analysis to the standard error of a fixed effect meta-analytic estimate; and I² is a transformation of H that describes the proportion of total variation in study estimates that is due to heterogeneity. We discuss interpretation, interval estimates and other properties of these measures and examine them in five example data sets showing different amounts of heterogeneity. We conclude that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity. One or both should be presented in published meta-analyses in preference to the test for heterogeneity. Copyright 2002 John Wiley & Sons, Ltd.
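The H and I² statistics described in this abstract are derived from the usual fixed-effect Q statistic, so they can be computed directly from study effect estimates and their variances. A minimal sketch, assuming simple inverse-variance weighting (the function name and example data are illustrative, not from the paper):

```python
import math

def heterogeneity_stats(effects, variances):
    """Return (H, I^2) from study effect estimates and their variances.

    Q is the fixed-effect (inverse-variance weighted) heterogeneity statistic;
    H = sqrt(Q / df) and I^2 = max(0, (Q - df) / Q) expressed as a percentage.
    """
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1                            # degrees of freedom
    h = math.sqrt(q / df)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return h, i2

# Three studies with effects 0.1, 0.2, 0.3 and equal variance 0.001
h, i2 = heterogeneity_stats([0.1, 0.2, 0.3], [0.001, 0.001, 0.001])
```

For these illustrative numbers Q = 20 on 2 degrees of freedom, giving H = √10 ≈ 3.16 and I² = 90%, i.e. most of the observed variation is attributed to heterogeneity rather than chance.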

          Bias in meta-analysis detected by a simple, graphical test.

          Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: in the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: a simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution.
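The asymmetry measure in this abstract is the intercept from regressing each study's standard normal deviate (effect divided by its standard error) on its precision (the reciprocal of the standard error). A minimal sketch of that intercept calculation only, omitting the significance test; names and data are illustrative, not from Egger et al:

```python
def egger_intercept(effects, std_errors):
    """Intercept from OLS regression of standard normal deviates (effect / SE)
    on precision (1 / SE); values far from zero suggest funnel-plot asymmetry."""
    x = [1.0 / se for se in std_errors]                   # precision
    y = [e / se for e, se in zip(effects, std_errors)]    # standard normal deviate
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

# Identical true effect in every study, no extra noise: no asymmetry,
# so the regression passes through the origin and the intercept is zero.
b0 = egger_intercept([0.5, 0.5, 0.5], [0.1, 0.2, 0.4])
```

In a symmetric funnel the deviate is proportional to precision, so the fitted line runs through the origin; small-study effects pull the intercept away from zero.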

            Author and article information

            Affiliations
            [1 ]Università di Modena e Reggio Emilia, Modena, Italy
            [2 ]Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy
            [3 ]Centre for Statistics in Medicine, University of Oxford, Oxford
            [4 ]Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
            [5 ]Annals of Internal Medicine, Philadelphia, Pennsylvania, USA
            [6 ]Nordic Cochrane Centre, Copenhagen, Denmark
            [7 ]Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
            [8 ]UK Cochrane Centre, Oxford
            [9 ]School of Nursing and Midwifery, Trinity College, Dublin, Republic of Ireland
            [10 ]Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
            [11 ]Kleijnen Systematic Reviews, York
            [12 ]School for Public Health and Primary Care (CAPHRI), University of Maastricht, Maastricht, Netherlands
            [13 ]Department of Epidemiology and Community Medicine, Faculty of Medicine, Ottawa, Ontario, Canada
            Author notes
            Correspondence to: alesslib@mailbase.it
            Journal
            BMJ: British Medical Journal
            Publisher: BMJ Publishing Group Ltd.
            ISSN: 0959-8138 (print); 1468-5833 (electronic)
            Published: 21 July 2009; volume 339
            PMCID: 2714672
            PMID: 19622552
            DOI: 10.1136/bmj.b2700
            © Liberati et al 2009

            This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

            Categories
            Research Methods & Reporting

            Medicine
