
      Hormonal Contraception and the Risk of HIV Acquisition: An Individual Participant Data Meta-analysis


          Abstract

          In a meta-analysis of individual participant data, Charles Morrison and colleagues explore the association between hormonal contraception use and risk of HIV infection in sub-Saharan Africa.

          Background

          Observational studies of a putative association between hormonal contraception (HC) and HIV acquisition have produced conflicting results. We conducted an individual participant data (IPD) meta-analysis of studies from sub-Saharan Africa to compare the incidence of HIV infection in women using combined oral contraceptives (COCs) or the injectable progestins depot-medroxyprogesterone acetate (DMPA) or norethisterone enanthate (NET-EN) with women not using HC.

          Methods and Findings

          Eligible studies measured HC exposure and incident HIV infection prospectively using standardized measures, enrolled women aged 15–49 y, recorded ≥15 incident HIV infections, and measured prespecified covariates. Our primary analysis estimated the adjusted hazard ratio (aHR) using two-stage random-effects meta-analysis, controlling for region, marital status, age, number of sex partners, and condom use. We included 18 studies, comprising 37,124 women (43,613 woman-years) and 1,830 incident HIV infections. Relative to no HC use, the aHR for HIV acquisition was 1.50 (95% CI 1.24–1.83) for DMPA use, 1.24 (95% CI 0.84–1.82) for NET-EN use, and 1.03 (95% CI 0.88–1.20) for COC use. Between-study heterogeneity was mild (I² < 50%). DMPA use was associated with increased HIV acquisition compared with COC use (aHR 1.43, 95% CI 1.23–1.67) and NET-EN use (aHR 1.32, 95% CI 1.08–1.61). Effect estimates were attenuated for studies at lower risk of methodological bias (compared with no HC use, aHR for DMPA use 1.22, 95% CI 0.99–1.50; for NET-EN use 0.67, 95% CI 0.47–0.96; and for COC use 0.91, 95% CI 0.73–1.41) compared to those at higher risk of bias (p for interaction = 0.003). Neither age nor herpes simplex virus type 2 infection status modified the HC–HIV relationship.
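          The two-stage approach summarized above fits a covariate-adjusted survival model (typically a Cox model) within each study and then pools the study-level estimates. As a rough illustration of the second stage only, the sketch below pools hypothetical log hazard ratios (not the study data) with a DerSimonian–Laird random-effects model and reports a pooled aHR, its 95% CI, and the I² heterogeneity statistic.

# Sketch of stage two of a two-stage random-effects meta-analysis (illustrative
# numbers only; these are not the study estimates). Stage one, a covariate-
# adjusted survival model fitted within each study, is assumed already done.
import numpy as np

log_hr = np.array([0.45, 0.30, 0.55, 0.10, 0.60])  # hypothetical per-study ln(aHR)
se = np.array([0.20, 0.25, 0.30, 0.22, 0.35])      # hypothetical standard errors

# Inverse-variance (fixed-effect) weights, needed for Cochran's Q
w = 1.0 / se**2
mu_fixed = np.sum(w * log_hr) / np.sum(w)

# Heterogeneity: Cochran's Q and the I^2 statistic
k = len(log_hr)
Q = np.sum(w * (log_hr - mu_fixed) ** 2)
I2 = 100 * max(0.0, (Q - (k - 1)) / Q)

# DerSimonian-Laird estimate of the between-study variance tau^2
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate, reported back on the hazard-ratio scale
w_re = 1.0 / (se**2 + tau2)
mu = np.sum(w_re * log_hr) / np.sum(w_re)
se_mu = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled aHR {np.exp(mu):.2f} "
      f"(95% CI {np.exp(mu - 1.96 * se_mu):.2f}-{np.exp(mu + 1.96 * se_mu):.2f}), "
      f"I^2 = {I2:.0f}%")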

          Conclusions

          This IPD meta-analysis found no evidence that COC or NET-EN use increases women’s risk of HIV but adds to the evidence that DMPA may increase HIV risk, underscoring the need for additional safe and effective contraceptive options for women at high HIV risk. A randomized controlled trial would provide more definitive evidence about the effects of hormonal contraception, particularly DMPA, on HIV risk.

          Editors’ Summary

          Background

          AIDS has killed about 36 million people since the first recorded case of the disease in 1981. About 35 million people (including 25 million living in sub-Saharan Africa) are currently infected with HIV, the virus that causes AIDS, and every year, another 2.3 million people become newly infected with HIV. At the beginning of the epidemic, more men than women were infected with HIV. Now, about half of all adults infected with HIV are women. In 2013, almost 60% of all new HIV infections among young people aged 15–24 years occurred among women, and it is estimated that, worldwide, 50 young women are newly infected with HIV every hour. Most women become infected with HIV through unprotected intercourse with an infected male partner—biologically, women are twice as likely to become infected through unprotected intercourse as men. A woman’s risk of becoming infected with HIV can be reduced by abstaining from sex, by having one or a few partners, and by always using condoms.

          Why Was This Study Done?

          Women and societies both benefit from effective contraception. When contraception is available, women can avoid unintended pregnancies, fewer women and babies die during pregnancy and childbirth, and maternal and infant health improves. However, some (but not all) observational studies (investigations that measure associations between the characteristics of participants and their subsequent development of specific diseases) have reported an association between hormonal contraceptive use and an increased risk of HIV acquisition by women. So, does hormonal contraception increase the risk of HIV acquisition among women or not? Here, to investigate this question, the researchers undertake an individual participant data meta-analysis of studies conducted in sub-Saharan Africa (a region where both HIV infection and unintended pregnancies are common) to compare the incidence of HIV infection (the number of new cases in a population during a given time period) among women using and not using hormonal contraception. Meta-analysis is a statistical method that combines the results of several studies; an individual participant data meta-analysis combines the data recorded for each individual involved in the studies rather than the aggregated results from each study.
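          To make the incidence measure concrete, a crude overall rate can be computed from the totals reported in the abstract; this simple calculation ignores contraceptive group and all covariate adjustment, so it only illustrates the definition and is not a study result.

# Crude overall HIV incidence implied by the abstract's totals (illustration
# of "incidence" only; the study's estimates are group-specific and adjusted).
incident_infections = 1830
woman_years = 43613
rate_per_100_wy = 100 * incident_infections / woman_years
print(f"about {rate_per_100_wy:.1f} new HIV infections per 100 woman-years")  # ~4.2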

          What Did the Researchers Do and Find?

          The researchers included 18 studies that measured hormonal contraceptive use and incident HIV infection among women aged 15–49 years living in sub-Saharan Africa in their meta-analysis. More than 37,000 women took part in these studies, and 1,830 became newly infected with HIV. Half of the women were not using hormonal contraception, a quarter were using depot-medroxyprogesterone acetate (DMPA; an injectable hormonal contraceptive), and the remainder were using combined oral contraceptives (COCs) or norethisterone enanthate (NET-EN, another injectable contraceptive). After adjustment for other factors likely to influence HIV acquisition (for example, condom use), women using DMPA had a 1.5-fold increased risk of HIV acquisition compared to women not using hormonal contraception. There was a slightly increased risk of HIV acquisition among women using NET-EN compared to women not using hormonal contraception, but this increase was not statistically significant (it may have happened by chance alone). There was no increased risk of HIV acquisition associated with COC use. DMPA use was associated with a 1.43-fold and 1.32-fold increased risk of HIV acquisition compared with COC and NET-EN use, respectively. Finally, neither age nor herpes simplex virus 2 infection status modified the effect of hormonal contraceptive use on HIV acquisition.
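          The statement that the NET-EN result was not statistically significant can be read directly off its confidence interval, which includes 1.0. Because hazard ratios are analysed on the log scale, an approximate standard error and p-value can also be back-calculated from the CI limits; the sketch below applies this standard conversion to the NET-EN estimate quoted above (aHR 1.24, 95% CI 0.84–1.82), purely as an interpretive illustration.

# Back-calculate an approximate two-sided p-value from a hazard ratio and its
# 95% CI (standard conversion: the CI is symmetric on the log scale).
from math import erf, log, sqrt

def approx_p_from_ci(hr, lo, hi):
    se = (log(hi) - log(lo)) / (2 * 1.96)   # implied SE of ln(HR)
    z = log(hr) / se                        # Wald z-statistic
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal two-sided p-value
    return z, p

# NET-EN versus no hormonal contraception, values from the abstract above
z, p = approx_p_from_ci(1.24, 0.84, 1.82)
print(f"z = {z:.2f}, p = {p:.2f}")  # p is well above 0.05, so not significant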

          What Do These Findings Mean?

          The findings of this individual patient data meta-analysis provide no evidence that COC or NET-EN use increases a woman’s risk of acquiring HIV, but add to the evidence suggesting that DMPA use increases the risk of HIV acquisition. These findings are likely to be more accurate than those of previous meta-analyses that used aggregated data but are likely to be limited by the quality, design, and representativeness of the studies included in the analysis. These findings nevertheless highlight the need to develop additional safe and effective contraceptive options for women at risk of HIV, particularly those living in sub-Saharan Africa, where although contraceptive use is generally low, DMPA is the most widely used hormonal contraceptive. In addition, these findings highlight the need to initiate randomized controlled trials to provide more definitive evidence of the effects of hormonal contraception, particularly DMPA, on HIV risk.

          Additional Information

          Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001778.


                Author and article information

                Contributors
                Role: Academic Editor
                Journal
                PLoS Medicine (PLoS Med)
                Public Library of Science (San Francisco, CA, USA)
                ISSN: 1549-1277 (print); 1549-1676 (electronic)
                January 2015 (issue); published online 22 January 2015
                Volume: 12
                Issue: 1
                Article number: e1001778
                Affiliations
                [1 ]Clinical Sciences, FHI 360, Durham, North Carolina, United States of America
                [2 ]Biostatistics, FHI 360, Durham, North Carolina, United States of America
                [3 ]Department of Global Health, Medicine, and Epidemiology, University of Washington, Seattle, Washington, United States of America
                [4 ]Department of Epidemiology, University of California, Los Angeles, Los Angeles, California, United States of America
                [5 ]Medical Research Council, Comprehensive Clinical Trials Unit at UCL, University College London, London, United Kingdom
                [6 ]Department of Global Health, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
                [7 ]Wits Reproductive Health and HIV Institute, Johannesburg, South Africa
                [8 ]Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, United Kingdom
                [9 ]Population Council, New York, New York, United States of America
                [10 ]Centre for the AIDS Programme of Research in South Africa, University of KwaZulu-Natal, Durban, South Africa
                [11 ]Diversity Research Programs, Multicenter Protocols Group, Memorial Sloan-Kettering Cancer Center, New York, New York, United States of America
                [12 ]Department of Medicine, University of Toronto, Toronto, Ontario, Canada
                [13 ]Department of Social Statistics and Demography, Academic Unit of Primary Care, Population Sciences, University of Southampton, Southampton, United Kingdom
                [14 ]Division of Epidemiology and Biostatistics, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa
                [15 ]Women’s Global Health Imperative, RTI International, San Francisco, California, United States of America
                [16 ]Department of Clinical Research, London School of Hygiene & Tropical Medicine, London, United Kingdom
                [17 ]Department of Clinical Infection, Microbiology and Immunology, Institute of Infection and Global Health, University of Liverpool, Merseyside, United Kingdom
                [18 ]Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
                Johns Hopkins University, United States
                Author notes

                NL is a member of the Editorial Board of PLOS Medicine. All other authors have declared that no competing interests exist.

                Conceived and designed the experiments: CSM PC NL. Performed the experiments: CSM PLC RS NL. Analyzed the data: PC CK. Wrote the paper: CSM PLC CK JMB JB AMC LVD SDM SCF BAF RJH RH SKar QAK SKap RK RSM SM NM LM HR AvdS DWJ JHHMvdW RS NL. All authors have read, and confirm that they meet, ICMJE criteria for authorship.

                Article
                Manuscript: PMEDICINE-D-14-01927
                DOI: 10.1371/journal.pmed.1001778
                PMCID: PMC4303292
                PMID: 25612136
                Copyright © 2015

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 18 June 2014
                Accepted: 4 December 2014
                Page count
                Figures: 3, Tables: 4, Pages: 26
                Funding
                CSM received funding from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health (1R21HD069192-01) and the Bill and Melinda Gates Foundation, Global Health Grant (OPP1066223). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Research Article
                Custom metadata
                We have added a table as requested (S3 Table) giving, for each of the 18 studies, the name and contact information of the person to contact to request the study data.

                Medicine
