      Reporting and Methods in Clinical Prediction Research: A Systematic Review


          Walter Bouwmeester and colleagues investigated the reporting and methods of prediction studies published in 2008 in six high-impact general medical journals and found that the majority of prediction studies do not follow current methodological recommendations.

          Abstract

          Background

          We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.

          Methods and Findings

          We used a full hand search to identify all prediction studies published in 2008 in six high-impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
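          Two of the quantitative checks in this abstract are easy to reproduce. The sketch below is a minimal illustration, using entirely hypothetical simulated data and assuming numpy and scikit-learn are available (none of it comes from the study itself): it computes events per candidate predictor against the commonly recommended minimum of ten, then reports discrimination (the c-statistic) and a crude decile-based calibration check for a logistic prediction model.

            import numpy as np
            from sklearn.linear_model import LogisticRegression
            from sklearn.metrics import roc_auc_score

            # Hypothetical cohort standing in for a prediction study's data.
            rng = np.random.default_rng(0)
            n, n_predictors = 500, 8
            X = rng.normal(size=(n, n_predictors))
            p_true = 1 / (1 + np.exp(-(-2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1])))
            y = rng.binomial(1, p_true)

            # Statistical power: events per candidate predictor (EPV),
            # counting the less frequent outcome as the "event".
            events = min(y.sum(), n - y.sum())
            epv = events / n_predictors
            print(f"EPV = {epv:.1f}", "(below the recommended ten)" if epv < 10 else "")

            model = LogisticRegression(max_iter=1000).fit(X, y)
            pred = model.predict_proba(X)[:, 1]

            # Discrimination: probability that a random event is assigned a
            # higher predicted risk than a random non-event (c-statistic).
            print(f"c-statistic = {roc_auc_score(y, pred):.3f}")

            # Calibration (crude check): mean predicted vs observed risk
            # within each decile of predicted risk.
            for decile in np.array_split(np.argsort(pred), 10):
                print(f"predicted {pred[decile].mean():.2f}  observed {y[decile].mean():.2f}")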

          Conclusions

          The majority of prediction studies in high-impact journals do not follow current methodological recommendations, limiting their reliability and applicability.

          Please see later in the article for the Editors' Summary

          Editors' Summary

          Background

          There are often times in our lives when we would like to be able to predict the future. Is the stock market going to go up, for example, or will it rain tomorrow? Being able to predict future health is also important, both to patients and to physicians, and there is an increasing body of published clinical “prediction research.” Diagnostic prediction research investigates the ability of variables or test results to predict the presence or absence of a specific diagnosis. So, for example, one recent study compared the ability of two imaging techniques to diagnose pulmonary embolism (a blood clot in the lungs). Prognostic prediction research investigates the ability of various markers to predict future outcomes such as the risk of a heart attack. Both types of prediction research can investigate the predictive properties of patient characteristics, single variables, tests, or markers, or combinations of variables, tests, or markers (multivariable studies). Both types of prediction research can also include studies that build multivariable prediction models to guide patient management (model development), studies that test the performance of such models (validation), and studies that quantify the effect of using a prediction model on patient and physician behaviors and outcomes (impact assessment).
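          As a concrete illustration of the development and validation stages just described, the following minimal sketch (entirely hypothetical predictors, outcome, and effect sizes; assumes numpy and scikit-learn) develops a multivariable logistic prediction model in one simulated cohort and then tests its discrimination in a second, independent cohort.

            import numpy as np
            from sklearn.linear_model import LogisticRegression
            from sklearn.metrics import roc_auc_score

            def simulate_cohort(rng, n):
                # Hypothetical cohort: age, systolic blood pressure, smoking status.
                age = rng.normal(60, 10, n)
                sbp = rng.normal(140, 20, n)
                smoker = rng.binomial(1, 0.3, n)
                logit = -12 + 0.08 * age + 0.03 * sbp + 0.7 * smoker
                y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
                return np.column_stack([age, sbp, smoker]), y

            rng = np.random.default_rng(1)
            X_dev, y_dev = simulate_cohort(rng, 1000)  # development cohort
            X_val, y_val = simulate_cohort(rng, 500)   # independent validation cohort

            model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)  # development
            auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
            auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
            print(f"c-statistic: development {auc_dev:.2f}, validation {auc_val:.2f}")

          An impact assessment would go one step further: randomize physicians or patients to use, or not use, the model's predictions, then compare outcomes between the two groups.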

          Why Was This Study Done?

          With the increase in prediction research, there is an increased interest in the methodology of this type of research because poorly done or poorly reported prediction research is likely to have limited reliability and applicability and will, therefore, be of little use in patient management. In this systematic review, the researchers investigate the reporting and methods of prediction studies by examining the aims, design, participant selection, definition and measurement of outcomes and candidate predictors, statistical power and analyses, and performance measures included in multivariable prediction research articles published in 2008 in several general medical journals. In a systematic review, researchers identify all the studies undertaken on a given topic using a predefined set of criteria and systematically analyze the reported methods and results of these studies.

          What Did the Researchers Do and Find?

          The researchers identified all the multivariable prediction studies meeting their predefined criteria that were published in 2008 in six high-impact general medical journals by browsing through all the issues of the journals (a hand search). They then scored the methods and reporting of each study using a comprehensive item list based on recent recommendations for the conduct of prediction research (for example, the reporting recommendations for tumor marker prognostic studies—the REMARK guidelines). Of 71 retrieved studies, 51 were predictor finding studies, 14 were prediction model development studies, three externally validated an existing model, and three reported on a model's impact on participant outcome. Study design, participant selection, definitions of outcomes and predictors, and predictor selection were generally well reported, but other methodological and reporting aspects of the studies were suboptimal. For example, despite many recommendations, continuous predictors were often dichotomized. That is, rather than using the measured value of a variable in a prediction model (for example, blood pressure in a cardiovascular disease prediction model), measurements were frequently assigned to two broad categories. Similarly, many of the studies failed to adequately estimate the sample size needed to minimize bias in predictor effects, and few of the model development papers quantified and validated the proposed model's predictive performance.
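          The cost of dichotomization is easy to demonstrate. In the minimal sketch below (hypothetical blood-pressure data; assumes numpy and scikit-learn), a model given the measured value of the predictor discriminates better than the same model given only a two-category version of it, because cutting at a threshold throws away the ordering information within each category.

            import numpy as np
            from sklearn.linear_model import LogisticRegression
            from sklearn.metrics import roc_auc_score

            # Hypothetical data: risk rises smoothly with systolic blood pressure.
            rng = np.random.default_rng(2)
            sbp = rng.normal(140, 20, 2000)
            y = rng.binomial(1, 1 / (1 + np.exp(-(-7 + 0.045 * sbp))))

            x_cont = sbp.reshape(-1, 1)                         # measured value
            x_dich = (sbp >= 140).astype(float).reshape(-1, 1)  # "high" vs "normal"

            for label, x in [("continuous", x_cont), ("dichotomized", x_dich)]:
                model = LogisticRegression(max_iter=1000).fit(x, y)
                auc = roc_auc_score(y, model.predict_proba(x)[:, 1])
                print(f"{label}: c-statistic = {auc:.2f}")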

          What Do These Findings Mean?

          These findings indicate that, in 2008, most of the prediction research published in high-impact general medical journals failed to follow current guidelines for the conduct and reporting of clinical prediction studies. Because the studies examined here were published in high-impact medical journals, they are likely to be representative of the higher quality studies published in 2008. However, reporting standards may have improved since 2008, and the conduct of prediction research may actually be better than this analysis suggests because the length restrictions that are often applied to journal articles may account for some of the reporting omissions. Nevertheless, despite some encouraging findings, the researchers conclude that the poor reporting and poor methods they found in many published prediction studies are a cause for concern and are likely to limit the reliability and applicability of this type of clinical research.

          Additional Information

          Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001221.


          Most cited references (86)


          The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies.

          Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September, 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed explanation and elaboration document is published separately and is freely available on the websites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE statement will contribute to improving the quality of reporting of observational studies.

            Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors.

            Multivariable regression models are powerful tools that are used frequently in studies of clinical outcomes. These models can use a mixture of categorical and continuous variables and can handle partially observed (censored) responses. However, uncritical application of modelling techniques can result in models that poorly fit the dataset at hand, or, even more likely, inaccurately predict outcomes on new subjects. One must know how to measure qualities of a model's fit in order to avoid poorly fitted or overfitted models. Measurement of predictive accuracy can be difficult for survival time data in the presence of censoring. We discuss an easily interpretable index of predictive discrimination as well as methods for assessing calibration of predicted survival probabilities. Both types of predictive accuracy should be unbiasedly validated using bootstrapping or cross-validation, before using predictions in a new data series. We discuss some of the hazards of poorly fitted and overfitted regression models and present one modelling strategy that avoids many of the problems discussed. The methods described are applicable to all regression models, but are particularly needed for binary, ordinal, and time-to-event outcomes. Methods are illustrated with a survival analysis in prostate cancer using Cox regression.
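            The bootstrap validation recommended here can be sketched in a few lines. The following minimal illustration (hypothetical data; assumes numpy and scikit-learn, with logistic regression standing in for the Cox model of the article's prostate-cancer example) computes an optimism-corrected c-statistic: the model is refitted on bootstrap resamples, the average amount by which bootstrap performance exceeds performance on the original data is taken as the optimism, and that optimism is subtracted from the apparent c-statistic.

            import numpy as np
            from sklearn.linear_model import LogisticRegression
            from sklearn.metrics import roc_auc_score

            # Hypothetical data: 10 candidate predictors, only the first informative.
            rng = np.random.default_rng(3)
            n, k = 300, 10
            X = rng.normal(size=(n, k))
            y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 1))))

            model = LogisticRegression(max_iter=1000).fit(X, y)
            apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])  # optimistic

            optimism = []
            for _ in range(200):
                idx = rng.integers(0, n, n)  # resample rows with replacement
                m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
                auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
                optimism.append(auc_boot - auc_orig)

            print(f"apparent c = {apparent:.3f}, "
                  f"optimism-corrected c = {apparent - np.mean(optimism):.3f}")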

              CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials

              Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed the CONSORT (Consolidated Standards of Reporting Trials) statement to improve the quality of reporting of RCTs. It was first published in 1996 and updated in 2001. The statement consists of a checklist and flow diagram that authors can use for reporting an RCT. Many leading medical journals and major international editorial groups have endorsed the CONSORT statement. The statement facilitates critical appraisal and interpretation of RCTs. During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001 alongside the 2001 version of the CONSORT statement. After an expert meeting in January 2007, the CONSORT statement was further revised and is published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanation and elaboration document, intended to enhance the use, understanding, and dissemination of the CONSORT statement, has also been extensively revised. It presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included. The CONSORT 2010 Statement, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials.

                Author and article information

                Contributors
                Role: Academic Editor (University of Edinburgh, United Kingdom)

                Journal
                PLoS Medicine (PLoS Med)
                Public Library of Science (San Francisco, USA)
                ISSN: 1549-1277 (print); 1549-1676 (electronic)
                Published: 22 May 2012
                Volume 9, Issue 5: e1001221

                Affiliations
                [1] Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands
                [2] Department of Primary Care Health Sciences, University of Oxford, Oxford, United Kingdom
                [3] Department of Public Health, Erasmus MC, Rotterdam, The Netherlands
                [4] Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
                Author notes

                Analyzed the data: WB NPAZ SM KGMM. Wrote the first draft of the manuscript: WB NPAZ. Contributed to the writing of the manuscript: WB NPAZ SM MIG YV EWS DGA KGMM. ICMJE criteria for authorship read and met: WB NPAZ SM MIG YV EWS DGA KGMM. Agree with manuscript results and conclusions: WB NPAZ SM MIG YV EWS DGA KGMM.

                Article
                Manuscript No: PMEDICINE-D-11-01439
                DOI: 10.1371/journal.pmed.1001221
                PMCID: PMC3358324
                PMID: 22629234
                © 2012 Bouwmeester et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
                History
                Received: 20 June 2011; Accepted: 13 April 2012
                Pages: 13
                Categories
                Research Article
                Medicine
                Clinical Research Design
                Epidemiology
