      SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials


          High quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol.

          This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available.

          The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.

          Related collections

          Most cited references (328)


          CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials

          The CONSORT statement is used worldwide to improve the reporting of randomised controlled trials. Kenneth Schulz and colleagues describe the latest version, CONSORT 2010, which updates the reporting guideline based on new methodological evidence and accumulating experience.

            Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls

            Most studies have some missing data. Jonathan Sterne and colleagues describe the appropriate use and reporting of the multiple imputation approach to dealing with them.

              CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials

              Abstract Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed the CONSORT (Consolidated Standards of Reporting Trials) statement to improve the quality of reporting of RCTs. It was first published in 1996 and updated in 2001. The statement consists of a checklist and flow diagram that authors can use for reporting an RCT. Many leading medical journals and major international editorial groups have endorsed the CONSORT statement. The statement facilitates critical appraisal and interpretation of RCTs. During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001 alongside the 2001 version of the CONSORT statement. After an expert meeting in January 2007, the CONSORT statement has been further revised and is published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanatory and elaboration document—intended to enhance the use, understanding, and dissemination of the CONSORT statement—has also been extensively revised. 
It presents the meaning and rationale for each new and updated checklist item providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included. The CONSORT 2010 Statement, this revised explanatory and elaboration document, and the associated website should be helpful resources to improve reporting of randomised trials.

“The whole of medicine depends on the transparent reporting of clinical trials.”1

Well designed and properly executed randomised controlled trials (RCTs) provide the most reliable evidence on the efficacy of healthcare interventions, but trials with inadequate methods are associated with bias, especially exaggerated treatment effects.2 3 4 5 Biased results from poorly designed and reported trials can mislead decision making in health care at all levels, from treatment decisions for a patient to formulation of national public health policies. Critical appraisal of the quality of clinical trials is possible only if the design, conduct, and analysis of RCTs are thoroughly and accurately described in the report. Far from being transparent, the reporting of RCTs is often incomplete,6 7 8 9 compounding problems arising from poor methodology.10 11 12 13 14 15

Incomplete and inaccurate reporting

Many reviews have documented deficiencies in reports of clinical trials. For example, information on the method used in a trial to assign participants to comparison groups was reported in only 21% of 519 trial reports indexed in PubMed in 2000,16 and only 34% of 616 reports indexed in 2006.17 Similarly, only 45% of trial reports indexed in PubMed in 200016 and 53% in 200617 defined a primary end point, and only 27% in 2000 and 45% in 2006 reported a sample size calculation. Reporting is not only often incomplete but also sometimes inaccurate.
Of 119 reports stating that all participants were included in the analysis in the groups to which they were originally assigned (intention-to-treat analysis), 15 (13%) excluded patients or did not analyse all patients as allocated.18 Many other reviews have found that inadequate reporting is common in specialty journals16 19 and journals published in languages other than English.20 21

Proper randomisation reduces selection bias at trial entry and is the crucial component of high quality RCTs.22 Successful randomisation hinges on two steps: generation of an unpredictable allocation sequence and concealment of this sequence from the investigators enrolling participants (see box 1).2 23

Box 1: Treatment allocation. What’s so special about randomisation?

The method used to assign interventions to trial participants is a crucial aspect of clinical trial design. Random assignment is the preferred method; it has been successfully used regularly in trials for more than 50 years.24 Randomisation has three major advantages.25 First, when properly implemented, it eliminates selection bias, balancing both known and unknown prognostic factors, in the assignment of treatments. Without randomisation, treatment comparisons may be prejudiced, whether consciously or not, by selection of participants of a particular kind to receive a particular treatment. Second, random assignment permits the use of probability theory to express the likelihood that any difference in outcome between intervention groups merely reflects chance.26 Third, random allocation, in some situations, facilitates blinding the identity of treatments to the investigators, participants, and evaluators, possibly by use of a placebo, which reduces bias after assignment of treatments.27 Of these three advantages, reducing selection bias at trial entry is usually the most important.28

Successful randomisation in practice depends on two interrelated aspects—adequate generation of an unpredictable allocation sequence and concealment of that sequence until assignment occurs.2 23 A key issue is whether the schedule is known or predictable by the people involved in allocating participants to the comparison groups.29 The treatment allocation system should thus be set up so that the person enrolling participants does not know in advance which treatment the next person will get, a process termed allocation concealment.2 23 Proper allocation concealment shields knowledge of forthcoming assignments, whereas proper random sequences prevent correct anticipation of future assignments based on knowledge of past assignments. Unfortunately, despite that central role, reporting of the methods used for allocation of participants to interventions is also generally inadequate.
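To make the first of the two interrelated aspects concrete, the following Python sketch generates a 1:1 allocation sequence using randomly permuted blocks (the restriction referred to in checklist item 8b). This is purely illustrative: the function name, block size, and seed are arbitrary choices for the example, not part of CONSORT, and a real trial would use a validated system.

```python
import random

def permuted_block_sequence(n_participants, block_size=4, arms=("A", "B"), seed=2013):
    """Illustrative sketch: a 1:1 allocation sequence in randomly permuted blocks.

    Each block contains every arm equally often, so group sizes stay balanced
    throughout recruitment, while the order within each block is random.
    """
    if block_size % len(arms) != 0:
        raise ValueError("block size must be a multiple of the number of arms")
    rng = random.Random(seed)          # fixed seed only so the example is reproducible
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * per_arm   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)             # unpredictable order within the block
        sequence.extend(block)
    return sequence[:n_participants]

print(permuted_block_sequence(12))     # twelve assignments, 6 per arm
```

Generating such a sequence is only half the story: as the text stresses, it must then be concealed from the people enrolling participants (for example via sequentially numbered opaque containers or central randomisation), and small fixed block sizes can themselves make late assignments in a block predictable if the block size is known.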
For example, 5% of 206 reports of supposed RCTs in obstetrics and gynaecology journals described studies that were not truly randomised.23 This estimate is conservative, as most reports do not at present provide adequate information about the method of allocation.20 23 30 31 32 33

Improving the reporting of RCTs: the CONSORT statement

DerSimonian and colleagues suggested that “editors could greatly improve the reporting of clinical trials by providing authors with a list of items that they expected to be strictly reported.”34 Early in the 1990s, two groups of journal editors, trialists, and methodologists independently published recommendations on the reporting of trials.35 36 In a subsequent editorial, Rennie urged the two groups to meet and develop a common set of recommendations;37 the outcome was the CONSORT statement (Consolidated Standards of Reporting Trials).38

The CONSORT statement (or simply CONSORT) comprises a checklist of essential items that should be included in reports of RCTs and a diagram for documenting the flow of participants through a trial. It is aimed at primary reports of RCTs with two group, parallel designs. Most of CONSORT is also relevant to a wider class of trial designs, such as non-inferiority, equivalence, factorial, cluster, and crossover trials. Extensions to the CONSORT checklist for reporting trials with some of these designs have been published,39 40 41 as have those for reporting certain types of data (harms42), types of interventions (non-pharmacological treatments,43 herbal interventions44), and abstracts.45

The objective of CONSORT is to provide guidance to authors about how to improve the reporting of their trials. Trial reports need to be clear, complete, and transparent. Readers, peer reviewers, and editors can also use CONSORT to help them critically appraise and interpret reports of RCTs. However, CONSORT was not meant to be used as a quality assessment instrument. Rather, the content of CONSORT focuses on items related to the internal and external validity of trials. Many items not explicitly mentioned in CONSORT should also be included in a report, such as information about approval by an ethics committee, obtaining informed consent from participants, and, where relevant, the existence of a data safety and monitoring committee. In addition, any other aspects of a trial that are mentioned should be properly reported, such as information pertinent to cost effectiveness analysis.46 47 48

Since its publication in 1996, CONSORT has been supported by more than 400 journals and several editorial groups, such as the International Committee of Medical Journal Editors.49 The introduction of CONSORT within journals is associated with improved quality of reports of RCTs.17 50 51 However, CONSORT is an ongoing initiative, and the CONSORT statement is revised periodically.3 CONSORT was last revised nine years ago, in 2001.52 53 54 Since then the evidence base to inform CONSORT has grown considerably; empirical data have highlighted new concerns regarding the reporting of RCTs, such as selective outcome reporting.55 56 57 A CONSORT Group meeting was therefore convened in January 2007, in Canada, to revise the 2001 CONSORT statement and its accompanying explanation and elaboration document.
The revised checklist is shown in table 1 and the flow diagram, not revised, in fig 1.52 53 54

Table 1  CONSORT 2010 checklist of information to include when reporting a randomised trial*
(Columns: Section/Topic; Item No; Checklist item; Reported on page No, left blank for authors to complete.)

Title and abstract
 1a  Identification as a randomised trial in the title
 1b  Structured summary of trial design, methods, results, and conclusions (for specific guidance see CONSORT for abstracts45 65)

Introduction
Background and objectives
 2a  Scientific background and explanation of rationale
 2b  Specific objectives or hypotheses

Methods
Trial design
 3a  Description of trial design (such as parallel, factorial) including allocation ratio
 3b  Important changes to methods after trial commencement (such as eligibility criteria), with reasons
Participants
 4a  Eligibility criteria for participants
 4b  Settings and locations where the data were collected
Interventions
 5   The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
Outcomes
 6a  Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed
 6b  Any changes to trial outcomes after the trial commenced, with reasons
Sample size
 7a  How sample size was determined
 7b  When applicable, explanation of any interim analyses and stopping guidelines
Randomisation: sequence generation
 8a  Method used to generate the random allocation sequence
 8b  Type of randomisation; details of any restriction (such as blocking and block size)
Randomisation: allocation concealment mechanism
 9   Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
Randomisation: implementation
 10  Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
Blinding
 11a If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how
 11b If relevant, description of the similarity of interventions
Statistical methods
 12a Statistical methods used to compare groups for primary and secondary outcomes
 12b Methods for additional analyses, such as subgroup analyses and adjusted analyses

Results
Participant flow (a diagram is strongly recommended)
 13a For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome
 13b For each group, losses and exclusions after randomisation, together with reasons
Recruitment
 14a Dates defining the periods of recruitment and follow-up
 14b Why the trial ended or was stopped
Baseline data
 15  A table showing baseline demographic and clinical characteristics for each group
Numbers analysed
 16  For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups
Outcomes and estimation
 17a For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)
 17b For binary outcomes, presentation of both absolute and relative effect sizes is recommended
Ancillary analyses
 18  Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory
Harms
 19  All important harms or unintended effects in each group (for specific guidance see CONSORT for harms42)

Discussion
Limitations
 20  Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses
Generalisability
 21  Generalisability (external validity, applicability) of the trial findings
Interpretation
 22  Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence

Other information
Registration
 23  Registration number and name of trial registry
Protocol
 24  Where the full trial protocol can be accessed, if available
Funding
 25  Sources of funding and other support (such as supply of drugs), role of funders

*We strongly recommend reading this statement in conjunction with the CONSORT 2010 Explanation and Elaboration for important clarifications on all the items. If relevant, we also recommend reading CONSORT extensions for cluster randomised trials,40 non-inferiority and equivalence trials,39 non-pharmacological treatments,43 herbal interventions,44 and pragmatic trials.41 Additional extensions are forthcoming: for those and for up-to-date references relevant to this checklist, see the CONSORT website.

Fig 1 Flow diagram of the progress through the phases of a parallel randomised trial of two groups (that is, enrolment, intervention allocation, follow-up, and data analysis)52 53 54

The CONSORT 2010 Statement: explanation and elaboration

During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. The CONSORT explanation and elaboration article58 was published in 2001 alongside the 2001 version of the CONSORT statement. It discussed the rationale and scientific background for each item and provided published examples of good reporting. The rationale for revising that article is similar to that for revising the statement, described above. We briefly describe below the main additions and deletions to this version of the explanation and elaboration article.

The CONSORT 2010 Explanation and Elaboration: changes

We have made several substantive and some cosmetic changes to this version of the CONSORT explanatory document (full details are highlighted in the 2010 version of the CONSORT statement59). Some reflect changes to the CONSORT checklist; there are three new checklist items in the CONSORT 2010 checklist—such as item 24, which asks authors to report where their trial protocol can be accessed.
We have also updated some existing explanations, including adding more recent references to methodological evidence, and used some better examples. We have removed the glossary, which is now available on the CONSORT website. Where possible, we describe the findings of relevant empirical studies. Many excellent books on clinical trials offer fuller discussion of methodological issues.60 61 62 Finally, for convenience, we sometimes refer to “treatments” and “patients,” although we recognise that not all interventions evaluated in RCTs are treatments and not all participants are patients.

Checklist items

Title and abstract

Item 1a. Identification as a randomised trial in the title

Example—“Smoking reduction with oral nicotine inhalers: double blind, randomised clinical trial of efficacy and safety.”63

Explanation—The ability to identify a report of a randomised trial in an electronic database depends to a large extent on how it was indexed. Indexers may not classify a report as a randomised trial if the authors do not explicitly report this information.64 To help ensure that a study is appropriately indexed and easily identified, authors should use the word “randomised” in the title to indicate that the participants were randomly assigned to their comparison groups.

Item 1b. Structured summary of trial design, methods, results, and conclusions

For specific guidance see CONSORT for abstracts.45 65

Explanation—Clear, transparent, and sufficiently detailed abstracts are important because readers often base their assessment of a trial on such information. Some readers use an abstract as a screening tool to decide whether to read the full article.
However, as not all trials are freely available and some health professionals do not have access to the full trial reports, healthcare decisions are sometimes made on the basis of abstracts of randomised trials.66 A journal abstract should contain sufficient information about a trial to serve as an accurate record of its conduct and findings, providing optimal information about the trial within the space constraints and format of a journal. A properly constructed and written abstract helps individuals to assess quickly the relevance of the findings and aids the retrieval of relevant reports from electronic databases.67 The abstract should accurately reflect what is included in the full journal article and should not include information that does not appear in the body of the paper. Studies comparing the accuracy of information reported in a journal abstract with that reported in the text of the full publication have found claims that are inconsistent with, or missing from, the body of the full article.68 69 70 71 Conversely, omitting important harms from the abstract could seriously mislead someone’s interpretation of the trial findings.42 72 A recent extension to the CONSORT statement provides a list of essential items that authors should include when reporting the main results of a randomised trial in a journal (or conference) abstract (see table 2).45 We strongly recommend the use of structured abstracts for reporting randomised trials. They provide readers with information about the trial under a series of headings pertaining to the design, conduct, analysis, and interpretation.73 Some studies have found that structured abstracts are of higher quality than the more traditional descriptive abstracts74 75 and that they allow readers to find information more easily.76 We recognise that many journals have developed their own structure and word limit for reporting abstracts. 
It is not our intention to suggest changes to these formats, but to recommend what information should be reported.

Table 2  Items to include when reporting a randomised trial in a journal abstract

Authors: Contact details for the corresponding author
Trial design: Description of the trial design (such as parallel, cluster, non-inferiority)
Methods:
 Participants: Eligibility criteria for participants and the settings where the data were collected
 Interventions: Interventions intended for each group
 Objective: Specific objective or hypothesis
 Outcome: Clearly defined primary outcome for this report
 Randomisation: How participants were allocated to interventions
 Blinding (masking): Whether participants, care givers, and those assessing the outcomes were blinded to group assignment
Results:
 Numbers randomised: Number of participants randomised to each group
 Recruitment: Trial status
 Numbers analysed: Number of participants analysed in each group
 Outcome: For the primary outcome, a result for each group and the estimated effect size and its precision
 Harms: Important adverse events or side effects
Conclusions: General interpretation of the results
Trial registration: Registration number and name of trial register
Funding: Source of funding

Introduction

Item 2a. Scientific background and explanation of rationale

Example—“Surgery is the treatment of choice for patients with disease stage I and II non-small cell lung cancer (NSCLC) … An NSCLC meta-analysis combined the results from eight randomised trials of surgery versus surgery plus adjuvant cisplatin-based chemotherapy and showed a small, but not significant (p=0.08), absolute survival benefit of around 5% at 5 years (from 50% to 55%).
At the time the current trial was designed (mid-1990s), adjuvant chemotherapy had not become standard clinical practice … The clinical rationale for neo-adjuvant chemotherapy is three-fold: regression of the primary cancer could be achieved thereby facilitating and simplifying or reducing subsequent surgery; undetected micro-metastases could be dealt with at the start of treatment; and there might be inhibition of the putative stimulus to residual cancer by growth factors released by surgery and by subsequent wound healing … The current trial was therefore set up to compare, in patients with resectable NSCLC, surgery alone versus three cycles of platinum-based chemotherapy followed by surgery in terms of overall survival, quality of life, pathological staging, resectability rates, extent of surgery, and time to and site of relapse.”77

Explanation—Typically, the introduction consists of free flowing text, in which authors explain the scientific background and rationale for their trial, and its general outline. It may also be appropriate to include here the objectives of the trial (see item 2b). The rationale may be explanatory (for example, to assess the possible influence of a drug on renal function) or pragmatic (for example, to guide practice by comparing the benefits and harms of two treatments). Authors should report any evidence of the benefits and harms of active interventions included in a trial and should suggest a plausible explanation for how the interventions might work, if this is not obvious.78 The Declaration of Helsinki states that biomedical research involving people should be based on a thorough knowledge of the scientific literature.79 That is, it is unethical to expose humans unnecessarily to the risks of research.
Some clinical trials have been shown to have been unnecessary because the question they addressed had been or could have been answered by a systematic review of the existing literature.80 81 Thus, the need for a new trial should be justified in the introduction. Ideally, it should include a reference to a systematic review of previous similar trials or a note of the absence of such trials.82

Item 2b. Specific objectives or hypotheses

Example—“In the current study we tested the hypothesis that a policy of active management of nulliparous labour would: 1. reduce the rate of caesarean section, 2. reduce the rate of prolonged labour; 3. not influence maternal satisfaction with the birth experience.”83

Explanation—Objectives are the questions that the trial was designed to answer. They often relate to the efficacy of a particular therapeutic or preventive intervention. Hypotheses are pre-specified questions being tested to help meet the objectives. Hypotheses are more specific than objectives and are amenable to explicit statistical evaluation. In practice, objectives and hypotheses are not always easily differentiated. Most reports of RCTs provide adequate information about trial objectives and hypotheses.84

Methods

Item 3a. Description of trial design (such as parallel, factorial) including allocation ratio

Example—“This was a multicenter, stratified (6 to 11 years and 12 to 17 years of age, with imbalanced randomisation [2:1]), double-blind, placebo-controlled, parallel-group study conducted in the United States (41 sites).”85

Explanation—The word “design” is often used to refer to all aspects of how a trial is set up, but it also has a narrower interpretation. Many specific aspects of the broader trial design, including details of randomisation and blinding, are addressed elsewhere in the CONSORT checklist.
Here we seek information on the type of trial, such as parallel group or factorial, and the conceptual framework, such as superiority or non-inferiority, and other related issues not addressed elsewhere in the checklist. The CONSORT statement focuses mainly on trials with participants individually randomised to one of two “parallel” groups. In fact, little more than half of published trials have such a design.16 The main alternative designs are multi-arm parallel, crossover, cluster,40 and factorial designs. Also, most trials are set to identify the superiority of a new intervention, if it exists, but others are designed to assess non-inferiority or equivalence.39 It is important that researchers clearly describe these aspects of their trial, including the unit of randomisation (such as patient, GP practice, lesion). It is desirable also to include these details in the abstract (see item 1b). If a less common design is employed, authors are encouraged to explain their choice, especially as such designs may imply the need for a larger sample size or more complex analysis and interpretation. Although most trials use equal randomisation (such as 1:1 for two groups), it is helpful to provide the allocation ratio explicitly. For drug trials, specifying the phase of the trial (I-IV) may also be relevant.

Item 3b. Important changes to methods after trial commencement (such as eligibility criteria), with reasons

Example—“Patients were randomly assigned to one of six parallel groups, initially in 1:1:1:1:1:1 ratio, to receive either one of five otamixaban … regimens … or an active control of unfractionated heparin … an independent Data Monitoring Committee reviewed unblinded data for patient safety; no interim analyses for efficacy or futility were done. During the trial, this committee recommended that the group receiving the lowest dose of otamixaban (0·035 mg/kg/h) be discontinued because of clinical evidence of inadequate anticoagulation.
The protocol was immediately amended in accordance with that recommendation, and participants were subsequently randomly assigned in 2:2:2:2:1 ratio to the remaining otamixaban and control groups, respectively.”86

Explanation—A few trials may start without any fixed plan (that is, are entirely exploratory), but most will have a protocol that specifies in great detail how the trial will be conducted. There may be deviations from the original protocol, as it is impossible to predict every possible change in circumstances during the course of a trial. Some trials will therefore have important changes to the methods after trial commencement. Changes could be due to external information becoming available from other studies, or internal financial difficulties, or could be due to a disappointing recruitment rate. Such protocol changes should be made without breaking the blinding on the accumulating data on participants’ outcomes. In some trials, an independent data monitoring committee will have as part of its remit the possibility of recommending protocol changes based on seeing unblinded data. Such changes might affect the study methods (such as changes to treatment regimens, eligibility criteria, randomisation ratio, or duration of follow-up) or trial conduct (such as dropping a centre with poor data quality).87

Some trials are set up with a formal “adaptive” design. There is no universally accepted definition of these designs, but a working definition might be “a multistage study design that uses accumulating data to decide how to modify aspects of the study without undermining the validity and integrity of the trial.”88 The modifications are usually to the sample sizes and the number of treatment arms and can lead to decisions being made more quickly and with more efficient use of resources.
There are, however, important ethical, statistical, and practical issues in considering such a design.89 90 Whether the modifications are explicitly part of the trial design or in response to changing circumstances, it is essential that they are fully reported to help the reader interpret the results. Changes from protocols are not currently well reported. A review of comparisons with protocols showed that about half of journal articles describing RCTs had an unexplained discrepancy in the primary outcomes.57 Frequent unexplained discrepancies have also been observed for details of randomisation, blinding,91 and statistical analyses.92

Item 4a. Eligibility criteria for participants

Example—“Eligible participants were all adults aged 18 or over with HIV who met the eligibility criteria for antiretroviral therapy according to the Malawian national HIV treatment guidelines (WHO clinical stage III or IV or any WHO stage with a CD4 count …”

[…]

Multi-arm (>2 group) parallel group trials need the least modification of the standard CONSORT guidance. The flow diagram can be extended easily. The main differences from trials with two groups relate to clarification of how the study hypotheses relate to the multiple groups, and the consequent methods of data analysis and interpretation. For factorial trials, the possibility of interaction between the interventions generally needs to be considered. In addition to overall comparisons of participants who did or did not receive each intervention under study, investigators should consider also reporting results for each treatment combination.303 In crossover trials, each participant receives two (or more) treatments in a random order. The main additional issues to address relate to the paired nature of the data, which affect design and analysis.304 Similar issues affect within-person comparisons, in which participants receive two treatments simultaneously (often to paired organs).
Also, because of the risk of temporal or systemic carryover effects, respectively, the choice of design in both cases needs justification. The CONSORT Group intends to publish extensions to CONSORT to cover all these designs. In addition, we will publish updates to existing guidance for cluster randomised trials and non-inferiority and equivalence trials to take account of this major update of the generic CONSORT guidance.

Discussion

Assessment of healthcare interventions can be misleading unless investigators ensure unbiased comparisons. Random allocation to study groups remains the only method that eliminates selection and confounding biases. Non-randomised trials tend to result in larger estimated treatment effects than randomised trials.305 306 Bias jeopardises even RCTs, however, if investigators carry out such trials improperly.307 A recent systematic review, aggregating the results of several methodological investigations, found that, for subjective outcomes, trials that used inadequate or unclear allocation concealment yielded 31% larger estimates of effect than those that used adequate concealment, and trials that were not blinded yielded 25% larger estimates.153 As might be expected, there was a strong association between the two. The design and implementation of an RCT require methodological as well as clinical expertise, meticulous effort,143 308 and a high level of alertness for unanticipated difficulties. Reports of RCTs should be written with similarly close attention to reducing bias. Readers should not have to speculate; the methods used should be complete and transparent so that readers can readily differentiate trials with unbiased results from those with questionable results.
Sound science encompasses adequate reporting, and the conduct of ethical trials rests on the footing of sound science.309 We hope this update of the CONSORT explanatory article will assist authors in using the 2010 version of CONSORT and explain in general terms the importance of adequate reporting of trials. The CONSORT statement can help researchers designing trials in future310 and can guide peer reviewers and editors in their evaluation of manuscripts. Indeed, we encourage peer reviewers and editors to use the CONSORT checklist to assess whether authors have reported on these items. Such assessments will likely improve the clarity and transparency of published trials. Because CONSORT is an evolving document, it requires a dynamic process of continual assessment, refinement, and, if necessary, change, which is why we have this update of the checklist and explanatory article. As new evidence and critical comments accumulate, we will evaluate the need for future updates. The first version of the CONSORT statement, from 1996, seems to have led to improvement in the quality of reporting of RCTs in the journals that have adopted it.50 51 52 53 54 Other groups are using the CONSORT template to improve the reporting of other research designs, such as diagnostic tests311 and observational studies.312

The CONSORT website has been established to provide educational material and a repository database of materials relevant to the reporting of RCTs. The site includes many examples from real trials, including all of the examples included in this article. We will continue to add good and bad examples of reporting to the database, and we invite readers to submit further suggestions by contacting us through the website. The CONSORT Group will continue to survey the literature to find relevant articles that address issues relevant to the reporting of RCTs, and we invite authors of any such articles to notify us about them.
All of this information will be made accessible through the CONSORT website, which is updated regularly. More than 400 leading general and specialty journals and biomedical editorial groups, including the ICMJE, World Association of Medical Journal Editors, and the Council of Science Editors, have given their official support to CONSORT. We invite other journals concerned about the quality of reporting of clinical trials to endorse the CONSORT statement and contact us through our website to let us know of their support. The ultimate beneficiaries of these collective efforts should be people who, for whatever reason, require intervention from the healthcare community.

                Author and article information

                Role: Phelan scientist
                Role: research coordinator
                Role: professor and director
                Role: professor and director
                Role: programme associate
                Role: vice president, epidemiology
                Role: professor and director
                Role: senior researcher
                Role: distinguished scientist
                Role: associate professor
                Role: adjunct professor
                Role: professor
                Role: senior scientist
                BMJ : British Medical Journal
                BMJ Publishing Group Ltd.
                9 January 2013
                : 346
                [1 ]Women’s College Research Institute at Women’s College Hospital, Department of Medicine, University of Toronto, Toronto, Canada, M5G 1N8
                [2 ]Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
                [3 ]Nordic Cochrane Centre, Rigshospitalet, Copenhagen, Denmark
                [4 ]Centre for Statistics in Medicine, University of Oxford, Oxford, UK
                [5 ]Division of Medical Ethics and Humanities, University of Utah School of Medicine, Salt Lake City, USA
                [6 ]Janssen Research and Development, Titusville, USA
                [7 ]Center for Clinical Trials, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
                [8 ]Quantitative Sciences, FHI 360, Research Triangle Park, USA
                [9 ]NCIC Clinical Trials Group, Cancer Research Institute, Queen’s University, Kingston, Canada
                [10 ]Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada
                [11 ]Keenan Research Centre at the Li Ka Shing Knowledge Institute of St Michael’s Hospital, Faculty of Medicine, University of Toronto, Toronto, Canada
                Author notes
                Correspondence to: A-W Chan anwen.chan@
                © Chan et al 2013

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license.

                Research Methods & Reporting
