
      COMPare: Qualitative analysis of researchers’ responses to critical correspondence on a cohort of 58 misreported trials

      research-article


          Abstract

          Background

          Discrepancies between pre-specified and reported outcomes are an important and prevalent source of bias in clinical trials. COMPare (Centre for Evidence-Based Medicine Outcome Monitoring Project) monitored all trials in five leading journals for correct outcome reporting, submitted correction letters on all misreported trials in real time, and then monitored responses from editors and trialists. From the trialists’ responses, we aimed to answer two related questions. First, what can trialists’ responses to corrections on their own misreported trials tell us about trialists’ knowledge of correct outcome reporting? Second, what can a cohort of responses to a standardised correction letter tell us about how researchers respond to systematic critical post-publication peer review?

          Methods

          All correspondence from trialists, published by journals in response to a correction letter from COMPare, was filed and indexed. We analysed the letters qualitatively and identified key themes in researchers’ errors about correct outcome reporting, and approaches taken by researchers when their work was criticised.

          Results

          Trialists frequently expressed views that contradicted the CONSORT (Consolidated Standards of Reporting Trials) guidelines or made inaccurate statements about correct outcome reporting. Common themes were: stating that pre-specification after trial commencement is acceptable; incorrect statements about registries; incorrect statements around the handling of multiple time points; and failure to recognise the need to report changes to pre-specified outcomes in the trial report. We identified additional themes in the approaches taken by researchers when responding to critical correspondence, including the following: ad hominem criticism; arguing that trialists should be trusted, rather than follow guidelines for trial reporting; appealing to the existence of a novel category of outcomes whose results need not necessarily be reported; incorrect statements by researchers about their own paper; and statements undermining transparency infrastructure, such as trial registers.

          Conclusions

          Researchers commonly make incorrect statements about correct trial reporting. There are recurring themes in researchers’ responses when their work is criticised, some of which fall short of the scientific ideal. Research on methodological shortcomings is now common, typically in the form of retrospective cohort studies describing the overall prevalence of a problem. We argue that prospective cohort studies which additionally issue correction letters in real time on each individual flawed study—and then follow-up responses from trialists and journals—are more impactful, more informative for those consuming the studies critiqued, more informative on the causes of shortcomings in research, and a better use of research resources.

          Electronic supplementary material

          The online version of this article (10.1186/s13063-019-3172-3) contains supplementary material, which is available to authorized users.

Related collections

Most cited references (5)


          Comparison of registered and published outcomes in randomized controlled trials: a systematic review

Background

Clinical trial registries can improve the validity of trial results by facilitating comparisons between prospectively planned and reported outcomes. Previous reports on the frequency of planned and reported outcome inconsistencies have reported widely discrepant results. It is unknown whether these discrepancies are due to differences between the included trials, or to methodological differences between studies. We aimed to systematically review the prevalence and nature of discrepancies between registered and published outcomes among clinical trials.

Methods

We searched MEDLINE via PubMed, EMBASE, and CINAHL, and checked references of included publications to identify studies that compared trial outcomes as documented in a publicly accessible clinical trials registry with published trial outcomes. Two authors independently selected eligible studies and performed data extraction. We present summary data rather than pooled analyses owing to methodological heterogeneity among the included studies.

Results

Twenty-seven studies were eligible for inclusion. The overall risk of bias among included studies was moderate to high. These studies assessed outcome agreement for a median of 65 individual trials (interquartile range [IQR] 25–110). The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31%; substantial variability in the prevalence of these primary outcome discrepancies was observed among the included studies (range 0% (0/66) to 100% (1/1), IQR 17–45%). We found less variability within the subset of studies that assessed the agreement between prospectively registered outcomes and published outcomes, among which the median observed discrepancy rate was 41% (range 30% (13/43) to 100% (1/1), IQR 33–48%). The nature of observed primary outcome discrepancies also varied substantially between included studies. Among the studies providing detailed descriptions of these outcome discrepancies, a median of 13% of trials introduced a new, unregistered outcome in the published manuscript (IQR 5–16%).

Conclusions

Discrepancies between registered and published outcomes of clinical trials are common regardless of funding mechanism or the journals in which they are published. Consistent reporting of prospectively defined outcomes and consistent utilization of registry data during the peer review process may improve the validity of clinical trial publications.

Electronic supplementary material

The online version of this article (doi:10.1186/s12916-015-0520-3) contains supplementary material, which is available to authorized users.

            COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time

Background

Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it.

Methods

We identified five high-impact journals endorsing Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for all misreported trials, and detailed coding sheets were shared publicly. The proportion of letters published and delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented and themes were extracted.

Results

Sixty-seven trials were assessed in total. Outcome reporting was poor overall and there was wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported, journal range 25–96%), secondary outcomes (mean 55%, range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4, range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%, journal range 67–100%). Twenty-three letters were published (40%) with extensive variation between journals (range 0–100%). Where letters were published, there were delays (median 99 days, range 0–257 days). Twenty-nine studies had a pre-trial protocol publicly available (43%, range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided correspondence that identified misreporting; we identified possible breaches of ethics and publishing guidelines.

Conclusions

All five journals were listed as endorsing CONSORT, but all exhibited extensive breaches of this guidance, and most rejected correction letters documenting shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research sharing all data openly and pro-actively in real time as feedback on critiqued studies. This is the first empirical study of major academic journals’ willingness to publish a cohort of comparable and objective correction letters on misreported high-impact studies. Suggested improvements include changes to correspondence processes at journals, alternatives for indexed post-publication peer review, changes to CONSORT’s mechanisms for enforcement, and novel strategies for research on methods and reporting.

Electronic supplementary material

The online version of this article (10.1186/s13063-019-3173-2) contains supplementary material, which is available to authorized users.

              Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists

Objectives

To provide information on the frequency and reasons for outcome reporting bias in clinical trials.

Design

Trial protocols were compared with subsequent publication(s) to identify any discrepancies in the outcomes reported, and telephone interviews were conducted with the respective trialists to investigate more extensively the reporting of the research and the issue of unreported outcomes.

Participants

Chief investigators, or lead or coauthors of trials, were identified from two sources: trials published since 2002 covered in Cochrane systematic reviews where at least one trial analysed was suspected of being at risk of outcome reporting bias (issue 4, 2006; issue 1, 2007, and issue 2, 2007 of the Cochrane library); and a random sample of trial reports indexed on PubMed between August 2007 and July 2008.

Setting

Australia, Canada, Germany, the Netherlands, New Zealand, the United Kingdom, and the United States.

Main outcome measures

Frequency of incomplete outcome reporting, signified by outcomes that were specified in a trial’s protocol but not fully reported in subsequent publications, and trialists’ reasons for incomplete reporting of outcomes.

Results

268 trials were identified for inclusion (183 from the cohort of Cochrane systematic reviews and 85 from PubMed). Initially, 161 respective investigators responded to our requests for interview, 130 (81%) of whom agreed to be interviewed. However, failure to achieve subsequent contact, obtain a copy of the study protocol, or both meant that final interviews were conducted with 59 (37%) of the 161 trialists. Sixteen trial investigators failed to report analysed outcomes at the time of the primary publication, 17 trialists collected outcome data that were subsequently not analysed, and five trialists did not measure a prespecified outcome over the course of the trial. In almost all trials in which prespecified outcomes had been analysed but not reported (15/16, 94%), this under-reporting resulted in bias. In nearly a quarter of trials in which prespecified outcomes had been measured but not analysed (4/17, 24%), the “direction” of the main findings influenced the investigators’ decision not to analyse the remaining data collected. In 14 (67%) of the 21 randomly selected PubMed trials, there was at least one unreported efficacy or harm outcome. More than a quarter (6/21, 29%) of these trials were found to have displayed outcome reporting bias.

Conclusion

The prevalence of incomplete outcome reporting is high. Trialists seemed generally unaware of the implications for the evidence base of not reporting all outcomes and protocol changes. A general lack of consensus regarding the choice of outcomes in particular clinical settings was evident and affects trial design, conduct, analysis, and reporting.

                Author and article information

                Contributors
                ben.goldacre@phc.ox.ac.uk
                cicely.marston@lshtm.ac.uk
                kamal.mahtani@phc.ox.ac.uk
                carl.heneghan@phc.ox.ac.uk
Journal
Trials
BioMed Central (London)
ISSN: 1745-6215
Published: 14 February 2019
Volume 20, article number 124
                Affiliations
[1] Centre for Evidence-Based Medicine, Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
[2] Department of Social and Environmental Health Research, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK
                Author information
                http://orcid.org/0000-0002-5127-4728
Article
DOI: 10.1186/s13063-019-3172-3
PMCID: 6374909
PMID: 30760328
                © The Author(s). 2019

                Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

History
Received: 25 July 2017
Accepted: 2 January 2019
Funding
Funded by: Laura and John Arnold Foundation (FundRef http://dx.doi.org/10.13039/100009827)
Award ID: BZR00320
                Categories
                Research
Keywords
Medicine
outcomes, misreporting, trials, consort, audit, correction letters
