
      No short-cut in assessing trial quality: a case study

      research-article
      Trials
      BioMed Central


          Abstract

          Background

          Assessing the quality of included trials is a central part of a systematic review, and many check-list type instruments exist for this purpose. Using a trial of antibiotic treatment for acute otitis media (Burke et al., BMJ, 1991) as the case study, this paper illustrates some limitations of the check-list approach to trial quality assessment.

          Results

          The general verdict of the check-list type evaluations in nine relevant systematic reviews was that Burke et al. (1991) is a good-quality trial, and all the relevant meta-analyses used its data extensively to formulate therapeutic evidence. My comprehensive evaluation, on the other hand, brought to the surface a series of serious problems in the design, conduct, analysis, and reporting of this trial that were missed by the earlier evaluations.

          Conclusion

          A check-list or instrument-based approach, if used as a short-cut, may at times rate deeply flawed trials as good-quality trials. Check-lists are crucial, but they need to be augmented with an in-depth review and, where possible, a scrutiny of the protocol, trial records, and original data. The extent and severity of the problems I uncovered for this particular trial warrant an independent audit before it is included in a systematic review.
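          As a concrete illustration of this point, here is a minimal sketch (my own illustration, not the instrument used in the paper or in any of the nine reviews; the item names and scoring rule are hypothetical): a check-list collapsed into a summary score can rate a trial as good quality even though a single critical item fails.

```python
# Hypothetical check-list sketch: a summary score can mask a fatal flaw.
CHECKLIST_ITEMS = [
    "randomization method described",
    "allocation concealment adequate",
    "outcome assessment blinded",
    "withdrawals accounted for",
    "analysis matches protocol",
]

def summary_score(assessment: dict) -> float:
    """Fraction of check-list items judged adequate -- the short-cut rating."""
    return sum(assessment[item] for item in CHECKLIST_ITEMS) / len(CHECKLIST_ITEMS)

# Hypothetical trial: four of five items look fine, but the analysis departs
# from the protocol -- the kind of problem only an in-depth review reveals.
trial = {
    "randomization method described": True,
    "allocation concealment adequate": True,
    "outcome assessment blinded": True,
    "withdrawals accounted for": True,
    "analysis matches protocol": False,
}

print(summary_score(trial))   # 0.8 -- reads as a "good quality" trial
print(all(trial.values()))    # False -- yet a critical item fails
```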


          Most cited references (51)

          Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials.

          To determine whether inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects, we conducted an observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses in the Cochrane Pregnancy and Childbirth Database and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects. The outcome measures were the associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding.

          Compared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (a concealment approach not reported or incompletely reported) yielded larger estimates of treatment effects (P < .001). Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials (adjusted for other aspects of quality). Trials in which participants had been excluded after randomization did not yield larger estimates of effects, but that lack of association may be due to incomplete reporting. Trials that were not double-blind also yielded larger estimates of effects (P = .01), with odds ratios exaggerated by 17%.

          This study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials.
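          The "exaggerated by 41%" and "30%" figures are commonly read as ratios of odds ratios (the odds ratio from flawed trials divided by the odds ratio from adequately concealed trials). The sketch below shows that arithmetic; it is an illustration only, the reading itself is an assumption about how the percentages translate, and the odds ratios are hypothetical numbers chosen solely to reproduce the quoted figures.

```python
# Sketch of the "exaggeration" arithmetic.
# Assumption: exaggeration% = (1 - ROR) * 100, where ROR is the ratio of
# odds ratios between flawed and adequately concealed trials (OR < 1 = benefit).

def exaggeration_percent(or_flawed: float, or_adequate: float) -> float:
    """Percent by which the flawed-trial odds ratio overstates the apparent benefit."""
    ror = or_flawed / or_adequate
    return (1.0 - ror) * 100.0

# Hypothetical odds ratios, chosen only so the output matches the quoted figures.
print(round(exaggeration_percent(or_flawed=0.413, or_adequate=0.70)))  # 41
print(round(exaggeration_percent(or_flawed=0.49, or_adequate=0.70)))   # 30
```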

            Systematic reviews in health care: Assessing the quality of controlled clinical trials.


              The hazards of scoring the quality of clinical trials for meta-analysis.

              Although it is widely recommended that clinical trials undergo some type of quality review, the number and variety of quality assessment scales that exist make it unclear how to achieve the best assessment. To determine whether the type of quality assessment scale used affects the conclusions of meta-analytic studies, we performed a meta-analysis of 17 trials comparing low-molecular-weight heparin (LMWH) with standard heparin for prevention of postoperative thrombosis, using 25 different scales to identify high-quality trials. The associations between treatment effect and summary scores, and with 3 key domains (concealment of treatment allocation, blinding of outcome assessment, and handling of withdrawals), were examined in regression models. The main outcome measure was the pooled relative risk of deep vein thrombosis with LMWH vs standard heparin in high-quality vs low-quality trials as determined by the 25 quality scales.

              Pooled relative risks from high-quality trials ranged from 0.63 (95% confidence interval [CI], 0.44-0.90) to 0.90 (95% CI, 0.67-1.21), vs 0.52 (95% CI, 0.24-1.09) to 1.13 (95% CI, 0.70-1.82) for low-quality trials. For 6 scales, relative risks of high-quality trials were close to unity, indicating that LMWH was not significantly superior to standard heparin, whereas low-quality trials showed better protection with LMWH (P < .05). Seven scales showed the opposite: high-quality trials showed an effect whereas low-quality trials did not. For the remaining 12 scales, effect estimates were similar in the 2 quality strata. In regression analysis, summary quality scores were not significantly associated with treatment effects, and there was no significant association of treatment effects with allocation concealment or handling of withdrawals. Open outcome assessment, however, influenced effect size, with the effect of LMWH, on average, being exaggerated by 35% (95% CI, 1%-57%; P = .046).

              Our data indicate that the use of summary scores to identify trials of high quality is problematic. Relevant methodological aspects should be assessed individually and their influence on effect sizes explored.
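              The pooled relative risks above come from combining trial-level estimates within each quality stratum. As a rough illustration of how such pooling behaves, here is a fixed-effect inverse-variance sketch with hypothetical trial data (not the paper's actual method or numbers); it shows how the pooled estimate and its confidence interval can indicate a benefit in one stratum and not in the other.

```python
import math

def pool_relative_risk(trials):
    """Fixed-effect inverse-variance pooling on the log scale.
    `trials` is a list of (relative_risk, se_of_log_rr); returns (pooled RR, 95% CI)."""
    weights = [1.0 / se ** 2 for _, se in trials]
    log_rrs = [math.log(rr) for rr, _ in trials]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci

# Hypothetical trials grouped by one quality scale: (RR, SE of log RR).
high_quality = [(0.85, 0.20), (0.95, 0.25), (0.90, 0.30)]
low_quality = [(0.55, 0.30), (0.60, 0.35)]

print(pool_relative_risk(high_quality))  # ~0.89 (0.68-1.17): CI crosses 1, no clear benefit
print(pool_relative_risk(low_quality))   # ~0.57 (0.37-0.89): apparent benefit
```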

                Author and article information

                Journal: Trials (BioMed Central)
                ISSN: 1745-6215
                Published: 7 January 2009
                Volume: 10, Article: 1

                Affiliations
                [1] Department of Epidemiology and Biostatistics, Muhimbili University of Health and Allied Sciences, P. O. Box 65015, Dar es Salaam, Tanzania

                Article
                Article ID: 1745-6215-10-1
                DOI: 10.1186/1745-6215-10-1
                PMC: 2636799
                PMID: 19128475
                Copyright © 2009 Hirji; licensee BioMed Central Ltd.

                This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 7 September 2007
                Accepted: 7 January 2009

                Categories
                Methodology
                Medicine
