
      Using evidence from different sources: an example using paracetamol 1000 mg plus codeine 60 mg


          Abstract

          Background

          Meta-analysis usually restricts the information pooled, for instance to randomised, double-blind, placebo-controlled trials. This neglects other types of high-quality information. This review explores the use of different types of information, taking as an example the combination of paracetamol 1000 mg and codeine 60 mg in acute postoperative pain.

          Results

          Randomised, double-blind, placebo-controlled trials of paracetamol 1000 mg and codeine 60 mg had an NNT of 2.2 (95% confidence interval 1.7 to 2.9) for at least 50% pain relief over four to six hours in three trials with 197 patients. Computer simulation of randomised trials demonstrated 92% confidence that, with this number of patients, the simulated NNT was within ±0.5 of the underlying value of 2.2. The result was supported by a rational dose-response relationship for different doses of paracetamol and codeine in 17 additional trials with 1,195 patients. Three controlled trials lacking a placebo, with 117 patients treated with paracetamol 1000 mg and codeine 60 mg, had 73% (95% CI 56% to 81%) of patients with at least 50% pain relief, compared with 57% (48% to 66%) in placebo-controlled trials. Six trials in acute pain were omitted because of design issues, such as the use of different pain measures or multiple dosing regimens. In each, paracetamol 1000 mg and codeine 60 mg was shown to be better than placebo or comparators on at least one measure.
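The NNT arithmetic and the kind of trial simulation described above can be sketched in a few lines. This is a minimal illustration, not the authors' simulation code: the 57% treatment response rate comes from the placebo-controlled trials above, while the 11.5% placebo rate is an assumption chosen so that the true NNT is about 2.2.

```python
import random

def simulate_nnt(p_active=0.57, p_placebo=0.115,
                 n_per_arm=100, n_sims=10_000, seed=1):
    """Monte Carlo sketch of NNT precision.

    NNT = 1 / (risk difference); the assumed rates give
    1 / (0.57 - 0.115) ~= 2.2, the value reported in the abstract.
    Returns the true NNT and the fraction of simulated trials whose
    observed NNT fell within +/-0.5 of it.
    """
    true_nnt = 1.0 / (p_active - p_placebo)
    rng = random.Random(seed)
    within = 0
    for _ in range(n_sims):
        active = sum(rng.random() < p_active for _ in range(n_per_arm))
        placebo = sum(rng.random() < p_placebo for _ in range(n_per_arm))
        diff = (active - placebo) / n_per_arm
        if diff > 0 and abs(1.0 / diff - true_nnt) <= 0.5:
            within += 1
    return true_nnt, within / n_sims
```

Running `simulate_nnt()` shows how much an estimated NNT wobbles at a given trial size; the 92% figure in the abstract refers to the authors' own simulation with 197 patients, not to this sketch.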

          Conclusions

          High-quality trials of different designs can be used to support the limited information admitted to meta-analysis, without recourse to low-quality trials that might be biased.

          Related collections

          Most cited references (56)


          Assessing the quality of reports of randomized clinical trials: is blinding necessary?

          It has been suggested that the quality of clinical trials should be assessed by blinded raters to limit the risk of introducing bias into meta-analyses and systematic reviews, and into the peer-review process. There is very little evidence in the literature to substantiate this. This study describes the development of an instrument to assess the quality of reports of randomized clinical trials (RCTs) in pain research and its use to determine the effect of rater blinding on the assessments of quality. A multidisciplinary panel of six judges produced an initial version of the instrument. Fourteen raters from three different backgrounds assessed the quality of 36 research reports in pain research, selected from three different samples. Seven were allocated randomly to perform the assessments under blind conditions. The final version of the instrument included three items. These items were scored consistently by all the raters regardless of background and could discriminate between reports from the different samples. Blind assessments produced significantly lower and more consistent scores than open assessments. The implications of this finding for systematic reviews, meta-analytic research and the peer-review process are discussed.

            Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses?

            Few meta-analyses of randomised trials assess the quality of the studies included. Yet there is increasing evidence that trial quality can affect estimates of intervention efficacy. We investigated whether different methods of quality assessment provide different estimates of intervention efficacy evaluated in randomised controlled trials (RCTs). We randomly selected 11 meta-analyses that involved 127 RCTs on the efficacy of interventions used for circulatory and digestive diseases, mental health, and pregnancy and childbirth. We replicated all the meta-analyses using published data from the primary studies. The quality of reporting of all 127 clinical trials was assessed by means of component and scale approaches. To explore the effects of quality on the quantitative results, we examined the effects of different methods of incorporating quality scores (sensitivity analysis and quality weights) on the results of the meta-analyses. The quality of trials was low. Masked assessments provided significantly higher scores than unmasked assessments (mean 2.74 [SD 1.10] vs 2.55 [1.20]). Low-quality trials (score ≤2) were associated with an increased estimate of benefit of 34% (ratio of odds ratios [ROR] 0.66 [95% CI 0.52-0.83]). Trials that used inadequate allocation concealment, compared with those that used adequate methods, were also associated with an increased estimate of benefit (37%; ROR=0.63 [0.45-0.88]). The average treatment benefit was 39% (odds ratio [OR] 0.61 [0.57-0.65]) for all trials, 52% (OR 0.48 [0.43-0.54]) for low-quality trials, and 29% (OR 0.71 [0.65-0.77]) for high-quality trials. Use of all the trial scores as quality weights reduced the effects to 35% (OR 0.65 [0.59-0.71]) and resulted in the least statistical heterogeneity. Studies of low methodological quality in which the estimate of quality is incorporated into the meta-analyses can alter the interpretation of the benefit of intervention, whether a scale or component approach is used in the assessment of trial quality.
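The quality-weighting idea can be illustrated with a standard fixed-effect, inverse-variance pool of log odds ratios in which each trial's usual weight is scaled by a quality score. This is a generic sketch, not the exact method used in the study above, and the trial counts in the test are invented.

```python
import math

def pooled_odds_ratio(trials):
    """Fixed-effect inverse-variance pooling of odds ratios.

    Each trial is (events_t, n_t, events_c, n_c, quality), with
    quality in (0, 1]; quality scales the usual 1/variance weight,
    so lower-quality trials contribute less to the pooled estimate.
    """
    num = den = 0.0
    for events_t, n_t, events_c, n_c, quality in trials:
        a, b = events_t, n_t - events_t   # treatment events / non-events
        c, d = events_c, n_c - events_c   # control events / non-events
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance of log OR
        weight = quality / var               # quality-scaled weight
        num += weight * log_or
        den += weight
    return math.exp(num / den)
```

For example, `pooled_odds_ratio([(30, 100, 50, 100, 1.0), (25, 80, 40, 80, 0.5)])` pools two hypothetical trials, the second at half weight for its lower quality score.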

              Randomized, controlled trials, observational studies, and the hierarchy of research designs.

              In the hierarchy of research designs, the results of randomized, controlled trials are considered to be evidence of the highest grade, whereas observational studies are viewed as having less validity because they reportedly overestimate treatment effects. We used published meta-analyses to identify randomized clinical trials and observational studies that examined the same clinical topics. We then compared the results of the original reports according to the type of research design. A search of the Medline data base for articles published in five major medical journals from 1991 to 1995 identified meta-analyses of randomized, controlled trials and meta-analyses of either cohort or case-control studies that assessed the same intervention. For each of five topics, summary estimates and 95 percent confidence intervals were calculated on the basis of data from the individual randomized, controlled trials and the individual observational studies. For the five clinical topics and 99 reports evaluated, the average results of the observational studies were remarkably similar to those of the randomized, controlled trials. For example, analysis of 13 randomized, controlled trials of the effectiveness of bacille Calmette-Guérin vaccine in preventing active tuberculosis yielded a relative risk of 0.49 (95 percent confidence interval, 0.34 to 0.70) among vaccinated patients, as compared with an odds ratio of 0.50 (95 percent confidence interval, 0.39 to 0.65) from 10 case-control studies. In addition, the range of the point estimates for the effect of vaccination was wider for the randomized, controlled trials (0.20 to 1.56) than for the observational studies (0.17 to 0.84). The results of well-designed observational studies (with either a cohort or a case-control design) do not systematically overestimate the magnitude of the effects of treatment as compared with those in randomized, controlled trials on the same topic.
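The comparison above of a relative risk (from trials) with an odds ratio (from case-control studies) relies on the two measures agreeing closely when events are uncommon. A minimal sketch of both calculations from a 2x2 table; the counts in the test are invented.

```python
def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Relative risk: ratio of event rates in the two groups."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

def odds_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Odds ratio from the same 2x2 table; it approximates the
    relative risk when events are rare in both groups."""
    odds_exposed = events_exposed / (n_exposed - events_exposed)
    odds_unexposed = events_unexposed / (n_unexposed - events_unexposed)
    return odds_exposed / odds_unexposed
```

With 5/1000 events exposed and 10/1000 unexposed, the relative risk is exactly 0.5 and the odds ratio is about 0.497, which is why the 0.49 and 0.50 estimates quoted above are directly comparable.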

                Author and article information

                Journal: BMC Medical Research Methodology (BMC Med Res Methodol)
                Publisher: BioMed Central (London)
                ISSN: 1471-2288
                Published: 10 January 2001
                Volume: 1
                Article: 1
                Affiliations
                [1 ] Pain Research & Nuffield Department of Anaesthetics, University of Oxford, Oxford, UK
                [2 ] Oxford University Computing Laboratory, Oxford, UK
                Article
                1471-2288-1-1
                DOI: 10.1186/1471-2288-1-1
                32200
                11231885
                Copyright © 2001 Smith et al; licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.
                Categories
                Research Article

                Medicine
