
      Empirical Evidence of Study Design Biases in Randomized Trials: Systematic Review of Meta-Epidemiological Studies

      Research article


          Abstract

          Objective

          To synthesise evidence on the average bias and heterogeneity associated with reported methodological features of randomized trials.

          Design

          Systematic review of meta-epidemiological studies.

          Methods

          We retrieved eligible studies included in a recent AHRQ-EPC review on this topic (latest search September 2012), and searched Ovid MEDLINE and Ovid EMBASE for studies indexed from January 2012 to May 2015. Data were extracted by one author and verified by another. We combined estimates of average bias (e.g. ratio of odds ratios (ROR) or difference in standardised mean differences (dSMD)) in meta-analyses using the random-effects model. Analyses were stratified by type of outcome (“mortality” versus “other objective” versus “subjective”). Direction of effect was standardised so that ROR < 1 and dSMD < 0 denote a larger intervention effect estimate in trials with an inadequate or unclear (versus adequate) characteristic.
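          To make the pooling step concrete, below is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of log ratios of odds ratios. The study estimates and standard errors are invented placeholders, not data from this review, and the review itself may have used different software or estimators.

```python
import math

# Hypothetical (log ROR, standard error) pairs from meta-epidemiological
# studies comparing trials with an inadequate/unclear versus adequate
# characteristic. Values are illustrative placeholders only.
studies = [(-0.10, 0.04), (-0.05, 0.06), (-0.12, 0.05), (0.02, 0.08)]

def random_effects_pool(estimates):
    """DerSimonian-Laird random-effects pooling of (estimate, se) pairs."""
    y = [est for est, _ in estimates]
    v = [se ** 2 for _, se in estimates]
    w = [1.0 / vi for vi in v]                               # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                  # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]                   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, se_pooled, tau2

log_ror, se, tau2 = random_effects_pool(studies)
low, high = log_ror - 1.96 * se, log_ror + 1.96 * se
print(f"Pooled ROR {math.exp(log_ror):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f}); tau^2 = {tau2:.4f}")
```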

          Results

          We included 24 studies. The available evidence suggests that intervention effect estimates may be exaggerated in trials with inadequate/unclear (versus adequate) sequence generation (ROR 0.93, 95% CI 0.86 to 0.99; 7 studies) and allocation concealment (ROR 0.90, 95% CI 0.84 to 0.97; 7 studies). For these characteristics, the average bias appeared to be larger in trials of subjective outcomes compared with “other objective” outcomes. Also, intervention effects for subjective outcomes appear to be exaggerated in trials with lack of/unclear blinding of participants (versus blinding) (dSMD -0.37, 95% CI -0.77 to 0.04; 2 studies), lack of/unclear blinding of outcome assessors (ROR 0.64, 95% CI 0.43 to 0.96; 1 study) and lack of/unclear double blinding (ROR 0.77, 95% CI 0.61 to 0.93; 1 study). The influence of other characteristics (e.g. unblinded trial personnel, attrition) is unclear.
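          Read under the convention commonly used for these ratios (an interpretive note added here, not a calculation reported in the abstract), an ROR below 1 translates into an approximate percentage exaggeration of the odds ratio:

```latex
\text{exaggeration} \approx (1 - \mathrm{ROR}) \times 100\%
% e.g. ROR = 0.93 (sequence generation)    -> odds ratios roughly 7% larger
%      ROR = 0.90 (allocation concealment) -> odds ratios roughly 10% larger
% in trials with the inadequate or unclear characteristic.
```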

          Conclusions

          Certain characteristics of randomized trials may exaggerate intervention effect estimates. The average bias appears to be greatest in trials of subjective outcomes. More research on several characteristics, particularly attrition and selective reporting, is needed.


          Most cited references (31)


          Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials.

          To determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects. An observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects. Meta-analyses from the Cochrane Pregnancy and Childbirth Database. The associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding. Compared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P < .001). Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials (adjusted for other aspects of quality). Trials in which participants had been excluded after randomization did not yield larger estimates of effects, but that lack of association may be due to incomplete reporting. Trials that were not double-blind also yielded larger estimates of effects (P = .01), with odds ratios being exaggerated by 17%. This study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials.
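            The association analysis described above used multiple logistic regression across 250 trials. As a loose, hypothetical sketch of the same general idea (relating trial effect estimates to reported quality items), one could fit a weighted meta-regression of log odds ratios on quality indicators, as below. The data, model form, and variable names are assumptions for illustration, not the original analysis, and the sketch ignores clustering of trials within meta-analyses.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical trial-level data: log odds ratio, its variance, and reported
# quality items (1 = inadequate/unclear, 0 = adequate). Illustrative only.
rng = np.random.default_rng(0)
n = 40
inadequate_concealment = rng.integers(0, 2, n)
not_double_blind = rng.integers(0, 2, n)
var_log_or = rng.uniform(0.02, 0.2, n)
log_or = (-0.4 - 0.3 * inadequate_concealment - 0.15 * not_double_blind
          + rng.normal(0, np.sqrt(var_log_or)))

# Weighted least-squares "meta-regression": negative coefficients indicate
# larger (more beneficial) effect estimates in trials with that deficiency.
X = sm.add_constant(np.column_stack([inadequate_concealment, not_double_blind]))
fit = sm.WLS(log_or, X, weights=1.0 / var_log_or).fit()
print(fit.params)  # [intercept, concealment coefficient, blinding coefficient]
```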

            Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials.

            Randomized trials without reported adequate allocation concealment have been shown to overestimate the benefit of experimental interventions. We investigated the robustness of conclusions drawn from meta-analyses to exclusion of such trials. Random sample of 38 reviews from The Cochrane Library 2003, issue 2 and 32 other reviews from PubMed accessed in 2002. Eligible reviews presented a binary effect estimate from a meta-analysis of randomized controlled trials as the first statistically significant result that supported a conclusion in favour of one of the interventions. We assessed the methods sections of the trials in each included meta-analysis for adequacy of allocation concealment. We replicated each meta-analysis using the authors' methods but included only trials that had adequate allocation concealment. Conclusions were defined as not supported if our result was not statistically significant. Thirty-four of the 70 meta-analyses contained a mixture of trials with unclear or inadequate concealment as well as trials with adequate allocation concealment. Four meta-analyses only contained trials with adequate concealment, and 32, only trials with unclear or inadequate concealment. When only trials with adequate concealment were included, 48 of 70 conclusions (69%; 95% confidence interval: 56-79%) lost support. The loss of support mainly reflected loss of power (the total number of patients was reduced by 49%) but also a shift in the point estimate towards a less beneficial effect. Two-thirds of conclusions in favour of one of the interventions were no longer supported if only trials with adequate allocation concealment were included.
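            As a rough sketch of the reanalysis strategy described above (repeat the meta-analysis using only the adequately concealed trials and check whether the statistically significant conclusion survives), the example below uses hypothetical trial data and plain inverse-variance fixed-effect pooling; the review authors replicated each meta-analysis with its own original methods, which may differ.

```python
import math

# Hypothetical meta-analysis:
# (log odds ratio, standard error, adequate allocation concealment?)
trials = [
    (-0.45, 0.20, False),
    (-0.30, 0.25, True),
    (-0.60, 0.30, False),
    (-0.10, 0.22, True),
]

def pool_fixed(subset):
    """Inverse-variance fixed-effect pooling; returns (log OR, 95% CI)."""
    w = [1.0 / se ** 2 for _, se, _ in subset]
    pooled = sum(wi * lor for wi, (lor, _, _) in zip(w, subset)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

all_est, all_ci = pool_fixed(trials)
adeq_est, adeq_ci = pool_fixed([t for t in trials if t[2]])

# The conclusion in favour of the intervention is "not supported" if the CI
# from the adequately concealed trials alone now includes the null (log OR 0).
still_supported = adeq_ci[1] < 0 or adeq_ci[0] > 0
print(f"All trials: OR {math.exp(all_est):.2f}; "
      f"adequate concealment only: OR {math.exp(adeq_est):.2f}; "
      f"conclusion still supported: {still_supported}")
```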

              Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors.

              Clinical trials are commonly done without blinded outcome assessors despite the risk of bias. We wanted to evaluate the effect of nonblinded outcome assessment on estimated effects in randomized clinical trials with outcomes that involved subjective measurement scales. We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation. We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size -0.23 [95% confidence interval (CI) -0.40 to -0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I(2) = 46%, p = 0.02) and unexplained by metaregression. We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.
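              In symbols (notation chosen here for illustration; effect sizes are assumed to be oriented so that more negative values favour the experimental intervention), each trial contributes a within-trial difference in effect size between the nonblinded and blinded assessments of the same outcome, and these differences are pooled with inverse-variance random-effects weights:

```latex
\Delta_i = d_i^{\mathrm{nonblinded}} - d_i^{\mathrm{blinded}}, \qquad
\hat{\Delta} = \frac{\sum_i w_i \Delta_i}{\sum_i w_i}, \qquad
w_i = \frac{1}{v_i + \tau^2}
```

              A pooled difference below zero, as reported above, indicates that the nonblinded assessors produced the more optimistic estimates.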

                Author and article information

                Contributors
                Role: Editor
                Journal: PLoS ONE
                Publisher: Public Library of Science (San Francisco, CA, USA)
                ISSN: 1932-6203
                Published: 11 July 2016
                Citation: PLoS ONE 2016; 11(7): e0159267
                Affiliations
                [1] School of Social and Community Medicine, University of Bristol, Bristol, United Kingdom
                [2] School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
                [3] Center for Evidence-Based Medicine, University of Southern Denmark & Odense University Hospital, Odense, Denmark
                [4] The National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care West (NIHR CLAHRC West) at University Hospitals Bristol NHS Foundation Trust, Bristol, United Kingdom
                Johns Hopkins Bloomberg School of Public Health, UNITED STATES
                Author notes

                Competing Interests: JACS, AH and JS are authors of studies included in this review, but were not involved in the eligibility assessment or data extraction of these studies. This does not alter the authors' adherence to PLOS ONE policies on sharing data and materials. All other authors declare no competing interests.

                Conceived and designed the experiments: MJP JPTH JS. Performed the experiments: MJP GC. Analyzed the data: MJP JPTH. Wrote the paper: MJP JPTH GC JACS AH JS.

                Article
                Publisher ID: PONE-D-16-13736
                DOI: 10.1371/journal.pone.0159267
                PMCID: PMC4939945
                PMID: 27398997
                © 2016 Page et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

                History
                Received: 5 April 2016
                Accepted: 29 June 2016
                Page count
                Figures: 11, Tables: 3, Pages: 26
                Funding
                Funded by: MRC Network of Hubs for Trials Methodology Research (award MR/L004933/1-N61)
                Funded by: National Health and Medical Research Council (funder ID: http://dx.doi.org/10.13039/501100000925; award 1088535)
                Funded by: National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care West (NIHR CLAHRC West)
                This work was supported by the MRC Network of Hubs for Trials Methodology Research (MR/L004933/1-N61). MJP is supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535). JS is supported by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care West (NIHR CLAHRC West). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
                Categories
                Research Article
                Subject areas:
                Research and Analysis Methods > Mathematical and Statistical Techniques > Statistical Methods > Meta-Analysis
                Physical Sciences > Mathematics > Statistics (Mathematics) > Statistical Methods > Meta-Analysis
                Biology and Life Sciences > Molecular Biology > Molecular Biology Techniques > Sequencing Techniques > Sequence Analysis
                Research and Analysis Methods > Molecular Biology Techniques > Sequencing Techniques > Sequence Analysis
                Research and Analysis Methods > Research Assessment > Systematic Reviews
                Research and Analysis Methods > Database and Informatics Methods > Database Searching
                Medicine and Health Sciences > Pharmaceutics > Drug Therapy > Drug Administration
                Engineering and Technology > Equipment > Measurement Equipment
                Research and Analysis Methods > Research Assessment > Research Validity
                Physical Sciences > Mathematics > Statistics (Mathematics) > Confidence Intervals
                Custom metadata
                All relevant data are within the paper and its Supporting Information files.

