      Design differences and variation in results between randomised trials and non-randomised emulations: meta-analysis of RCT-DUPLICATE data

          Abstract

          Objective

          To explore how design emulation and population differences relate to variation in results between randomised controlled trials (RCT) and non-randomised real world evidence (RWE) studies, based on the RCT-DUPLICATE initiative (Randomised, Controlled Trials Duplicated Using Prospective Longitudinal Insurance Claims: Applying Techniques of Epidemiology).

          Design

          Meta-analysis of RCT-DUPLICATE data.

          Data sources

          Trials included in RCT-DUPLICATE, a demonstration project that emulated 32 randomised controlled trials using three real world data sources: Optum Clinformatics Data Mart, 2004-19; IBM MarketScan, 2003-17; and subsets of Medicare parts A, B, and D, 2009-17.

          Eligibility criteria for selecting studies

          Trials where the primary analysis resulted in a hazard ratio; 29 RCT-RWE study pairs from RCT-DUPLICATE.

          Results

          Differences and variation in effect sizes between the results of the randomised controlled trials and the real world evidence studies were investigated. Most of the heterogeneity in effect estimates between the RCT-RWE study pairs in this sample could be explained by three emulation differences in the meta-regression model: treatment started in hospital (which does not appear in health insurance claims data), discontinuation of some baseline treatments at randomisation (which would have been an unusual care decision in clinical practice), and delayed onset of drug effects (which would be under-reported in real world clinical practice because of the relatively short persistence of treatment). Adding the three emulation differences to the meta-regression reduced the heterogeneity estimate from 1.9 to almost 1 (a value of 1 indicating absence of heterogeneity).
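
          To make the model concrete, the sketch below outlines how a meta-regression of this kind can be fitted: each RCT-RWE pair contributes the log ratio of its two hazard ratios, the three emulation differences enter as 0/1 covariates, and heterogeneity is summarised by a multiplicative dispersion parameter, which equals 1 when the pairs differ only by sampling error. This is a minimal illustration assuming a multiplicative heterogeneity model; all numbers are invented placeholders, not the authors' analysis code or the RCT-DUPLICATE data.

```python
import numpy as np

# Illustrative inverse variance weighted meta-regression with a
# multiplicative heterogeneity parameter. All values below are invented
# placeholders, not the RCT-DUPLICATE data.

# y: log ratio of hazard ratios, log(HR_RWE / HR_RCT), one per study pair
# se: standard error of each log ratio
y  = np.array([ 0.05, -0.10,  0.30,  0.02, -0.25,  0.40,  0.08, -0.05])
se = np.array([ 0.10,  0.12,  0.15,  0.09,  0.20,  0.18,  0.11,  0.14])

# 0/1 indicators for the three emulation differences named in the results:
# treatment started in hospital, baseline treatment discontinued at
# randomisation, delayed onset of drug effects.
Z = np.array([[0, 0, 0],
              [0, 1, 0],
              [1, 0, 0],
              [0, 0, 0],
              [0, 0, 1],
              [1, 0, 1],
              [0, 0, 0],
              [0, 1, 0]])
X = np.column_stack([np.ones(len(y)), Z])  # intercept + emulation covariates

w = 1.0 / se**2                  # inverse variance weights
XtW = X.T * w                    # equivalent to X.T @ diag(w)
beta = np.linalg.solve(XtW @ X, XtW @ y)

# Multiplicative heterogeneity: observed residual dispersion relative to
# what sampling error alone would produce; 1 means no excess heterogeneity.
resid = y - X @ beta
phi = np.sum(w * resid**2) / (len(y) - X.shape[1])

print("meta-regression coefficients:", beta.round(3))
print("multiplicative heterogeneity:", round(float(phi), 2))
```

          Dropping the three covariates from X (keeping only the intercept) and refitting shows how much of the dispersion they absorb, which is the comparison the abstract reports.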

          Conclusions

          This analysis suggests that a substantial proportion of the observed variation between results from randomised controlled trials and real world evidence studies can be attributed to differences in design emulation.


                Author and article information

                Journal
                BMJ Medicine (BMJ Med)
                Publisher: BMJ Publishing Group (BMA House, Tavistock Square, London, WC1H 9JR)
                ISSN: 2754-0413
                Published: 5 February 2024
                Volume 3, issue 1: e000709
                Affiliations
                [1] Center for Reproducible Science, Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Zurich, Switzerland
                [2] Division of Pharmacoepidemiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
                Author notes
                Correspondence to: Dr Rachel Heyard, Center for Reproducible Science, Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Hirschengraben 84, 8001 Zurich, Switzerland; rachel.heyard@uzh.ch
                Author information
                http://orcid.org/0000-0002-7531-4333
                http://orcid.org/0000-0002-8686-5325
                http://orcid.org/0000-0003-2575-467X
                http://orcid.org/0000-0001-7761-7090
                Article
                DOI: 10.1136/bmjmed-2023-000709
                PMCID: 10860009
                PMID: 38348308
                © Author(s) (or their employer(s)) 2024. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

                This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/.

                History
                Received: 17 July 2023
                Accepted: 27 December 2023
                Funding
                Funded by: U.S. Food and Drug Administration (FundRef http://dx.doi.org/10.13039/100000038); award IDs HHSF223201710186C, HHSF223201810146C
                Funded by: National Institutes of Health (FundRef http://dx.doi.org/10.13039/100000002); award IDs R01AG053302, R01AR080194, R01HL141505
                Categories
                Original Research

                Keywords: clinical trial, research design, statistics
