Testing for funnel plot asymmetry of standardized mean differences

Research Synthesis Methods
Wiley

Abstract

Publication bias and other forms of outcome reporting bias are critical threats to the validity of findings from research syntheses. A variety of methods have been proposed for detecting selective outcome reporting in a collection of effect size estimates, including several methods based on assessment of asymmetry of funnel plots, such as Egger's regression test, the rank correlation test, and the Trim-and-Fill test. Previous research has demonstrated that Egger's regression test is miscalibrated when applied to log-odds ratio effect size estimates, because of artifactual correlation between the effect size estimate and its standard error. This study examines similar problems that occur in meta-analyses of the standardized mean difference, a ubiquitous effect size measure in educational and psychological research. In a simulation study of standardized mean difference effect sizes, we assess the Type I error rates of conventional tests of funnel plot asymmetry, as well as the likelihood ratio test from a three-parameter selection model. Results demonstrate that the conventional tests have inflated Type I error due to the correlation between the effect size estimate and its standard error, while tests based on either a simple modification to the conventional standard error formula or a variance-stabilizing transformation both maintain close-to-nominal Type I error.
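
To make the mechanics concrete, here is a minimal Python sketch of Egger's regression test paired with a sample-size-only standard error for the standardized mean difference. This is not code from the paper: the function names are mine, and dropping the estimate-dependent term is only one plausible form of the "simple modification" the abstract alludes to.

```python
import numpy as np
from scipy import stats

def smd_se(d, n1, n2, modified=False):
    """SE of a standardized mean difference. The conventional formula
    contains a d**2 term, which ties the SE to the estimate itself;
    the 'modified' variant drops that term so the SE depends on
    sample sizes only (one plausible form of the correction; the
    paper's exact formula may differ)."""
    base = (n1 + n2) / (n1 * n2)
    if modified:
        return np.sqrt(base)
    return np.sqrt(base + d**2 / (2 * (n1 + n2)))

def egger_test(d, se):
    """Egger's regression test: regress the standardized effects
    (d/se) on precision (1/se); a nonzero intercept indicates
    funnel plot asymmetry."""
    z, prec = d / se, 1.0 / se
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    df = len(z) - 2
    cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    return t_stat, 2 * stats.t.sf(abs(t_stat), df)
```

Comparing egger_test(d, smd_se(d, n1, n2)) against egger_test(d, smd_se(d, n1, n2, modified=True)) on null simulated data is the kind of contrast the simulation study describes: per the abstract, the conventional version rejects too often, while the modified version stays close to the nominal level.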

Most cited references (34)

Improved tests for a random effects meta-regression with a single covariate.

The explanation of heterogeneity plays an important role in meta-analysis. The random effects meta-regression model allows the inclusion of trial-specific covariates which may explain a part of the heterogeneity. We examine the commonly used tests on the parameters in the random effects meta-regression with one covariate and propose some new test statistics based on an improved estimator of the variance of the parameter estimates. The approximation of the distribution of the newly proposed tests is based on some theoretical considerations. Moreover, the newly proposed tests can easily be extended to the case of more than one covariate. In a simulation study, we compare the tests with regard to their actual significance level and we consider the log relative risk as the parameter of interest. Our simulation study reflects the meta-analysis of the efficacy of a vaccine for the prevention of tuberculosis originally discussed in Berkey et al. The simulation study shows that the newly proposed tests are superior to the commonly used test in holding the nominal significance level.
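
For readers who want the gist of the variance correction, here is a rough Python sketch of a Knapp/Hartung-style adjustment for a single-covariate random-effects meta-regression. It is an illustration under my own assumptions, not the authors' code: tau-squared is estimated by a DerSimonian-Laird-type moment estimator for simplicity, and the adjustment rescales the usual variance by a weighted residual mean square and refers the statistic to a t distribution.

```python
import numpy as np
from scipy import stats

def knapp_hartung_metareg(y, v, x):
    """Random-effects meta-regression of effects y (sampling
    variances v) on one covariate x, with a Knapp/Hartung-style
    variance adjustment (sketch)."""
    k = len(y)
    X = np.column_stack([np.ones(k), x])
    p = X.shape[1]
    # Moment (DerSimonian-Laird-type) estimate of tau^2 from the
    # fixed-effect weighted residual sum of squares.
    W = np.diag(1.0 / v)
    beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    Q = float((y - X @ beta_fe) @ W @ (y - X @ beta_fe))
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    tau2 = max(0.0, (Q - (k - p)) / np.trace(P))
    # Random-effects fit with weights 1 / (v + tau^2).
    w = 1.0 / (v + tau2)
    Wr = np.diag(w)
    XtWX_inv = np.linalg.inv(X.T @ Wr @ X)
    beta = XtWX_inv @ X.T @ Wr @ y
    resid = y - X @ beta
    # Knapp/Hartung multiplier: weighted residual mean square.
    q = float(w @ resid**2) / (k - p)
    se = np.sqrt(q * np.diag(XtWX_inv))
    t = beta / se
    return beta, se, 2 * stats.t.sf(np.abs(t), k - p)
```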

p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results.

Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the "choice overload" literature.
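
A bare-bones Python sketch of the estimation idea, under simplifying assumptions of my own (two-sample t-tests with equal group sizes, all significant results in the same direction): choose the effect size d that maximizes the likelihood of the observed significant t statistics under the noncentral t distribution truncated to the significant region.

```python
import numpy as np
from scipy import stats, optimize

def pcurve_effect(t_obs, df, alpha=0.05):
    """Estimate the underlying effect size from significant results
    only, p-curve style (sketch). Assumes each t_obs comes from a
    two-sample t-test with equal group sizes (df = 2n - 2)."""
    t_obs, df = np.asarray(t_obs, float), np.asarray(df, float)
    t_crit = stats.t.isf(alpha / 2, df)  # two-sided cutoff
    n = (df + 2) / 2                     # per-group sample size

    def neg_loglik(d):
        ncp = d * np.sqrt(n / 2)         # noncentrality for effect d
        # Likelihood of t truncated to the significant region t > t_crit.
        dens = stats.nct.pdf(t_obs, df, ncp)
        tail = stats.nct.sf(t_crit, df, ncp)
        return -np.sum(np.log(dens / tail))

    # Search range (0, 3) is an arbitrary assumption for illustration.
    return optimize.minimize_scalar(neg_loglik, bounds=(0.0, 3.0),
                                    method="bounded").x
```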

Effect sizes for growth-modeling analysis for controlled clinical trials in the same metric as for classical analysis.

The use of growth-modeling analysis (GMA), including hierarchical linear models, latent growth models, and generalized estimating equations, to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the intervention and control groups that captures the treatment effect is rarely reported. This article first reviews 2 classes of formulas for effect sizes associated with classical repeated-measures designs that use the standard deviation of either change scores or raw scores for the denominator. It then broadens the scope to subsume GMA and demonstrates that the independent groups, within-subjects, pretest-posttest control-group, and GMA designs all estimate the same effect size when the standard deviation of raw scores is uniformly used. Finally, the article shows that the correct effect size for treatment efficacy in GMA (the difference between the estimated means of the 2 groups at end of study, determined from the coefficient for the slope difference and length of study, divided by the baseline standard deviation) is not reported in clinical trials.
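
The closing formula translates directly into code; a one-line Python sketch (the names are mine, not from the article):

```python
def gma_effect_size(slope_diff, study_length, baseline_sd):
    """GMA effect size as described above: the model-implied group
    difference at end of study (slope-difference coefficient times
    study length), scaled by the baseline raw-score SD."""
    return slope_diff * study_length / baseline_sd

# Example: a slope difference of 0.25 points/month over a 12-month
# trial with a baseline SD of 6 gives d = 0.25 * 12 / 6 = 0.5.
```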

Author and article information

Journal
Research Synthesis Methods (Res Syn Meth)
Wiley
ISSN: 1759-2879, 1759-2887
October 21 2018; January 08 2019; March 2019
Volume: 10
Issue: 1
Pages: 57-71
Affiliations
Educational Psychology Department, The University of Texas at Austin, Austin, Texas
Article
DOI: 10.1002/jrsm.1332
PMID: 30506832
ID: d667d9fa-0b43-4a7a-ac9d-b3200c12e6f3
© 2019

License
http://onlinelibrary.wiley.com/termsAndConditions#vor
http://doi.wiley.com/10.1002/tdm_license_1.1
