      Open Access

      The small impact of p-hacking marginally significant results on the meta-analytic estimation of effect size Translated title: El pequeño impacto del "haqueo" de resultados marginalmente significativos sobre la estimación meta-analítica del tamaño del efecto

      research-article


          Abstract

          The label p-hacking (pH) refers to a set of opportunistic practices aimed at turning p values that should be non-significant into statistically significant ones. Some have argued that we should prevent and fight pH for several reasons, especially because of its potentially harmful effects on the assessment of both primary research results and their meta-analytic synthesis. We focus here on the effect that one specific type of pH, applied to marginally significant results, has on the combined estimation of effect size in meta-analysis. We want to know how concerned we should be about its biasing effect when assessing the results of a meta-analysis. We have calculated the bias in a range of situations that seem realistic in terms of the prevalence and the operational definition of pH. The results show that in most of the situations analyzed the bias is less than one hundredth (± 0.01), in terms of d or r. To reach a bias of five hundredths (± 0.05), this type of pH would have to be massively present, which seems rather unrealistic. We must continue to fight pH for many good reasons, but our main conclusion is that a large impact on the meta-analytic estimation of effect size is not among them.
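
          As a rough illustration of the kind of calculation the abstract describes, the following Monte Carlo sketch (not the authors' code) estimates how much pushing marginally non-significant studies just past the .05 threshold shifts the combined d of a meta-analysis. The true effect, study sizes, prevalence of hacking, and the .05-.10 "marginal" window are all illustrative assumptions, not values taken from the article.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_study(delta, n):
    """One two-group study: return Cohen's d and its two-sided p value."""
    g1 = rng.normal(delta, 1.0, n)
    g2 = rng.normal(0.0, 1.0, n)
    sp = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    d = (g1.mean() - g2.mean()) / sp
    t = d * np.sqrt(n / 2)                      # equal n per group
    p = 2 * stats.t.sf(abs(t), df=2 * n - 2)
    return d, p

def d_at_threshold(n):
    """Smallest |d| whose two-sided p equals .05 for this study size."""
    return stats.t.ppf(0.975, df=2 * n - 2) / np.sqrt(n / 2)

def meta_bias(delta=0.3, k=20, n=50, prevalence=0.3, reps=2000):
    """Mean bias in the combined d (a simple mean; with equal n per study the
    inverse-variance weights are essentially equal) caused by hacking
    marginally non-significant studies to the .05 boundary."""
    biases = []
    for _ in range(reps):
        honest, hacked = [], []
        for _ in range(k):
            d, p = simulate_study(delta, n)
            honest.append(d)
            # "Marginal" study hacked with probability `prevalence`
            if 0.05 < p < 0.10 and rng.random() < prevalence:
                hacked.append(np.sign(d) * d_at_threshold(n))
            else:
                hacked.append(d)
        biases.append(np.mean(hacked) - np.mean(honest))
    return float(np.mean(biases))

print(f"estimated bias in the combined d: {meta_bias():+.4f}")

          The printed value should be read only as an illustration of the setup; the article's own calculations cover a range of prevalences and operational definitions of pH.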

          Translated abstract

          The label p-hacking (pH) refers to a set of opportunistic practices intended to make significant some p values that should be non-significant. Some have argued that we must prevent and fight pH for several reasons, especially because of its possible harmful effects on the evaluation of primary research results and their meta-analytic synthesis. We focus here on the effect of a specific type of pH, centered on marginally significant studies, on the combined estimation of effect size in meta-analysis. We want to know how much we should worry about its biasing effect when assessing the results of a meta-analysis. We have calculated the bias in a range of situations that seem realistic in terms of the prevalence and the operational definition of pH. The results show that in most of the situations analyzed the bias is below one hundredth (± 0.01), in terms of d or r. To reach a bias of five hundredths (± 0.05), there would have to be a massive presence of this type of pH, which seems unrealistic. There are many good reasons to fight pH, but our main conclusion is that a large impact on the meta-analytic estimation of effect size is not among them.


          Most cited references (54)


          A basic introduction to fixed-effect and random-effects models for meta-analysis.

There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models. Copyright © 2010 John Wiley & Sons, Ltd.
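
To make the difference between the two models concrete, here is a small sketch (illustrative only, not taken from the cited paper): the fixed-effect estimate weights each study by 1/v_i, whereas the random-effects estimate weights by 1/(v_i + tau^2), with tau^2 estimated here by the common DerSimonian-Laird moment estimator. The effect sizes and variances below are made up.

import numpy as np

def fixed_effect(y, v):
    """Inverse-variance weighted mean: weights w_i = 1 / v_i."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

def random_effects(y, v):
    """Random-effects mean: weights w_i* = 1 / (v_i + tau^2),
    with tau^2 from the DerSimonian-Laird moment estimator."""
    w = 1.0 / v
    q = np.sum(w * (y - fixed_effect(y, v)) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_star = 1.0 / (v + tau2)
    return np.sum(w_star * y) / np.sum(w_star)

# Made-up per-study effect sizes (d) and within-study variances
y = np.array([0.10, 0.30, 0.35, 0.60, 0.05])
v = np.array([0.01, 0.02, 0.05, 0.04, 0.03])
print("fixed-effect estimate:  ", round(fixed_effect(y, v), 3))
print("random-effects estimate:", round(random_effects(y, v), 3))

Because the random-effects weights are more nearly equal, the most precise study dominates the pooled estimate less, which is exactly the kind of interpretive difference the paragraph above is about.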

            False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant.

            In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
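
              The "unacceptably easy" claim is simple to check for oneself. The sketch below is not the authors' simulation code; it simply applies two of the degrees of freedom they discuss (choosing among two correlated outcome variables or their average, and adding more participants once if nothing is significant yet) to data with a true effect of zero. The sample sizes and the correlation between the outcomes are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
COV = [[1.0, 0.5], [0.5, 1.0]]   # two correlated dependent variables

def smallest_p(a, b):
    """Smallest p value over DV1, DV2, and their average (flexible analyst)."""
    pairs = [(a[:, 0], b[:, 0]), (a[:, 1], b[:, 1]),
             (a.mean(axis=1), b.mean(axis=1))]
    return min(stats.ttest_ind(xa, xb).pvalue for xa, xb in pairs)

def flexible_study_significant(n=20, extra=10):
    """True effect is zero; report 'significant' if any flexible option works,
    looking again after adding `extra` participants per group if needed."""
    a = rng.multivariate_normal([0, 0], COV, n)
    b = rng.multivariate_normal([0, 0], COV, n)
    if smallest_p(a, b) < 0.05:
        return True
    a = np.vstack([a, rng.multivariate_normal([0, 0], COV, extra)])
    b = np.vstack([b, rng.multivariate_normal([0, 0], COV, extra)])
    return smallest_p(a, b) < 0.05

reps = 5000
rate = np.mean([flexible_study_significant() for _ in range(reps)])
print(f"false-positive rate with this flexibility: {rate:.3f} (nominal .05)")

              Even these two modest degrees of freedom push the realized false-positive rate noticeably above the nominal 5%, which is the point the paragraph above makes.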

              One Hundred Years of Social Psychology Quantitatively Described.


                Author and article information

                Journal
                Anales de Psicología (Anal. Psicol.)
                Universidad de Murcia (Murcia, Spain)
                ISSN: 0212-9728, 1695-2294
                April 2021, Volume 37, Issue 1, Pages 178-187
                Affiliations
                [1] Universidad Autónoma de Madrid, Madrid, Spain
                [2] Universidad a Distancia de Madrid, Madrid, Spain
                [3] Universidad de Murcia, Murcia, Spain
                Article
                SciELO ID: S0212-97282021000100020
                DOI: 10.6018/analesps.37.1.433051

                This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

                History
                Received: 18 June 2020
                Accepted: 31 August 2020
                Page count
                Figures: 0, Tables: 0, Equations: 0, References: 55, Pages: 10
                Product

                SciELO Spain

                Categories
                Methodology

                Keywords
                p-hacking, tamaño del efecto, meta-análisis, effect size, meta-analysis
