
      Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations

      Preprint


          Abstract

The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a widely relied-upon metric for evaluating research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and cite reliance on the JIF as one problem with current academic evaluation systems. While faculty reports are useful, information is lacking on how often and in what ways the JIF is currently used for review, promotion, and tenure (RPT). We therefore collected and analyzed RPT documents from a representative sample of 129 universities in the United States and Canada and 381 of their academic units. We found that 40% of doctoral, research-intensive (R-type) institutions and 18% of master's, or comprehensive (M-type), institutions explicitly mentioned the JIF, or closely related terms, in their RPT documents. Undergraduate, or baccalaureate (B-type), institutions did not mention it at all. A detailed reading of these documents suggests that institutions may also be using a variety of terms to refer to the JIF indirectly. Our qualitative analysis shows that 87% of the institutions that mentioned the JIF supported the metric's use in at least one of their RPT documents, while 13% expressed caution about its use in evaluations. None of the RPT documents we analyzed heavily criticized the JIF or prohibited its use in evaluations. Of the institutions that mentioned the JIF, 63% associated it with quality, 40% with impact, importance, or significance, and 20% with prestige, reputation, or status. In sum, our results show that the use of the JIF is encouraged in RPT evaluations, especially at research-intensive universities, and indicate that there is work to be done to improve evaluation processes to avoid the potential misuse of metrics like the JIF.


Most cited references (24)

          The history and meaning of the journal impact factor.


            Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

            We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
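The power figures quoted above come from a standard calculation. The sketch below is a minimal illustration rather than the paper's actual procedure: it computes the power of a two-sample t-test to detect small, medium, and large effects (Cohen's d = 0.2, 0.5, 0.8) at alpha = 0.05. The per-group sample size of 25 is a hypothetical value chosen for illustration; the paper estimates power from each study's own sample size.

```python
# Power of a two-sample t-test for "small", "medium", and "large"
# effects (Cohen's d = 0.2, 0.5, 0.8) at alpha = 0.05.
# n_per_group is illustrative only, not a value from the paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = 25  # hypothetical sample size per group

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.power(effect_size=d, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0,
                           alternative="two-sided")
    print(f"{label:6s} (d={d}): power = {power:.2f}")
```

With small per-group samples like this, power for a small effect stays near 0.10, in the same range as the medians the paper reports.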

              The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF), defined as the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false-positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings.
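The roughly 50% power figure can be approximated with a back-of-the-envelope calculation. The sketch below assumes the reported average sample size of N = 104 and a typical effect size of r = 0.20 for a two-tailed correlation test via the Fisher z approximation; the r value is an assumption for illustration, and the paper's own estimate of the typical effect may differ.

```python
# Approximate power of a test of a Pearson correlation with N = 104
# (the reported average sample size) against a "typical" effect of
# r = 0.20, using the Fisher z approximation. r is an assumed value.
from math import atanh, sqrt
from scipy.stats import norm

N, r, alpha = 104, 0.20, 0.05
z_r = atanh(r)                    # Fisher z-transform of the effect
se = 1 / sqrt(N - 3)              # standard error of Fisher z
z_crit = norm.ppf(1 - alpha / 2)  # two-tailed critical value
power = (1 - norm.cdf(z_crit - z_r / se)) + norm.cdf(-z_crit - z_r / se)
print(f"power ~= {power:.2f}")    # about 0.53, near the ~50% quoted
```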

                Author and article information

Journal
PeerJ Preprints
April 7, 2019
                Affiliations
[1 ]Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad de México, CDMX, Mexico
                [2 ]Scholarly Communications Lab, Simon Fraser University, Vancouver, British Columbia, Canada
[3 ]John F. Kennedy Institute, Freie Universität Berlin, Berlin, Germany
                [4 ]Department of Nutrition and Food Sciences, Food Systems Program, University of Vermont, Burlington, Vermont, United States
                [5 ]School of Publishing, Simon Fraser University, Vancouver, British Columbia, Canada
                Article
DOI: 10.7287/peerj.preprints.27638v1
© 2019. This work is licensed under a Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).

