Deep Impact: Unintended consequences of journal rank

Preprint

Abstract

Most researchers acknowledge an intrinsic hierarchy in the scholarly journals ('journal rank') that they submit their work to, and adjust not only their submission but also their reading strategies accordingly. On the other hand, much has been written about the negative effects of institutionalizing journal rank as an impact measure. So far, contributions to the debate concerning the limitations of journal rank as a scientific impact assessment tool have either lacked data or relied on only a few studies. In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery functions of the current journal system.
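As background for the argument above: the currently favored metric, the two-year Journal Impact Factor, is the mean number of citations received in one year by a journal's items from the two preceding years. Citation distributions are typically highly skewed (see Seglen's "The skewness of science" among the references below), so this mean can be driven by a handful of highly cited papers. A minimal Python sketch, with invented citation counts, illustrates the point:

    # Illustrative only: all citation counts below are invented.
    # Two-year impact factor = citations this year to the journal's
    # items from the previous two years, divided by the number of
    # citable items published in those two years.
    citations_per_item = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]
    impact_factor = sum(citations_per_item) / len(citations_per_item)
    print(impact_factor)  # 13.8, although 9 of 10 items received 5 or fewer citations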

Most cited references (63)

Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias

Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.

Methodology/Principal Findings: We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We did not undertake meta-analysis because of the differences between the studies.

Conclusions: Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and statistically significant outcomes have higher odds of being fully reported. Publications have also been found to be inconsistent with their protocols. Researchers need to be aware of both types of bias, and efforts should be concentrated on improving the reporting of trials.
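To make the reported odds ratios concrete: the odds ratio compares the odds of full reporting for statistically significant outcomes against the odds for non-significant ones. A minimal sketch follows; the two reporting rates are invented purely so the result falls inside the 2.2 to 4.7 range reported above:

    # Hypothetical reporting rates: only the 2.2-4.7 odds-ratio range
    # comes from the review; the probabilities here are invented.
    def odds(p):
        return p / (1 - p)

    p_significant_reported = 0.80      # assumed: significant outcomes fully reported
    p_nonsignificant_reported = 0.50   # assumed: non-significant outcomes fully reported
    odds_ratio = odds(p_significant_reported) / odds(p_nonsignificant_reported)
    print(round(odds_ratio, 1))        # 4.0, within the reported range of 2.2 to 4.7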

The skewness of science

P. O. Seglen (1992)

Why the impact factor of journals should not be used for evaluating research

P. O. Seglen (1997)

Author and article information

Journal: arXiv preprint
Submitted: 2013-01-16
Last revised: 2013-05-10
Article ID: arXiv:1301.3748

License: http://creativecommons.org/licenses/by/3.0/ (CC BY 3.0)

Custom metadata
arXiv subject classes: cs.DL, physics.soc-ph, stat.OT
Disciplines: General physics, Information & Library science, General statistics
