      Deep impact: unintended consequences of journal rank


          Abstract

          Most researchers acknowledge an intrinsic hierarchy in the scholarly journals (“journal rank”) that they submit their work to, and adjust not only their submission but also their reading strategies accordingly. On the other hand, much has been written about the negative effects of institutionalizing journal rank as an impact measure. So far, contributions to the debate concerning the limitations of journal rank as a scientific impact assessment tool have either lacked data, or relied on only a few studies. In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort and discovery functions of the current journal system.


Most cited references (127)


          Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias

Background
The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.

Methodology/Principal Findings
We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies have found that statistically significant outcomes had a higher odds of being fully reported compared to non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies.

Conclusions
Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.

            Why the impact factor of journals should not be used for evaluating research.

             Per Seglen (1997)

              The skewness of science

               Per Seglen (1992)

                Author and article information

Journal
Frontiers in Human Neuroscience (Front. Hum. Neurosci.)
Frontiers Media S.A.
ISSN: 1662-5161
24 June 2013; Volume 7
                Affiliations
1Institute of Zoology—Neurogenetics, University of Regensburg, Regensburg, Germany
2School of Social and Community Medicine, University of Bristol, Bristol, UK
3UK Centre for Tobacco Control Studies and School of Experimental Psychology, University of Bristol, Bristol, UK
                Author notes

                Edited by: Hauke R. Heekeren, Freie Universität Berlin, Germany

                Reviewed by: Nikolaus Kriegeskorte, Medical Research Council, UK; Erin J. Wamsley, Harvard Medical School, USA

*Correspondence: Björn Brembs, Institute of Zoology—Neurogenetics, University of Regensburg, Universitätsstr. 31, 93040 Regensburg, Bavaria, Germany. e-mail: bjoern@brembs.net
Article
DOI: 10.3389/fnhum.2013.00291
PMCID: PMC3690355
PMID: 23805088
                Copyright © 2013 Brembs, Button and Munafò.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.

                Page count
                Figures: 4, Tables: 1, Equations: 0, References: 148, Pages: 12, Words: 10671
                Categories
                Neuroscience
                Review Article

                Comments

I do not understand why the mainstream journals do not implement an open-review feature. Technically, it is trivial!

                2015-06-12 07:50 UTC
                3 people recommend this

I believe that the reason is quite obvious: publishers have to justify why they charge, on average, more than about 5,000 USD per article for toll-access journals in the scientific, technical, and medical (STM) disciplines, or sometimes even more in Open Access (OA) article processing charges. If we put the technical costs for typesetting, copyediting, XML conversion, etc. at no more than 100-150 USD for an article of average length in STM, there remains a huge gap in the publication charges. Publishers usually claim that these costs are associated with the internal 'moderation of the peer-review process'. If they introduced an open (post-publication) peer-review system, this justification (if any) would be gone. That is why they don't like these features, I am afraid.

And even more amazingly, for the majority of the 20,000+ scholarly journals in STM there is no in-house editorial office at the publisher; rather, external scholars (at universities or institutions) have to invite reviewers themselves as editors-in-chief or managing editors, for no financial compensation or only a tiny honorarium. So could somebody please tell me what publishers charge some thousands of USD per article for...?

                2015-06-12 11:09 UTC

It is remarkable that today both individual scholars and the research of institutions are evaluated on the basis of the Impact Factor (IF). That metric was introduced decades ago for a completely different purpose and was only later adopted by the academic community as a tool for evaluation. A poor tool, as those researchers who understand how the IF is calculated (sketched below, after this comment) have long recognized. Based on the somewhat frightening observation that a majority of scholars continue to support an inappropriate system by preferentially submitting their manuscripts to high-IF journals, the authors demonstrate that this behaviour is not yet supported by any evidence.

Based on the findings of that study, one of the authors, Björn Brembs, recently concluded in a tweet (@brembs, 6/12/15) that the present system suffers from the paradox that everybody strives to publish in precisely those journals where the worst science can be found.

                2015-06-12 07:22 UTC
                One person recommends this
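
For reference, a sketch of the calculation the comment above alludes to. The two-year Impact Factor of a journal for year Y, as published in the Journal Citation Reports, is roughly:

\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}

Note that the numerator counts citations to all items, while the denominator counts only "citable items" (chiefly research articles and reviews); this asymmetry, together with the skewness of citation distributions, is among the standard criticisms of the metric.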

                Comment on this article