      The academic, economic and societal impacts of Open Access: an evidence-based review


          At an ‘Open Science Meetup’ in Berlin, held in the basement bar of a local brewery, one of the attendees asked us all a simple hypothetical question: ‘If I were a research funder going through our budget and saw that we were spending millions on Open Access (OA), how would you justify that expense to me?’ You might think people attending an Open Science meetup would have all the answers to this. Heck, even I thought I should be able to answer it! But we actually failed. Each of us made our individual case, and we failed. Our answers were incoherent and lacked convincing evidence. I couldn’t even convince myself at the time, and I would like to think Open Access is one of the things I know a bit about.

          And when you think about it, it’s a very good question. I bet every one of you could give me an answer to this off the top of your heads. Something about OA increasing citations, or being good for copyright, or because access to knowledge is a human right, or how it saves money, or because your funder told you to, or whatever. But how do you construct a convincing, comprehensive, evidence-informed answer to this question? It’s difficult.

          So I took the same question to OpenCon in Brussels, an event for students and early career researchers to spearhead the ‘open movement’. I pretended to be an impartial funder, and asked the OA advocates present to justify OA to me. Everyone failed there too: small chunks of useful evidence, but overall incoherent and underwhelming. This little experiment told me that there’s too much varied information, and too few of us who are well-informed enough, to have consistently reasonable, progressive, and evidence-based discussions around OA. I want people to be as well informed as possible about the issues, so that we can have reasonable policies, rational discussions, civil debate, and progressive decision making in the scholarly publishing ecosystem.

          So how do you fix this? The world of ‘open’ is complex and multi-dimensional, with evidence mixing with anecdote, fear competing with data, and misinformation blending with reality. If OA were a simple issue, we’d have resolved it already. The fact remains that the global research community is largely disorganised with respect to OA, as some like Richard Poynder have pointed out recently*. The reasons for this are likely as multi-faceted as the problem itself, but one point sticks out: OA ‘advocates’ need to take responsibility for the ‘open movement.’ I wrote a bit about this and accountability in a previous post here, and this is very much a related issue. But part of solving this issue entails equipping ourselves with sufficient knowledge to make the case for open.

          So we wrote a paper. I started it after OpenCon, and put out a public call for contributors through my social channels. Anyone could join at any point. Initially it was just a Google Doc where people could contribute sources, but then I shifted it to Overleaf, a public collaboration platform that uses LaTeX and a version control system to seamlessly integrate contributions from multiple authors at once. So it was an entirely open process, and a cadre of awesome people joined me. Mostly PhD students, but also a librarian! Each contributed their own perspectives, and watching the paper organically evolve was a magical experience.

          I set out to ‘make the case for open’, and it ended up being a multi-dimensional critical review with contributions from around the world. We ended up discussing copyright law, issues with OA in the ‘Global South’, innovating beyond traditional publishing models, the cost of OA, and the need for OA in fueling society, the global economy, and research. People offered comments on Twitter, via email, and on annotations on PDF versions of the article as it was being written. The process was open and dynamic, and it totally rocked!

          And we ended up with something I hope you all think is pretty awesome, and which I hope will become a valuable resource for all involved in OA discussions. It was published with F1000 Research, with submission taking all of about 5 minutes thanks to Overleaf’s integration. It was accepted after about 2 days with a light copy edit, and published, after being typeset, about 9 days after submission. And as part of the Future of Scholarly Publishing channel, it was free too! (Why isn’t this the normal process for publishing, again?)

          At the moment it’s awaiting formal peer review (F1000 Research uses a post-publication system, designed for open and rapid research communication; again, awesome). In the meantime, commenting is strongly encouraged! We’ve already been ‘Mounced’ in the comments, and I’d love more feedback. Five referees are already formally looking at it (yikes..), but that’s no reason why we can’t have everyone’s opinion, thoughts, and expertise influencing this paper. I’ll have to save a breakdown of the key points for another post, as this one is already hella long, but in the meantime we would all love any feedback (positive or negative, irrespective of who you are or who you work for), and if you could share the article with your friends and colleagues that’d be just swell.

          We are stronger as a community if we take the responsibility to equip ourselves with the knowledge required to advocate for change.

          So the final question is, is there a case for Open? You’re damn right there is (citation needed).

          *I don’t agree with Richard on all of this, but he makes some pretty insightful points.



          Ongoing debates surrounding Open Access to the scholarly literature are multifaceted and complicated by disparate and often polarised viewpoints from engaged stakeholders. At the current stage, Open Access has become such a global issue that it is critical for all involved in scholarly publishing, including policymakers, publishers, research funders, governments, learned societies, librarians, and academic communities, to be well-informed on the history, benefits, and pitfalls of Open Access. In spite of this, there is a general lack of consensus regarding the advantages or disadvantages of Open Access at multiple levels. This review aims to be a resource for current knowledge on the impacts of Open Access by synthesizing important research in three major areas of impact: academic, economic, and societal. While there is clearly much scope for additional research, several key trends are identified, including a broad citation advantage for researchers who publish openly, as well as additional benefits to the non-academic dissemination of their work. The economic case for Open Access is less well understood, although it is clear that access to the research literature is key for innovative enterprises and a range of governmental and non-governmental services. Furthermore, Open Access has the potential to save both publishers and research funders considerable amounts of financial resources. The social case for Open Access is strong, in particular for advancing citizen science initiatives and leveling the playing field for researchers in developing countries. Open Access supersedes all potential alternative modes of access to the scholarly literature by enabling unrestricted re-use and long-term stability, independent of the financial constraints of traditional publishers that impede knowledge sharing. Open Access remains only one of the multiple challenges that the scholarly publishing system is currently facing. Yet, it provides one foundation for increasing engagement with researchers regarding ethical standards of publishing. We recommend that Open Access supporters focus their efforts on working to establish viable new models and systems of scholarly communication, rather than trying to undermine existing ones, as part of the natural evolution of the scholarly ecosystem. Based on this, future research should investigate the wider impacts of an ecosystem-wide transformation to a system of Open Research.


                Author and article information

                F1000Research (London, UK), volume 5, 11 April 2016
                [1 ]Department of Earth Science and Engineering, Imperial College London, London, UK
                [2 ]Earth and Life Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
                [3 ]Medical Biotechnology Center, VIB, Ghent, Belgium
                [4 ]Department of Biochemistry, Ghent University, Ghent, Belgium
                [5 ]University Library System, University of Pittsburgh, Pittsburgh, PA, USA
                [6 ]Department of Methodology and Statistics, Tilburg University, Tilburg, Netherlands
                [1 ]School of Psychology, Cardiff University, Cardiff, UK
                [1 ]Department of Learning and Teaching Enhancement, Edinburgh Napier University, Edinburgh, UK
                [1 ]Manship School of Mass Communication, Louisiana State University, Baton Rouge, LA, USA
                [1 ]Berkman Center for Internet & Society, Harvard University, Cambridge, MA, USA
                [1 ]Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
                Author notes

                All authors contributed equally to the writing of this manuscript using the Overleaf collaborative writing platform.

                Competing interests: JPT currently blogs for the PLOS Paleo Community, and works for ScienceOpen. CHJH is a Center for Open Science ambassador. DCJ and FW are members of the Open Access Working Group of EURODOC. PM is a Research Data Alliance member. LBC works for the University of Pittsburgh, which has an Open Access library publishing department. All views presented here are strictly personal.

                Competing interests: No competing interests were disclosed.

                Competing interests: Non-financial: I'm an Open Access advocate, and so I have a vested ideological interest in seeing papers like this succeed and reach wide audiences. However, if anything, I believe that makes my peer review more critical, as I want this paper to be the best paper it can be.

                Copyright: © 2016 Tennant JP et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                Funded by: European Commission Horizon 2020 Programme
                Award ID: 634107 (PHC32-2014) ‘MULTIMOT’
                This research was partly funded by the Belgian National Fund for Scientific Research through a FRIA grant. PM acknowledges support from the European Commission Horizon 2020 Programme under Grant Agreement 634107 (PHC32-2014) ‘MULTIMOT’.
                The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

