
      The need for post-publication peer review in plant science publishing

      editorial


          Abstract

The discussion among scientists about the quality of a published paper should be a constant, dynamic process, even beyond the act of publication. A published paper should not be the final step in the life of a scientific manuscript; critical analysis after publication, through post-publication peer review (PPPR), should be part of a new and dynamic process embraced by scientists, editors, and publishers alike (Hunter, 2012) as one form of ensuring scientific and academic integrity (Teixeira da Silva, 2013a). Traditional scientific publishing relies primarily on a three-step process: (1) submit; (2) peer review and edit; (3) publish. However, each of these steps has clearly documented problems.

The first problem, related to step 1, involves the intrinsic honesty of the scientist and is the basis upon which the success of all ensuing publishing steps depends. Issues such as appropriate authorship, correct data representation, and the faithful presentation of data without manipulation all form part of this first requirement. The fact that this base of honesty has been breached in many instances has forced publishers to insist on increasingly complex signed declarations upon submission of a paper pertaining to the originality of data, the single nature of submission, and conflicts of interest (COIs). Up until submission, trust and honesty lie in the hands of scientists and authors. Apart from such signed declarations, it is rare for publishers to run detailed background checks on authorship, affiliations, or COIs prior to peer review, primarily because such aspects are difficult and time-consuming to investigate or verify, especially with a global authorship. More recently, publishers have tended to run detailed checks for plagiarism or duplication, aided by more databases and stronger web-search engines, but these checks may fail to reveal duplicate submissions (a toy illustration of such a check is sketched below). Therefore, although there has been an increase in the level of verification by publishers in the first step, the system is still far from fail-safe.

The moment a publisher receives a manuscript for peer review, responsibility is transferred from the scientist to the editors and the publisher (Teixeira da Silva, 2013b). Unlike step 1, in which trust was earned from the author by the publisher, in step 2 trust is earned from the publisher by the author and the scientific community. Assuming that the author has been honest in step 1, the author would expect some basic responsibilities to be met by the publisher, in practice most likely by the editor-in-chief (EIC) and/or editorial board and peer reviewers. Such responsibilities would primarily include: (a) an unbiased peer review (Chase, 2013) of the paper within a reasonable amount of time, ideally a double-blind review in which the identity of the authors is unknown to peer reviewers and vice versa, to avoid potential COIs; (b) the ability to protect personal information during the peer review; (c) the ability to implement quality control (QC) of various issues (data, language, structure, literature representation) and to ensure that all peer and editorial requirements made of authors are met. Regarding the last aspect, misrepresentation of the literature, or a lack of strict control of the published literature on the part of authors and editors, has led to the establishment of a new concept, snub publishing (Teixeira da Silva, 2013b).
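To make the duplication check mentioned above concrete, here is a minimal sketch of one common approach, word n-gram shingling compared by Jaccard similarity. The tokenization, shingle size, and threshold are illustrative assumptions, not any publisher's actual pipeline.

```python
import re

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Break a text into overlapping word n-grams ("shingles")."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|, from 0.0 to 1.0."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_duplicated(manuscript: str, published: str,
                     threshold: float = 0.5) -> bool:
    """Flag a submission whose shingle overlap with an already-published
    text exceeds the (illustrative) threshold."""
    return jaccard(shingles(manuscript), shingles(published)) >= threshold
```

A check of this kind only works against text that is already indexed; two copies of the same manuscript submitted simultaneously to different journals match nothing in any database, which is precisely the blind spot noted above.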
A first-ever case study in the plant sciences involving a PPPR of the Anthurium tissue culture literature deserves particular attention, since it reveals how the loss of honesty and/or QC can result in the "academic corruption" of the literature, thus weakening trust in its findings (Teixeira da Silva, 2013c, in press). Finally, once the peer review has been completed and the paper has been accepted for publication, the responsibilities of the editors, EIC, or publisher do not end there. Accurate representation of the final data set and the orderly, structured display of tables and figures lie exclusively in the hands of the publisher, even when authors have been sent a proof. Debate regarding the costs of publishing, intellectual property, and open access (OA) vs. traditional publishing, although important, is marginal to the responsibilities focused on in this paper. However, central to the success of PPPR would be the unfettered access of the public and scientists to a published work for critical analysis.

Without considering issues such as metrics or the debate over the impact factor, which add "noise" to the de facto quality of a scientific paper, only two key aspects count when discussing the quality of a paper: (a) the originality of the data set and study; (b) the ability of the author–editor–peer triad to detect and correct as many errors as possible to ensure academic integrity. Understandably, different levels of research and of QC by peer reviewers, editors, or publishers from different cultures may lead to multiple interpretations of issues related to publishing quality and/or ethics, such as authorship, self-plagiarism, or duplication. However, the reliability of and responsibility associated with the traditional three-step publishing process can be eroded or lost. How, then, does the scientific community correct or improve the scholarly record once the publishing process is traditionally perceived to be complete, i.e., once a paper is published? Once again, this relies on the responsibility of the authors, editors, peers, and publishers. If the first step of the process is flawed, then most likely all ensuing steps of QC will also be flawed. For example, a scientist who has falsified data might not necessarily be honest or forthright about their dishonesty, even if errors are discovered in a PPPR. An editor, EIC, or peer reviewer who has shown bias or poor QC during the peer review process might not wish to admit, following PPPR, to error and failure in the process of academic QC. Finally, a publisher that has assumed that all aspects leading up to the publication of a paper were well conducted, namely ethical and academic standards, either because it was led to believe that integrity was in place or because it wanted to believe that such integrity existed, might not be willing to assume responsibility for the entire process for which it was in fact originally responsible. To do so, a publisher would in essence have to allow an across-the-board PPPR of any paper published in its journal fleet, which could overload an already strained publishing process. How, then, can errors, fraud, or bias be judged if the key players are part of the problem? The only realistic solution is through PPPR. Although still at a nascent phase in plant science, PPPR can and should only be conducted by specialists in the field.
This would allow a peer who is independent of the entire process related to a paper's publication history, and thus not influenced by bias, to critically judge some or all aspects of that paper that have undermined QC in the traditional publishing process. What the PPPR reviewer does, in effect, is step in to cover the QC gaps that the authors, peer reviewers, editors, EICs, or publishers have failed to address. Understandably, a paper from a predatory OA publisher might present multiple linguistic, academic, or scientific errors (Bohannon, 2013) relative to a paper published in leading, respected, and/or established plant science journals. Yet the need to correct the literature must always exist. Depending on the number and severity of errors, and on proof of partial or full duplication of text (plagiarism) or self-plagiarism of data, text, figures, or tables, all of which weaken the academic integrity of the scientific literature, the publisher has the responsibility to retract a paper or to issue an expression of concern or an erratum, even if the authors disagree. The true risk to academic and publishing integrity ultimately arises when non-academic or scientifically unsound literature is referenced by scientists in other scientific papers. When the level of errors is limited, or where issues, concepts, or data are open to debate or multiple interpretations, a PPPR challenge should also be published alongside the original paper (e.g., as an OA PDF file) or as a letter to the editor. Ideally, such a PPPR should include the criticisms or queries, including verifiable proof; a response to those claims by the authors; and the formal position of the EIC and/or editors and publisher. To be fair and balanced, where data are in question, the original data sets should be published as annexes, as should the original peer reviewers' reports.

Even though retractions remain extremely rare in the plant science literature, there are as yet no rules or precedents for PPPR in plant science publishing. This concept should, however, be rapidly accepted by the plant science community and integrated as an established industry norm. This will involve a concerted effort by peers to dedicate their time freely to carefully scrutinizing the minutiae of published papers, based on a conscientious desire to correct the scientific literature. Ideally, PPPR reports should involve line-by-line or even section-by-section analyses, wherever warranted, rather than broad comments and/or unspecific statements that provide no detailed insight into the actual errors or weaknesses. All parties should be given a fair opportunity and sufficient time to assess the claims and to respond to them. In cases where the publisher is reticent, the authors are uncooperative, or the independent peer does not feel safe throughout the PPPR process, other alternatives exist. One, the subject of both praise and criticism, is anonymity. Using an anonymous report that is supported exclusively by facts, an anonymous PPPR reviewer can expose the problems of a paper to authors, the EIC, editors, and publishers without fear of reprisals (physical, professional, or psychological) by any of the parties the reviewer is questioning. It is then incumbent on the latter three parties to take such reports seriously and to launch and complete a detailed investigation into the claims. Incorrect challenges or misinterpretations by the PPPR reviewer should also be noted.
Where the parties involved are unresponsive, alternative forms of PPPR are possible, preferably in OA format: (a) independent, self-published critiques; (b) scientific forums such as Retraction Watch or other similar blogs, PubPeer, or the new prototype PubMed Commons. Such forums for open critique of the literature will no doubt increase over time as PPPR becomes the new norm. The trend is absolutely clear. Unless peers who feel strongly about errors in the scientific literature step forward and make a proactive, concerted effort, unless publishers who claim ethical standards and peer review support the notion of open-ended publishing and QC through PPPR, and unless we as plant scientists accept that there are serious problems in plant science and in plant science publishing, and that academic and ethical errors in the literature need to be urgently addressed, the efforts of those who publish good, honest, and valuable research will very rapidly be dwarfed by the ever-expanding pool of fraudulent and/or false information that is becoming increasingly abundant in the literature but remains, as yet, poorly critiqued and quantified.

          Related collections

Most cited references (2)


          Post-Publication Peer Review: Opening Up Scientific Conversation

Conventional Peer Review: Rights and Wrongs

Peer review is broken. We have all heard that phrase many times in recent years. It's become a truism, a shorthand complaint about the status quo that rarely extends into a proposal for change. And even those who do not believe standard peer review is beyond repair acknowledge that there are problems; everyone can see the cracks. So what's wrong? From an author's point of view, a lot. Peer review is slow; it delays publication. It's almost always secret; authors do not know who is reviewing their work – perhaps an ally but, equally, perhaps a competitor. It can block ingenuity; think of the classic case of Lynn Margulis and the 15 or so journals that rejected her ground-breaking article "On the origin of mitosing cells" (Sagan, 1967) before it was finally accepted by The Journal of Theoretical Biology. And there's a lot wrong for reviewers too: what proportion of referee reports are second, third, or even fourth round reviews? A referee's hard work may be contributing nothing new to an author who would rather take his or her chances with another journal than do the extra work suggested by reviewers for journals one to three. Does conventional peer review work for publishers? Well, yes and no. Yes, at top-flight journals like Nature or NEJM peer review is a gatekeeper that helps guarantee publication of only the most interesting articles, and yes, in theory, it helps guard against the publication of flawed work, but it's expensive – even though reviewers work for free – and it's time-consuming. Nature or NEJM review thousands of papers each year that would not make it into their journals; for third-, fourth-, or fifth-tier journals, somewhere further down the inevitable cascade, referees will often be doing work that has been done already on an article that was written months ago. If standard peer review is intended to help ensure that an article is good enough to be published, is it working? And in this context, what does "good enough" even mean? Since most papers will eventually be published, cascading until they find a journal, that means that most papers are good enough for someone and peer review's supposed qualitative gatekeeper role is not supportable. The impact of peer review on the publication of an article is not so much a question of yes or no; it's more likely to be a question of when and where. Yet even acknowledging the flaws, redundancies, and costs of the conventional peer review system, it is clear that we need peer review. The more specialized science becomes, the more we must rely on experts to help us navigate the multiplicity of subject areas we are not expert in ourselves. Peer reviewers are those experts, and we depend on the refereeing process to protect us from sloppy work and invalid conclusions. So peer review is important, but the way it happens is problematic. At F1000, we believe that most of the weaknesses of standard peer review can be linked to two core issues: first, that it is conducted pre-publication and, second, that it is secret. Pre-publication peer review allows journals and reviewers to delay, filter, and interrupt the essential conversation of science, and secrecy makes these problems impossible to resolve.

Post-Publication Peer Review: Two Models from Faculty of 1000

A little background: Faculty of 1000 began in 2002 with a post-publication review service called F1000 Biology.
Its remit was (and still is) to work with named experts to identify and recommend the most interesting papers published across 24 different subject areas in biology. In 2006 F1000 Medicine joined it – with the same aim, more experts, and coverage of 20 medical specialties. We merged the two services in 2010, and biology and medicine are now both covered at F1000.com. Since then, we have launched F1000 Posters, an open access repository for posters and presentations – again in biology and medicine – and we are now in the early stages of launching our new open access, post-publication peer review journal, F1000 Research. Faculty of 1000 practices two forms of post-publication peer review: primary, open refereeing of articles after they are published in F1000 Research, and secondary peer review of the best already-refereed articles, published in any biology or medicine journal, at F1000.com. Both are illustrations of Clay Shirky's "publish then filter" model (Shirky, 2008), and each adds value to scientific discourse in its own way. I will describe our secondary post-publication review process first.

Secondary Post-Publication Peer Review

The F1000 article recommendation service applies a layer of positive filtering on top of traditionally peer reviewed literature; we review already-published biology and medicine in order to identify and promote the best work. Our 10,000 named Faculty Members and their Associates select articles that impress them, regardless of source, and write brief recommendations explaining what makes the work significant and putting the science in perspective. These recommendations and comments, along with links to the original articles, are published on F1000.com. Why is this a useful thing to do? It's useful because the vast volume of material published each year (or each day) makes it difficult for researchers to stay up to date with their own specialized fields, let alone with peripheral fields – all those other subject areas they should be keeping an eye on. Sure, you can search for articles and find, more or less, what you are looking for, but it's helpful to have access to expert opinion for timely guidance on what's especially significant and why. The fact that F1000's reviewers are named puts their opinions in perspective. No one has ever suggested that our F1000 Faculty Members should conduct this form of post-publication review anonymously.

Primary Post-Publication Peer Review

F1000 Research, F1000's new primary open access publishing program in biology and medicine, publishes immediately and offers fully open, post-publication peer review. We published our first articles in mid-July and are planning for a full launch at the end of this year. Articles submitted to F1000 Research are first processed through an in-house sanity check and then, assuming they pass, published immediately. Post-publication, they are subjected to formal peer review. Referees' reports are published on the site and all referees are named. The most important task for our referees is to tell us immediately whether or not an article is good science. We do not need to know if it's exciting, or novel, or ground-breaking; we simply want to know that it's valid, that it's sensible work, carefully done. We expect the vast majority of submissions to be approved as good science. If it is good science, an article will be marked as such.
If it's not, or if it's good science but the referee has reservations, we require that the referee add a report describing the problems and – if applicable – suggesting improvements. We encourage, but do not require, referees to add reports to articles they have approved as good science. Authors have the opportunity to respond to a referee's comments and are encouraged to update their articles and publish revised versions on the site. All versions are separately citable. All articles and all versions are clearly marked with their referee status, and articles that have not yet been refereed are labeled as "Awaiting Review." The strengths of this model are that it's fast: all good science can be published immediately and become part of the record, to the benefit of scientists and others worldwide; it's fair: publication cannot be blocked or slowed by the refereeing process; and it's open: openness discourages bias. We do not see many weaknesses or risks with this model ourselves – standard peer review has few fans and is overdue for change – but then you might expect us to say that. We do understand, though, that there are concerns. These include:

– Is there a risk that F1000 Research will publish junk? No, there is not. It will publish good science and let the community decide what the ultimate value of a specific piece of work is. As an aside, we expect that less junk – however one might define that term in science – will be submitted to F1000 Research than to conventional journals, because few people will want to see a severely negative review of their work become part of the public record. Because F1000 Research will publish immediately then review openly, sloppy work will be publicly described as such.

– OK, if not junk, then uninteresting science? Maybe, maybe not. Uninteresting science is still science, and we believe it should be published. There is a reason for top-line journals to sharply restrict what they publish; that's how they create and maintain their identities and Impact Factors. But it's hard to argue that such restrictions on scientific discourse are, overall, a good thing. We believe they are not. Valid science should be published.

– No reviewer will want to be openly negative about another scientist's work: Having now published our first articles, we are seeing in real time that this is not the case. Referees are happy to criticize, and authors are happy to be able to respond, to present their case. And because everything is happening in the open, interested scientists can, for the first time, read the back-and-forth and make up their own minds.

F1000 Research's version of "publish then filter" is an innovation in life-science publishing, and no doubt additional concerns will arise as we fine-tune our model. However, it's clear to us that the research community as a whole is more than ready to contemplate and, we believe, support real change. Complaints about conventional, pre-publication, closed peer review systems are mounting, and the risks associated with our "publish first/referee openly later" system seem relatively trivial when compared with the increasing expense and frustration associated with the status quo. We were the inventors of and original advocates for open access. We created BioMed Central, helped set up PubMed Central, and fought the publishing establishment for years to prove that open access can work, that it can be a profitable alternative to standard subscription models. F1000 Research and its novel publishing model take openness to the next level.
Open access removes barriers for readers. Open, post-publication refereeing removes barriers for readers and authors alike, and it refocuses the role of peer review from, at its worst, a behind-the-scenes variety of censorship to, at its best, the process of expert criticism and advice that has always been its core and upon which the progress of science depends.
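As a compact restatement of the F1000 Research lifecycle described above, here is a hypothetical data model (not F1000's actual software) showing how immediate publication, named referee reports, and separately citable versions fit together.

```python
from dataclasses import dataclass, field
from enum import Enum

class RefereeStatus(Enum):
    AWAITING_REVIEW = "Awaiting Review"   # published, not yet refereed
    APPROVED = "Approved"                 # judged to be good science
    RESERVATIONS = "Approved with Reservations"
    NOT_APPROVED = "Not Approved"

@dataclass
class RefereeReport:
    referee_name: str       # all referees are named, never anonymous
    status: RefereeStatus
    comments: str = ""      # required when status is not APPROVED

@dataclass
class ArticleVersion:
    version: int
    doi: str                # each version is separately citable
    reports: list[RefereeReport] = field(default_factory=list)

    def display_status(self) -> str:
        """Articles are published immediately and labeled by referee
        status; unrefereed versions read 'Awaiting Review'."""
        if not self.reports:
            return RefereeStatus.AWAITING_REVIEW.value
        return "; ".join(r.status.value for r in self.reports)
```

The "publish then filter" property is visible in the types: a version carries a DOI regardless of whether its list of reports is still empty.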

            The Shadow of Bias

Recent philosophical dissection of the scientific method can be caricatured as a polar debate between Karl Popper's sober view of objective development and falsification of hypotheses and Thomas Kuhn's more glamorous espousal of a role for ideology and subjectivity; "real science" as performed by the authors of this and other journals is probably a rich mix of the two. But while scientific ideology is arguably a necessary ingredient for paradigm shifts (a phrase coined by Kuhn himself), it has an unfortunate flipside. Although cases of overt scientific fraud are thankfully quite rare and actively policed by scientists, administrators, and funding agencies, there are many more subtle ways by which scientific results are influenced by ideologies, leading to bias in what is reported in the literature.

It is well known, for example, that results in support of a given hypothesis are more likely to be published in a "higher impact" journal than are negative results, leading to what's known as "publication bias." Even within a study, bias can emerge from the choice of experimental design and/or the presentation and analysis of the results. Such bias is clearly counterproductive to scientific progress, but few scientists can reasonably claim to have never succumbed to at least some bias in their studies.

In biomedical science, animal models are essential to triage possible therapeutic interventions for human diseases prior to possible clinical trials. However, such studies might also be particularly prone to biases from scientists who have personal, professional, and financial incentives to publish important and exciting results. As a result, dozens of studies are often published that examine the effect of a particular intervention on animal models. Because experimental outcomes of the same treatment are often quite variable, meta-analysis can be used to combine all studies of that intervention into a single analysis. Meta-analysis takes the magnitude of difference between treatments (e.g., drug versus placebo), known as an effect size, from a single study, and then combines effect sizes between studies on the same topic (along with estimates of sample size and variance) to allow detection of the overall magnitude of effect. In this way, even if studies give somewhat conflicting answers to the same treatment, an overall effect among studies can be calculated to achieve a more conclusive answer.

While meta-analysis is a powerful tool to overcome the variation among studies and arrive at an answer to a particular scientific question (e.g., does a particular intervention alleviate the symptoms of a disease?), it is less powerful in its ability to detect publication bias and the selective presentation of analyses. In the biomedical sciences, such biases not only slow the progression of science, but they could also result in bringing ineffective or harmful substances to clinical trial, creating considerable financial and health costs. Thus, it is important to understand just how rampant these biases are. In the current issue of PLOS Biology, Tsilidis and colleagues take the bold step of examining bias by employing a relatively new type of approach: a sort of meta-analysis of meta-analyses. This allowed them to assess whether the numbers of studies finding statistically significant effects of a biomedical intervention were higher than what would be expected if there were no bias.
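Before turning to the specifics of that study, the effect-size pooling just described can be illustrated with a minimal fixed-effect, inverse-variance sketch in Python; the three studies below are invented numbers, and the analyses discussed here used more elaborate models.

```python
import math

def pool_effect_sizes(effects, variances):
    """Fixed-effect inverse-variance meta-analysis: each study's effect
    size is weighted by the inverse of its variance, so more precise
    studies count more toward the pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of pooled effect
    return pooled, se

# Three hypothetical studies of one intervention: conflicting
# individual results, one pooled answer.
effects = [0.80, 0.10, 0.45]     # standardized mean differences
variances = [0.20, 0.05, 0.10]   # within-study variances
pooled, se = pool_effect_sizes(effects, variances)
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI)")
```

Here the pooled estimate (0.30) sits closest to the most precise study, which is exactly the behavior inverse-variance weighting is designed to produce.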
Specifically, they analyzed 160 separate meta-analyses comprising more than 1,000 studies that used animal models to evaluate the efficacy of interventions for six major neurological disorders (Alzheimer disease, multiple sclerosis, two types of stroke, Parkinson disease, and spinal cord injury), 4,445 comparisons in all. A large proportion of these meta-analyses (nearly 70%) reported an overall positive effect of the tested interventions on the affliction. However, most of these meta-analyses also reported a very large amount of variation among studies, indicating uncertainty about the true effect size. In addition, nearly half of the meta-analyses were influenced by a "small-study effect," where studies with smaller sample sizes had substantially different effect sizes than those with larger sample sizes.

To take their analysis to the next level, Tsilidis and colleagues examined the amount of "excess significance," which asks whether the observed number of studies in a meta-analysis that gave statistically significant results is greater than would be expected under a plausible scenario with no bias. To define plausibility, the authors took the study with the lowest standard error as being the most precise and thus closest to the true effect size. Over all 4,445 comparisons, the observed number of significant results (1,719) was nearly twice the expected number (919), indicating considerable bias. Such bias was present in studies on all six neurological disorders and when analyzing the data in a number of different ways, including relaxing the assumptions about the true effect size.

In all, Tsilidis and colleagues suggested that only 30% of the 160 meta-analyses they examined showed a significant response to an intervention while displaying neither small-study effects nor excess significance. And of those, only eight had a sample size of more than 500 animals, leading the authors to conclude that a large proportion of biomedical studies, at least on these six important neurological disorders, were strongly biased towards finding larger effects of interventions than truly exist. From this, they make the important observation that, although inherent differences between animals and humans certainly play a role, biases towards finding positive effects might explain a significant number of cases where seemingly promising interventions from animal studies failed in clinical trials with humans.

In the recipe for good science, a pinch of Kuhnian ideology allows paradigms to shift. However, the results of Tsilidis and colleagues emphasize that this pendulum can swing too far and that a healthy dose of Popperian falsifiability is necessary to restrict the inevitable creep of bias into scientific endeavor. With increasing numbers of humans afflicted with neurological disorders, millions of animals sacrificed in the name of research, and billions of dollars spent on health care, it is imperative that biomedical scientists take action to alleviate these biases. Tsilidis and colleagues advocate a number of such actions, including the development of standard reporting protocols, preregistration of experimental design, and provisioning of raw data to the broader community, all of which should allow more efficient development of disease interventions from animal models to clinical trials.

Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al. (2013) Evaluation of Excess Significance Bias in Animal Studies of Neurological Diseases. PLoS Biol. doi:10.1371/journal.pbio.1001609
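The excess-significance test summarized above can be sketched in a few lines: take the most precise study's estimate as the plausible true effect, compute each study's power to detect it, and compare the summed power (the expected number of significant results) against the observed count. All numbers below are invented for illustration; the published analysis applied this logic across 4,445 comparisons.

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def study_power(true_effect, se, z_crit=1.96):
    """Power of a two-sided z-test to detect `true_effect` given a
    study's standard error."""
    z = true_effect / se
    return (1.0 - normal_cdf(z_crit - z)) + normal_cdf(-z_crit - z)

def expected_significant(true_effect, standard_errors):
    """Expected number of significant studies if there were no bias:
    the sum of each study's power against the assumed true effect."""
    return sum(study_power(true_effect, se) for se in standard_errors)

# Hypothetical meta-analysis: the true effect is taken from the most
# precise study (smallest standard error), per the approach above.
ses = [0.10, 0.25, 0.30, 0.40, 0.50]
true_effect = 0.30   # estimate from the SE = 0.10 study, say
exp = expected_significant(true_effect, ses)
observed = 4         # significant results actually reported
print(f"expected {exp:.1f} significant studies, observed {observed}")
```

With these invented inputs the expected count is roughly 1.5 against 4 observed, the same kind of gap as the 919 expected versus 1,719 observed reported in the study.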

              Author and article information

              Journal
Frontiers in Plant Science (Front. Plant Sci.)
Frontiers Media S.A.
ISSN: 1664-462X
Published: 04 December 2013
Volume: 4
Article: 485
              Affiliations
P. O. Box 7, Miki-cho Post Office, Ikenobe 3011-2, Kagawa-ken 761-0799, Japan
              Author notes

              This article was submitted to Crop Science and Horticulture, a section of the journal Frontiers in Plant Science.

              Edited by: Rina Kamenetsky, The Volcani Center, Israel

              Article
DOI: 10.3389/fpls.2013.00485
PMCID: PMC3850236
PMID: 24363658
              Copyright © 2013 Teixeira da Silva.

              This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

              History
Received: 25 October 2013
Accepted: 10 November 2013
              Page count
              Figures: 0, Tables: 0, Equations: 0, References: 7, Pages: 3, Words: 2109
              Categories
              Plant Science
              Opinion Article

              Plant science & Botany
accountability, editorial responsibilities, transparency, bias, ethics, integrity, predatory publishing, snub publishing

              Comments

The author describes in great detail the traditional pre-publication workflow of peer review in scholarly journals. That classical concept of quality assessment in academic publishing was a proven way to maintain a certain level of research quality for decades in the print age. However, the digital age and the rise of the internet and social networks have created many opportunities to develop new and more advanced concepts of peer review. Post-publication peer review (PPPR) could be an opportunity to foster academic discourse about new research among scientists. In contrast to the classical gate-keeping function of pre-publication peer review, PPPR enables an open, transparent, and non-static discussion of new results in science. The author develops the concept of PPPR for publishing results in plant science, but one can easily extrapolate the findings of the paper to general science and other academic disciplines. That is why I recommend reading it.
              2015-02-15 12:17 UTC
