
      The Validity of Peer Review in a General Medicine Journal


          Abstract

          All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs.

          Background

          Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal.

          Methods

          Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles and calculated an impact factor for published articles (Rw) by dividing the 3-year citation rate by the average for this group of papers; an Rw>1 indicates a greater than average impact.
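The relative impact factor described above can be sketched in a few lines. This is a minimal illustration with hypothetical citation counts, not the authors' actual analysis code: each article's 3-year citation count is divided by the mean count for the whole group, so Rw > 1 marks above-average impact.

```python
def relative_impact(citations):
    """Return Rw for each article: its 3-year citation count
    divided by the mean citation count of the group."""
    mean = sum(citations) / len(citations)
    return [c / mean for c in citations]

# Hypothetical 3-year citation counts for five articles
cites = [12, 3, 9, 0, 6]
rw = relative_impact(cites)
# Articles with Rw > 1 had greater-than-average impact within the group.
```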

          Results

          Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 with review), and 48 were either not resubmitted after a requested revision or were withdrawn by the authors. Of the 331 rejected manuscripts, 243 were published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. No quality-rating cutpoint accurately distinguished high-impact from low-impact articles. There was a stepwise increase in Rw across articles rejected without review, rejected after review, and accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p<0.0005). However, agreement between reviewers on quality ratings and publication recommendations was low. The editorial publication decision accurately discriminated high- from low-impact articles in 68% of submissions. We found evidence of better accuracy with a greater number of reviewers.

          Conclusions

          The peer review process largely succeeds in selecting high impact articles and dispatching lower impact ones, but the process is far from perfect. While the inter-rater reliability between individual reviewers is low, the accuracy of sorting is improved with a greater number of reviewers.

          Related collections

          Most cited references (20)


          Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?

          Background: Editorial peer review is universally used but little studied. We examined the relationship between external reviewers' recommendations and the editorial outcome of manuscripts undergoing external peer review at the Journal of General Internal Medicine (JGIM).

          Methodology/Principal Findings: We examined reviewer recommendations and editors' decisions at JGIM between 2004 and 2008. For manuscripts undergoing peer review, we calculated chance-corrected agreement among reviewers on recommendations to reject versus accept or revise. Using mixed effects logistic regression models, we estimated intra-class correlation coefficients (ICC) at the reviewer and manuscript level. Finally, we examined the probability of rejection in relation to reviewer agreement and disagreement. The 2264 manuscripts sent for external review during the study period received 5881 reviews provided by 2916 reviewers; 28% of reviews recommended rejection. Chance-corrected agreement (kappa statistic) on rejection among reviewers was 0.11 (p<.01). In mixed effects models adjusting for study year and manuscript type, the reviewer-level ICC was 0.23 (95% confidence interval [CI], 0.19–0.29) and the manuscript-level ICC was 0.17 (95% CI, 0.12–0.22). The editors' overall rejection rate was 48%: 88% when all reviewers for a manuscript agreed on rejection (7% of manuscripts) and 20% when all reviewers agreed that the manuscript should not be rejected (48% of manuscripts) (p<0.01).

          Conclusions/Significance: Reviewers at JGIM agreed on recommendations to reject vs. accept/revise at levels barely beyond chance, yet editors placed considerable weight on reviewers' recommendations. Efforts are needed to improve the reliability of the peer-review process while helping editors understand the limitations of reviewers' recommendations.
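The chance-corrected agreement mentioned above (the kappa statistic) can be illustrated for the simplest two-rater, binary case. This is a hedged sketch with invented reject/not-reject labels, not the study's actual multi-rater analysis: observed agreement is compared against the agreement expected by chance from each rater's marginal rates.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters on binary labels (1 = reject, 0 = not).
    Returns (observed - expected) / (1 - expected) agreement."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal rejection rate
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

# Hypothetical recommendations on four manuscripts
a = [1, 1, 0, 0]
b = [1, 0, 0, 0]
kappa = cohens_kappa(a, b)  # well below 1 despite 75% raw agreement
```

A kappa of 0.11, as reported for JGIM reviewers, means agreement barely above what identical marginal rejection rates would produce by chance alone.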

            Reliability of editors' subjective quality ratings of peer reviews of manuscripts.

            Quality of reviewers is crucial to journal quality, but there are usually too many for editors to know them all personally. A reliable method of rating them (for education and monitoring) is needed. Whether editors' quality ratings of peer reviewers are reliable and how they compare with other performance measures. A 3.5-year prospective observational study. Peer-reviewed journal. All editors and peer reviewers who reviewed at least 3 manuscripts. Reviewer quality ratings, individual reviewer rate of recommendation for acceptance, congruence between reviewer recommendation and editorial decision (decision congruence), and accuracy in reporting flaws in a masked test manuscript. Editors rated the quality of each review on a subjective 1 to 5 scale.

            A total of 4161 reviews of 973 manuscripts by 395 reviewers were studied. The within-reviewer intraclass correlation was 0.44 (P<.001), indicating that 20% of the variance seen in the review ratings was attributable to the reviewer. Intraclass correlations for editor and manuscript were only 0.24 and 0.12, respectively. Reviewer average quality ratings correlated poorly with the rate of recommendation for acceptance (R=-0.34) and congruence with editorial decision (R=0.26). Among 124 reviewers of the fictitious manuscript, the mean quality rating for each reviewer was modestly correlated with the number of flaws they reported (R=0.53). Highly rated reviewers reported twice as many flaws as poorly rated reviewers.

            Subjective editor ratings of individual reviewers were moderately reliable and correlated with reviewer ability to report manuscript flaws. Individual reviewer rate of recommendation for acceptance and decision congruence might be thought to be markers of a discriminating (ie, high-quality) reviewer, but these variables were poorly correlated with editors' ratings of review quality or the reviewer's ability to detect flaws in a fictitious manuscript. Therefore, they cannot be substituted for actual quality ratings by editors.

              Readers' evaluation of effect of peer review and editing on quality of articles in the Nederlands Tijdschrift voor Geneeskunde.

              Academic biomedical journals use peer review and editing to help to select and improve the quality of articles. We have investigated whether articles accepted by the Nederlands Tijdschrift voor Geneeskunde, the Dutch Journal of Medicine, were improved after peer review and editing (post-acceptance scientific and copy editing).

              400 readers of the journal (100 each of medical students, recent medical graduates, general practitioners, and specialists) were invited to participate in a questionnaire survey. The first 25 from each group who agreed to participate were included. We posted a pack containing a set of identically appearing typescripts (ie, blinding) of the submitted, accepted, and published versions of 50 articles that had been published in Ned Tijdschr Geneeskd. Each evaluator received two of the sets of versions, and each set was evaluated by one person from each group. The package also included two questionnaires: the first was used to compare the submitted with the accepted version (25 questions), the second compared the accepted with the published version (17 questions). The questions were answered on five-point scales, and were about the quality of the articles or were general/overall scores. We analysed the data as scores of 3-5 (ie, improvement) versus 1-2.

              After peer review, the quality in 14 of 23 questions (61%) was significantly improved (p = 0.03 or smaller). In particular, the overall score and general medical value were significantly improved (p = 0.00001 for each). Editing led to significant improvement in 11 of 16 questions (69%, p = 0.017 or smaller), and especially in style and readability (p = 0.001 and p = 0.004). Generally, we found no differences between the scores of the four categories of evaluators. 72% of the evaluators correctly identified which version was which.

              Evaluations by readers of the Ned Tijdschr Geneeskd indicated significant improvement of published articles after both peer review and editing. We think that peer review and editing are worthwhile tasks. We also think that possible biases would have had a negligible effect on our results (including the fact that we selected the first 25 evaluators who responded, that some evaluators may have read the published version, and that one questionnaire may have looked more scientific than the other, more editorial one).

                Author and article information

                Contributors
                Role: Editor
                Journal
                PLoS One
                plos
                plosone
                PLoS ONE
                Public Library of Science (San Francisco, USA)
                1932-6203
                2011
                25 July 2011
                6(7): e22475
                Affiliations
                [1 ]Division of General Medicine, Zablocki VA Medical Center, Milwaukee, Wisconsin, United States of America
                [2 ]Division of General Medicine, University of California Davis, Sacramento, California, United States of America
                Johns Hopkins University, United States of America
                Author notes

                Conceived and designed the experiments: JLJ KF JR MS RLK. Performed the experiments: JLJ KF JR MS. Analyzed the data: JLJ. Contributed reagents/materials/analysis tools: RLK MS. Wrote the paper: JLJ KF JR MS RLK.

                Article
                PONE-D-11-02904
                DOI: 10.1371/journal.pone.0022475
                PMC: 3143147
                PMID: 21799867
                This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
                History
                Received: 8 February 2011
                Accepted: 28 June 2011
                Page count
                Pages: 8
                Categories
                Research Article
                Medicine
                Non-Clinical Medicine
                Academic Medicine
                Communication in Health Care
                Medical Journals
                Science Policy
                Research Assessment
                Peer Review
                Publication Practices

