      Is Open Access

      Interrater reliability: the kappa statistic

      research-article
      Biochemia Medica
      Croatian Society of Medical Biochemistry and Laboratory Medicine
Keywords: kappa, reliability, rater, interrater


          Abstract

The kappa statistic is frequently used to test interrater reliability. Rater reliability is important because it represents the extent to which the data collected in a study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While a variety of methods to measure interrater reliability exist, it has traditionally been measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement for its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
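The two quantities the abstract contrasts can be sketched in a few lines. This is a minimal illustration for two raters scoring nominal categories; the function names and toy ratings are illustrative, not taken from the article:

```python
# Percent agreement vs. Cohen's kappa for two raters (toy example,
# not from the article). Kappa corrects observed agreement for the
# agreement expected by chance from each rater's marginal proportions.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which the two raters assign the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)        # observed agreement
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: sum over categories of the product of the
    # two raters' marginal proportions for that category.
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten items by two raters.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "no"]

print(percent_agreement(a, b))        # 0.7
print(round(cohens_kappa(a, b), 3))   # 0.4
```

Note how the 70% raw agreement drops to a kappa of 0.4 once chance agreement (here 0.5) is subtracted out — exactly the gap between percent agreement and kappa that the article examines.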

Most cited references (11)


          The Kappa Statistic: A Second Look


            Meta-analysis of Pap test accuracy.

            A literature search identified 62 studies published by August 1992 comparing Papanicolaou (Pap) test results with histology. Critical appraisal revealed that 82% of these had potential for verification bias and that only 37% stated that cytology and histology were independently assessed. Estimates of sensitivity and specificity ranged from 11 to 99% and 14 to 97%, respectively, and were highly negatively correlated (r = -0.63). Meta-analysis was used to combine data from 59 studies to estimate the accuracy of the Pap test using a summary receiver operating characteristic curve and to examine the effect of study quality. The summary receiver operating characteristic curve suggests that the Pap test may be unable to achieve concurrently high sensitivity and specificity. For example, specificity in the 90-95% range corresponds to sensitivity in the 20-35% range. Pap test accuracy was not associated with reported study characteristics or dimensions of quality. Future primary studies should pay more attention to methodologic standards for the conduct and reporting of diagnostic test evaluations.

              Intrarater Reliability of Dual-Energy X-Ray Absorptiometry–Based Measures of Vertebral Height in Postmenopausal Women


                Author and article information

                Journal
                Biochem Med (Zagreb)
                Biochem Med (Zagreb)
                Biochemia Medica
                Biochemia Medica
                Croatian Society of Medical Biochemistry and Laboratory Medicine
                1330-0962
                1846-7482
                15 October 2012
                October 2012
                : 22
                : 3
                : 276-282
                Affiliations
                Department of Nursing, National University, Aero Court, San Diego, California
                Author notes
Corresponding author: mchugh8688@gmail.com
                Article
                biochem_med-22-3-276-4
PMC: 3900052
PMID: 23092060
                ©Copyright by Croatian Society of Medical Biochemistry and Laboratory Medicine

                This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License ( http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

History
Received: 17 August 2012
Accepted: 29 August 2012
                Categories
                Lessons in Biostatistics

Keywords: kappa, reliability, rater, interrater
