      Bias, prevalence and kappa

      Journal of Clinical Epidemiology
      Elsevier BV


          Abstract

          Since the introduction of Cohen's kappa as a chance-adjusted measure of agreement between two observers, several "paradoxes" in its interpretation have been pointed out. The difficulties occur because kappa not only measures agreement but is also affected in complex ways by the presence of bias between observers and by the distributions of data across the categories that are used ("prevalence"). In this paper, new indices that provide independent measures of bias and prevalence, as well as of observed agreement, are defined and a simple formula is derived that expresses kappa in terms of these three indices. When comparisons are made between agreement studies it can be misleading to report kappa values alone, and it is recommended that researchers also include quantitative indicators of bias and prevalence.
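The decomposition described in the abstract can be illustrated numerically. The sketch below uses the standard 2×2 definitions of the indices — bias index BI = (b − c)/n and prevalence index PI = (a − d)/n — which match the quantities this paper proposes, and checks the resulting identity κ = (2pₒ − 1 − PI² + BI²)/(1 − PI² + BI²). The table counts are invented for illustration only.

```python
def agreement_indices(a, b, c, d):
    """Agreement indices for a 2x2 table of two observers.

    a = both rate positive, d = both rate negative,
    b and c = the two kinds of disagreement.
    """
    n = a + b + c + d
    po = (a + d) / n        # observed agreement
    bi = (b - c) / n        # bias index (assumed definition)
    pi = (a - d) / n        # prevalence index (assumed definition)
    # chance-expected agreement from the marginal totals
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (po - pe) / (1 - pe)   # Cohen's kappa
    return po, bi, pi, kappa

# Hypothetical table: 40 joint positives, 45 joint negatives,
# 9 + 6 disagreements.
po, bi, pi, kappa = agreement_indices(40, 9, 6, 45)

# Kappa rewritten purely in terms of the three indices:
kappa_from_indices = (2 * po - 1 - pi**2 + bi**2) / (1 - pi**2 + bi**2)
assert abs(kappa - kappa_from_indices) < 1e-12
```

The identity makes the "paradoxes" concrete: with observed agreement held fixed, a larger prevalence imbalance (|PI|) lowers kappa, while a larger between-observer bias (|BI|) raises it, which is why kappa alone can mislead across studies.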


          Author and article information

          Journal
Journal of Clinical Epidemiology
Publisher: Elsevier BV
ISSN: 0895-4356
Publication date: May 1993
Volume: 46
Issue: 5
Pages: 423-429
DOI: 10.1016/0895-4356(93)90018-V
PMID: 8501467
          © 1993

          https://www.elsevier.com/tdm/userlicense/1.0/
