Inter-rater reliability of nursing home quality indicators in the U.S.


      Abstract

Background: In the US, quality indicators (QIs) profiling and comparing the performance of hospitals, health plans, nursing homes and physicians are routinely published for consumer review. We report the results of the largest study of inter-rater reliability done on the nursing home assessments that generate the data used to derive publicly reported nursing home quality indicators.

      Methods: We sampled nursing homes in six states, selecting up to 30 residents per facility, who were observed and assessed by research nurses on 100 clinical assessment elements contained in the Minimum Data Set (MDS); these assessments were compared with the most recent assessment in the record done by facility nurses. Kappa statistics were generated for all data items and derived for 22 QIs over the entire sample and for each facility. Finally, facilities with many QIs with poor kappa levels were compared on selected characteristics to facilities with many QIs with excellent kappa levels.

      Results: A total of 462 facilities in six states were approached and 219 agreed to participate, yielding a response rate of 47.4%. A total of 5758 residents were included in the inter-rater reliability analyses, around 27.5 per facility. Patients resembled the traditional nursing home resident: only 43.9% were continent of urine and only 25.2% were rated as likely to be discharged within the next 30 days. Resident-level comparative analyses revealed high inter-rater reliability (kappa >.75 for most items). Using the research nurses as the "gold standard", we compared composite quality indicators based on their ratings with those based on the facility nurses' ratings. All but two QIs had adequate kappa levels, and 4 QIs had average kappa values in excess of .80. We found that 16% of participating facilities performed poorly (kappa <.4) on more than 6 of the 22 QIs, while 18% of facilities performed well (kappa >.75) on 12 or more QIs. No facility characteristics were related to the reliability of the data on which QIs are based.

      Conclusion: While a few QIs being used for public reporting have limited reliability as measured in US nursing homes today, the vast majority of QIs are measured reliably across the majority of nursing facilities. Although information about the average facility is reliable, how the public can identify those facilities whose data can be trusted and whose cannot remains a challenge.
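      The agreement statistic used throughout the study is Cohen's kappa. As a minimal sketch of how kappa is computed for a single binary MDS item rated by two nurses (the counts below are hypothetical, not data from the study):

```python
def cohen_kappa(table):
    """Cohen's kappa from a square agreement table.

    table[i][j] = count of cases where rater A assigned category i
    and rater B assigned category j.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    p0 = sum(table[i][i] for i in range(k)) / n              # observed agreement
    row = [sum(table[i]) for i in range(k)]                  # rater A marginals
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]  # rater B marginals
    pe = sum(row[i] * col[i] for i in range(k)) / n ** 2     # chance-expected agreement
    return (p0 - pe) / (1 - pe)

# Hypothetical 2x2 table for one binary item (e.g. continence yes/no):
# rows = facility nurse, columns = research nurse.
table = [[40, 5],
         [3, 52]]
print(round(cohen_kappa(table), 3))  # → 0.838
```

      On the study's scale, this hypothetical item would count as excellent agreement (kappa >.75), while values below .4 were treated as poor.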

      Most cited references (38)

      High agreement but low kappa: I. The problems of two paradoxes.

      In a fourfold table showing binary agreement of two observers, the observed proportion of agreement, p0, can be paradoxically altered by the chance-corrected ratio that creates kappa as an index of concordance. In one paradox, a high value of p0 can be drastically lowered by a substantial imbalance in the table's marginal totals, either vertically or horizontally. In the second paradox, kappa will be higher with an asymmetrical rather than symmetrical imbalance in marginal totals, and with imperfect rather than perfect symmetry in the imbalance. An adjustment that substitutes kappa max for kappa does not repair either problem, and seems to make the second one worse.
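      The first paradox can be reproduced numerically. The sketch below (hypothetical counts, not from the paper) holds the observed agreement fixed at p0 = .90 and varies only the balance of the marginal totals:

```python
def kappa_2x2(a, b, c, d):
    """Cohen's kappa for a fourfold table [[a, b], [c, d]]."""
    n = a + b + c + d
    p0 = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance-expected agreement
    return (p0 - pe) / (1 - pe)

balanced = kappa_2x2(45, 5, 5, 45)  # marginals 50/50 for both observers
skewed = kappa_2x2(85, 5, 5, 5)     # marginals 90/10: one category dominates
print(round(balanced, 2), round(skewed, 2))  # → 0.8 0.44
```

      Both tables agree on 90 of 100 cases, yet the imbalanced marginals nearly halve kappa, which is the "high agreement but low kappa" effect the title describes.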

        Designing the national resident assessment instrument for nursing homes.

        In response to the Omnibus Reconciliation Act of 1987 mandate for the development of a national resident assessment system for nursing facilities, a consortium of professionals developed the first major component of this system, the Minimum Data Set (MDS) for Resident Assessment and Care Screening. A two-state field trial tested the reliability of individual assessment items, the overall performance of the instrument, and the time involved in its application. The trial demonstrated reasonable reliability for 55% of the items and pinpointed redundancy of items and initial design of scales. On the basis of these analyses and clinical input, 40% of the original items were kept, 20% dropped, and 40% altered. The MDS provides a structure and language in which to understand long-term care, design care plans, evaluate quality, and describe the nursing facility population for planning and policy efforts.

          Coefficient of agreement for nominal scales

            J. Cohen (1960)

            Author and article information

            Affiliations
            [1] Brown University Department of Community Health & Center for Gerontology and Health Care Research, Providence, RI, USA
            [2] Hebrew Rehabilitation Center for Aged, Research and Training Center, Boston, MA, USA
            [3] Abt Associates, Inc., Cambridge, MA, USA

            Journal
            BMC Health Services Research (BMC Health Serv Res), BioMed Central, London
            ISSN: 1472-6963
            Published: 4 November 2003; Volume 3, Article 20
            PMCID: PMC280691
            Publisher ID: 1472-6963-3-20
            PMID: 14596684
            DOI: 10.1186/1472-6963-3-20
            Copyright © 2003 Mor et al; licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.
            Categories
            Research Article

            Health & Social care
