
      Irrelevance of Reliability Coefficients to Accountability Systems: Statistical Disconnect in Kane-Staiger


          Abstract

          The body of this report consists of a fairly thorough effort to discredit the empirical assertions and methodological prescriptions of Kane and Staiger (KS). The four main sections of content that follow this (lengthy) Preamble are:

          Section 1: Accuracy of Group Summaries. Exact results are obtained for the accuracy of grade-level scores (for n = 68), which are then compared with the reliability-style calculations reported in KS for North Carolina data. Also, accuracy properties of California API school-level scores are presented, and, to compare with KS assertions, the reliability coefficients for these scores are calculated. KS find high volatility even when accuracy is very good, and KS find extreme absence of volatility even when accuracy is moderate to poor.

          Section 2: Accuracy of Improvement. Precision of improvement is contrasted with KS-style reliability of improvement. Analytic and empirical examples for accuracy of improvement reinforce the basic message: reliability is not precision. Most importantly, precision, which is what matters, can be low while reliability is still high, and vice versa. Also, school-level California API data display no relation between amount of improvement and uncertainty in the scores (Figures 2.1-2.3), refuting a key KS assertion about school size.

          Section 3: Persistence of Change. The KS correlation of consecutive changes--and thus the KS estimate of "proportion of variance in changes due to nonpersistent factors"--is shown to be a function of the reliability of the difference score. KS determinations of persistence of change are shown to be without value in accountability systems. Common-sense definitions of consistency of improvement and empirical demonstrations using artificial data are presented.

          Section 4: California Academic Performance Index Award Programs. Discussion of appropriate methods for describing the properties of award programs (e.g., determinations of false positives and false negatives) is contrasted with the incorrect empirical assertions and methodologies in KS. Counterexamples to each of the KS "Lessons" are presented in detail. The focus is on the effect of school size, to link with the accuracy results of previous sections.
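          The central claim of Sections 1 and 2, that reliability is not precision, can be sketched numerically. The following is an illustrative toy example, not a computation from the report: the variance figures, sample sizes, and function names are all hypothetical. Classical reliability is the ratio of true-score (between-school) variance to observed-score variance, while the precision of a school mean is its standard error, so one can look good while the other is poor.

```python
import math

def reliability(var_true, var_error):
    """Classical reliability coefficient: true-score variance / observed variance."""
    return var_true / (var_true + var_error)

def standard_error(sd_within, n):
    """Precision of a school mean computed from n student scores."""
    return sd_within / math.sqrt(n)

# Scenario A: schools differ widely (large between-school variance).
# The mean for a small school is imprecise, yet reliability looks high.
se_a = standard_error(sd_within=30.0, n=25)               # SE = 6.0 score points
rel_a = reliability(var_true=400.0, var_error=se_a ** 2)  # about 0.92

# Scenario B: schools are nearly alike (small between-school variance).
# The mean for a large school is precise, yet reliability looks low.
se_b = standard_error(sd_within=30.0, n=900)              # SE = 1.0 score point
rel_b = reliability(var_true=1.0, var_error=se_b ** 2)    # 0.50

print(f"A: SE = {se_a:.1f}, reliability = {rel_a:.2f}")
print(f"B: SE = {se_b:.1f}, reliability = {rel_b:.2f}")
```

          Because reliability depends on how much schools differ from one another, it says nothing by itself about the uncertainty attached to any one school's score, which is the quantity an accountability system actually needs.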

          Author and article information

          Journal
          Nonpartisan Education Review
          Nonpartisan Education Group
          01 April 2005
          Volume: 1
          Issue: 4
          Pages: 1-78

          This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

          Categories
          Education (General), L7-991

          Assessment, Evaluation & Research methods, Education & Public policy, Educational research & Statistics, Information & Library science, Linguistics & Semiotics, General education
          Rogosa, education, policy, Kane, Staiger, API, STAR, California state testing, test score volatility, reliability coefficients, accountability, assessment, school, AYP, accountability index
