
      The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability


          Abstract

The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider measurement error (ME) indices, such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD), over relative reliability coefficients such as Pearson's r and the Intraclass Correlation Coefficient (ICC), both when selecting tools to measure change and when inferring that a measured change is true. The authors present the statistical methods that make up the current approach to evaluating the test–retest reliability of assessment tools and outcome measures. Selected examples from a previous test–retest study are used to illustrate the added advantage that knowledge of a tool's ME offers in clinical decision making. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that the tool can measure.
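To make the CR concrete, here is a minimal sketch of how it can be computed from paired test–retest scores, assuming the common Bland–Altman formulation (CR = 1.96 × the standard deviation of the paired differences); the function name and the data are hypothetical, not taken from the paper.

```python
import numpy as np

def repeatability_coefficient(test, retest):
    """Coefficient of Repeatability (CR) from paired test-retest scores.

    Uses the Bland-Altman formulation CR = 1.96 * SD of the paired
    differences: 95% of repeat measurements on stable subjects are
    expected to differ by less than this value.
    """
    diffs = np.asarray(retest, dtype=float) - np.asarray(test, dtype=float)
    return 1.96 * diffs.std(ddof=1)

# Hypothetical scores from two administrations of a 0-100 tool.
test = [42, 55, 61, 38, 70, 49, 58, 64]
retest = [45, 53, 63, 40, 68, 50, 60, 61]
print(f"CR = {repeatability_coefficient(test, retest):.1f} points")
```

Because the CR is in the tool's own units, an observed change smaller than the CR cannot be distinguished from measurement noise at the 95% level.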

Most cited references


          Measuring agreement in method comparison studies.

          Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference +/- 1.96 standard deviation of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods.
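The basic limits-of-agreement calculation described above (mean difference ± 1.96 SD of the differences, without transformation or replication) can be sketched in a few lines; the paired readings below are hypothetical.

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement for paired measurements.

    Returns (bias, lower, upper), where bias is the mean difference and
    the limits are bias +/- 1.96 * SD of the differences: the interval
    expected to contain 95% of between-method differences.
    """
    d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired readings from two methods on the same subjects.
a = [102, 98, 110, 95, 105, 99, 101, 108]
b = [100, 97, 112, 96, 103, 101, 99, 110]
bias, low, high = limits_of_agreement(a, b)
print(f"bias = {bias:.2f}, 95% LoA = [{low:.2f}, {high:.2f}]")
```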

            Smallest real difference, a link between reproducibility and responsiveness.

The aim of this study is to show the relationship between test-retest reproducibility and responsiveness and to introduce the smallest real difference (SRD) approach, using the sickness impact profile (SIP) in chronic stroke patients as an example. Forty chronic stroke patients were interviewed twice by the same examiner, with a 1-week interval. All patients were interviewed during the qualification period preceding a randomized clinical trial. Test-retest reproducibility was quantified by the intraclass correlation coefficient (ICC), the standard error of measurement (SEM) and the related smallest real difference (SRD). Responsiveness was defined as the ratio of the clinically relevant change to the SD of the within-stable-subject test-retest differences. The ICC for the total SIP was 0.92, whereas the ICCs for the specified SIP categories varied from 0.63 for the category 'recreation and pastime' to 0.88 for the category 'work'. However, both the SEM and the SRD capture far more of the essence of the reproducibility of a measurement instrument. For instance, a total SIP score for an individual patient of 28.3% (taken as an example because it is the mean score in the study population) should decrease by at least 9.26%, or approximately 13 items, before any improvement beyond reproducibility noise can be detected. The responsiveness to change of a health status measurement instrument is closely related to its test-retest reproducibility. This relationship becomes more evident when the SEM and the SRD are used to quantify reproducibility than when the ICC or other correlation coefficients are used.
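The SEM-to-SRD relationship discussed here is commonly expressed as SEM = SD·√(1 − ICC) and SRD = 1.96·√2·SEM; the sketch below assumes those standard formulas and uses hypothetical inputs rather than the study's SIP data.

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def smallest_real_difference(sem):
    """SRD = 1.96 * sqrt(2) * SEM.

    The sqrt(2) enters because a change score carries measurement error
    from both the test occasion and the retest occasion.
    """
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical inputs (not the cited study's data): score SD and ICC.
sd, icc = 12.0, 0.92
sem = sem_from_icc(sd, icc)
print(f"SEM = {sem:.2f}, SRD = {smallest_real_difference(sem):.2f}")
```

Note how a high ICC still leaves a non-trivial SRD: the threshold a change must exceed depends on the score SD as well as the ICC.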

              How to assess the reliability of measurements in rehabilitation.

              To evaluate the effects of rehabilitation interventions, we need reliable measurements. The measurements should also be sufficiently sensitive to enable the detection of clinically important changes. In recent years, the assessment of reliability in clinical practice and medical research has developed from the use of correlation coefficients to a comprehensive set of statistical methods. In this review, we present methods that can be used to assess reliability and describe how data from reliability analyses can aid the interpretation of results from rehabilitation interventions.

                Author and article information

                Contributors
                Role: Editor
Journal
PLoS ONE (Public Library of Science, San Francisco, USA)
ISSN: 1932-6203
Published: 9 September 2013
Volume 8, Issue 9: e73990
                Affiliations
                [1 ]School of Occupational Therapy and Social Work, Centre for Research into Disability and Society, Curtin University, Perth, Western Australia, Australia
                [2 ]School of Occupational Therapy and Social Work, Curtin Health Innovation Research Institute, Curtin University, Perth, Western Australia, Australia
[3 ]School of Occupational Therapy, La Trobe University, Melbourne, Victoria, Australia
                [4 ]Rehabilitation Medicine, Department of Medicine and Health Sciences (IMH), Faculty of Health Sciences, Linköping University & Pain and Rehabilitation Centre, UHL, County Council, Linköping, Sweden
                [5 ]Department of Community Health and Epidemiology, Dalhousie University, Halifax, Nova Scotia, Canada
                RAND Corporation, United States of America
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                Conceived and designed the experiments: SV RP AEP. Performed the experiments: SV. Analyzed the data: SV RP PA. Contributed reagents/materials/analysis tools: SV AEP. Wrote the manuscript: SV RP TF AEP PA. Critically reviewed submission: TF RP.

Article
Manuscript ID: PONE-D-13-13350
DOI: 10.1371/journal.pone.0073990
PMCID: PMC3767825
PMID: 24040139
Copyright © 2013

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

History
Received: 29 March 2013
Accepted: 24 July 2013
                Funding
                This project was funded by the first author's Doctoral scholarship provided by the Centre for Research into Disability and Society and the School of Occupational Therapy and Social Work, Curtin University, Perth, Australia. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
                Categories
                Research Article

