
      Pilot study: The effectiveness of physiotherapy-led screening for patients requiring an orthopedic intervention



          LAY SUMMARY

In Canada, patients can wait over a year to be seen by an orthopedic surgeon. To reduce wait times, physiotherapists have been employed in some practice areas to triage patients before they are seen by an orthopedic surgeon. This study examined a different form of triage, in which physiotherapists screened electronic medical records (EMRs) to determine whether patients needed orthopedic intervention or conservative management. To guide the physiotherapists, a screening tool was created. The study compared the recommendations of the physiotherapists with those of an orthopedic surgeon. The results showed that, most of the time, physiotherapists using the screening tool correctly identified whether a patient needed to see an orthopedic surgeon or could be treated with physiotherapy. This type of screening process may decrease wait times to see an orthopedic surgeon and improve access to physiotherapy or other treatments.

          Abstract

Introduction: In Canada, wait times for orthopedic surgery represent a significant delay in care for patients with musculoskeletal disorders. To improve access, new models of care in which physiotherapists diagnose, triage, and/or conservatively manage patients with musculoskeletal disorders are being implemented. The purpose of this study was to assess the effectiveness of physiotherapy-led screening of electronic medical records (EMRs) using a locally developed screening tool to identify whether patients required orthopedic intervention or conservative management. Methods: The EMRs of 41 patients, referred to orthopedic surgery for any musculoskeletal disorder in an outpatient orthopedic clinic within a military primary health care centre in Halifax, Canada, were independently screened by two randomly assigned physiotherapists. The corresponding patients were subsequently seen by one orthopedic surgeon. The physiotherapists screened the EMRs using a screening tool and provided triage recommendations (orthopedic intervention, physiotherapy, physiatry, diagnostic investigations, or other intervention). Percentage of agreement and Fleiss’ kappa were calculated to assess inter-rater agreement, and validity was determined by cross-tabulation. Results: The percentage of agreement for triage recommendations among physiotherapists was 78%, and inter-rater agreement was moderate (κ = 0.617; 95% CI, 0.365–0.868; p < 0.001). Excluding recommendations for diagnostic investigations increased the percentage of agreement to 93.9% and resulted in a strong level of inter-rater agreement (κ = 0.878; 95% CI, 0.537–1.219). The screening tool was determined to have 64.0% sensitivity, 87.5% specificity, a positive predictive value of 88.9%, and a negative predictive value of 63.2%. Discussion: EMR screening may have a role in identifying patients who require orthopedic intervention; however, more research is needed.
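As a rough illustration of the analyses named in the Methods, the sketch below derives percent agreement, sensitivity, specificity, and predictive values from a 2 × 2 cross-tabulation of screening recommendation against the surgeon's decision. The counts are hypothetical placeholders chosen for illustration; they are not the study's data and do not reproduce its reported values.

```python
# Sketch: validity metrics from a 2x2 cross-tabulation of physiotherapist
# screening recommendation vs. surgeon decision.
# The counts below are hypothetical, NOT the study's data.

tp = 30  # screen recommended orthopedic intervention; surgeon intervened
fp = 5   # screen recommended orthopedic intervention; surgeon managed conservatively
fn = 10  # screen recommended conservative management; surgeon intervened
tn = 55  # screen recommended conservative management; surgeon agreed

sensitivity = tp / (tp + fn)   # surgical cases correctly flagged by screening
specificity = tn / (tn + fp)   # non-surgical cases correctly screened out
ppv = tp / (tp + fp)           # chance surgery is needed given a positive screen
npv = tn / (tn + fn)           # chance surgery is not needed given a negative screen
percent_agreement = (tp + tn) / (tp + fp + fn + tn)

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, NPV={npv:.1%}, agreement={percent_agreement:.1%}")
```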


Most cited references (31)


          Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen’s kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from −1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen’s suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
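To make the chance correction concrete, here is a minimal sketch: two raters classify a handful of hypothetical cases, and Cohen's kappa is computed alongside raw percent agreement. The ratings are invented for illustration only and do not come from the article or the studies it cites.

```python
# Sketch: percent agreement vs. Cohen's kappa for two raters.
# Hypothetical ratings of 10 cases into two categories; illustrative only.

rater_a = ["surgical", "surgical", "conservative", "surgical", "conservative",
           "conservative", "surgical", "surgical", "conservative", "surgical"]
rater_b = ["surgical", "conservative", "conservative", "surgical", "conservative",
           "surgical", "surgical", "surgical", "conservative", "surgical"]

categories = sorted(set(rater_a) | set(rater_b))
n = len(rater_a)

# Observed agreement: share of cases where the raters assign the same category.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal proportions.
p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)

kappa = (p_o - p_e) / (1 - p_e)
print(f"percent agreement = {p_o:.2f}, chance agreement = {p_e:.2f}, kappa = {kappa:.2f}")
```

With these invented ratings the raters agree on 8 of 10 cases (80%), yet kappa is only about 0.58 once chance agreement is subtracted, which is the gap the abstract above is describing.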

            Construct validity in psychological tests.


              Sensitivity, Specificity, and Predictive Values: Foundations, Pliabilities, and Pitfalls in Research and Practice

              Within the context of screening tests, it is important to avoid misconceptions about sensitivity, specificity, and predictive values. In this article, therefore, foundations are first established concerning these metrics along with the first of several aspects of pliability that should be recognized in relation to those metrics. Clarification is then provided about the definitions of sensitivity, specificity, and predictive values and why researchers and clinicians can misunderstand and misrepresent them. Arguments are made that sensitivity and specificity should usually be applied only in the context of describing a screening test’s attributes relative to a reference standard; that predictive values are more appropriate and informative in actual screening contexts, but that sensitivity and specificity can be used for screening decisions about individual people if they are extremely high; that predictive values need not always be high and might be used to advantage by adjusting the sensitivity and specificity of screening tests; that, in screening contexts, researchers should provide information about all four metrics and how they were derived; and that, where necessary, consumers of health research should have the skills to interpret those metrics effectively for maximum benefit to clients and the healthcare system.
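One way to see the argument that predictive values behave differently from sensitivity and specificity is to hold the latter fixed and vary prevalence. The sketch below does this with assumed values for sensitivity, specificity, and prevalence; none of these figures come from the article.

```python
# Sketch: how predictive values shift with prevalence while sensitivity and
# specificity stay fixed. All numbers are illustrative assumptions.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a screening test via Bayes' rule."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

for prevalence in (0.05, 0.25, 0.60):
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.85, prevalence=prevalence)
    print(f"prevalence={prevalence:.0%}: PPV={ppv:.1%}, NPV={npv:.1%}")
```

Even with sensitivity and specificity unchanged, PPV rises sharply as the condition becomes more common in the screened population while NPV falls, which is why the abstract argues that predictive values are the more informative metrics in actual screening contexts.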

                Author and article information

Journal
Journal of Military, Veteran and Family Health
University of Toronto Press Inc. (UTPress)
ISSN: 2368-7924
May 01 2021
Volume 7, Issue 2: 3-15

Affiliations
[1] Canadian Forces Health Services, Department of National Defence, Halifax, Nova Scotia, Canada
[2] School of Nursing, Queen’s University, Kingston, Ontario, Canada
[3] Department of Surgery, Dalhousie University, Halifax, Nova Scotia, Canada

Article
DOI: 10.3138/jmvfh-2020-0060
© 2021
