
      A mechanical rotation chair provides superior diagnostics of benign paroxysmal positional vertigo


          Abstract

          Background

          Benign paroxysmal positional vertigo (BPPV) is the most common vestibular disease. Both therapeutic and diagnostic benefits of mechanical rotation chairs (MRCs) in the management of BPPV have been reported, but no previous study has compared diagnostics performed in an MRC with traditional diagnostics on an examination bed.

          Objective

          To investigate the agreement between BPPV diagnostics performed with an MRC and traditional diagnostics on an examination bed. Secondary objectives were (1) to examine whether the two test modalities differ in diagnostic properties when diagnosing largely untreated patients referred from general practitioners (uncomplicated BPPV) compared with patients referred from private ENT specialists (complicated BPPV), and (2) to examine whether impaired participant cooperation during manual diagnostics (MDs) alters agreement, sensitivity, and specificity.

          Method

          Prospective randomized clinical trial in which patients with a case history of BPPV were recruited by referrals from general practitioners, otorhinolaryngologists and other hospital departments in the Northern Region of Denmark. Participants underwent diagnostic examinations twice: once by traditional MDs on an examination bed and once with an MRC. Initial examiner and order of test modality were randomized. Examiners were blinded to each other's findings.

          Results

          When testing the ability to diagnose BPPV, agreement between the two test modalities was 0.83 (Cohen's kappa 0.66). When comparing MD diagnostics with MRC diagnostics (the latter set as the gold standard following test-result interpretation), values for MDs were: sensitivity 71%, specificity 98%, negative predictive value (NPV) 73%, and positive predictive value (PPV) 97%. Agreement on BPPV subtype classification was 0.71 (Cohen's kappa 0.58). Agreement when isolating the diagnosis to posterior canalolithiasis (p-CAN) was 0.89 (Cohen's kappa 0.78).
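
          All of the figures above derive from a 2×2 contingency table of MD outcomes against MRC outcomes. As a rough sketch of how such metrics are computed, the counts below are hypothetical (not the study's actual data) and were chosen only to approximately reproduce the reported values:

```python
# Agreement and diagnostic metrics from a 2x2 contingency table.
# Rows = manual diagnostics (MD), columns = MRC (treated as gold standard).
# The counts passed in below are illustrative, NOT the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return observed agreement, Cohen's kappa, sensitivity,
    specificity, PPV, and NPV for a 2x2 contingency table."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed (raw) agreement
    # Expected chance agreement, computed from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return {
        "agreement": po,
        "kappa": (po - pe) / (1 - pe),  # Cohen's kappa
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts tuned to roughly match the reported results
m = diagnostic_metrics(tp=50, fp=1, fn=20, tn=54)
for name, value in m.items():
    print(f"{name}: {value:.2f}")
```

          Note that kappa corrects raw agreement for agreement expected by chance, which is why a raw agreement of 0.83 can coexist with a kappa of only 0.66.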

          Conclusion

          MRC-aided diagnostics are more sensitive than traditional manual BPPV diagnostics. The overall agreement between test modalities was weak to moderate; when diagnostics were isolated to p-CAN, agreement increased to moderate-strong. Results also showed higher agreement between test modalities, and a significantly higher negative predictive value for MDs, when examining patients referred directly from general practitioners after no or only a single treatment attempt. The diagnostic properties of MDs improved in patients with a higher degree of cooperation.


                Author and article information

                Journal: Frontiers in Neurology (Front. Neurol.), Frontiers Media S.A., ISSN 1664-2295
                Published: 27 January 2023; Volume 14, Article 1040701

                Affiliations
                1. Department of Otorhinolaryngology, Head and Neck Surgery and Audiology, Balance and Dizziness Centre, Aalborg University Hospital, Aalborg, Denmark
                2. Department of Clinical Medicine, Aalborg University, Aalborg, Denmark

                Edited by: Tjasse Bruintjes, Gelre Hospitals, Netherlands

                Reviewed by: Erika Celis-Aguilar, Autonomous University of Sinaloa, Mexico; Juan M. Espinosa-Sanchez, Hospital Universitario Virgen de las Nieves, Spain; Nils Guinand, Hôpitaux Universitaires de Genève (HUG), Switzerland

                *Correspondence: Mathias Winther Bech, mathias.b@rn.dk

                This article was submitted to Neuro-Otology, a section of the journal Frontiers in Neurology.

                DOI: 10.3389/fneur.2023.1040701
                PMCID: 9911680
                PMID: 36779048

                Copyright © 2023 Bech, Staffe and Hougaard.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History: received 09 September 2022; accepted 09 January 2023

                Page count: Figures: 4, Tables: 6, Equations: 0, References: 33, Pages: 11, Words: 8243

                Categories: Neurology; Original Research

                Keywords: vertigo, benign paroxysmal positional vertigo, mechanical rotation chair, repositioning chair, TRV chair, BPPV, diagnostics
