
      Global prevalence and factors associated with workplace violence against nursing students: A systematic review, meta-analysis, and meta-regression

      Aggression and Violent Behavior
      Elsevier BV



Most cited references (84)


          Bias in meta-analysis detected by a simple, graphical test


            Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While a variety of methods exist to measure interrater reliability, it was traditionally measured as percent agreement: the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa is acceptable for health research remain contested: Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels of both that should be demanded in healthcare studies are suggested.
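The two statistics contrasted in this abstract are straightforward to compute. Below is a minimal Python sketch of percent agreement and Cohen's kappa for two raters; the rater data and category labels are hypothetical, invented for illustration, and not drawn from the article.

from collections import Counter

def percent_agreement(rater_a, rater_b):
    # Traditional measure: number of agreement scores / total number of scores.
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa corrects observed agreement (p_o) for the agreement
    # expected by chance (p_e), estimated from each rater's marginal
    # category frequencies. Kappa ranges from -1 to +1.
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c]
              for c in set(counts_a) | set(counts_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters coding 10 incidents as violent ("v") or not ("n").
a = ["v", "v", "n", "v", "n", "n", "v", "v", "n", "v"]
b = ["v", "n", "n", "v", "n", "v", "v", "v", "n", "v"]
print(percent_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))       # ~0.583

Note how kappa (about 0.58) falls well below the raw 80% agreement once chance agreement is subtracted, which is exactly the correction Cohen argued for.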

              The PRISMA 2020 statement: An updated guideline for reporting systematic reviews

              Matthew Page and co-authors describe PRISMA 2020, an updated reporting guideline for systematic reviews and meta-analyses.

                Author and article information

Journal: Aggression and Violent Behavior
Publisher: Elsevier BV
ISSN: 1359-1789
Publication date: March 2024
Volume: 75
Article number: 101907
DOI: 10.1016/j.avb.2023.101907
© 2024

License:
https://www.elsevier.com/tdm/userlicense/1.0/
https://doi.org/10.15223/policy-017
https://doi.org/10.15223/policy-037
https://doi.org/10.15223/policy-012
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-004
