      • Record: found
      • Abstract: not found
      • Book Chapter: not found
Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption: 17th European Conference on Technology Enhanced Learning, EC-TEL 2022, Toulouse, France, September 12–16, 2022, Proceedings

      Assessing the Quality of Student-Generated Short Answer Questions Using GPT-3

There is no author summary for this book yet.

Most cited references (27)

          • Record: found
          • Abstract: found
          • Article: found
          Is Open Access

          Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
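The sketch below illustrates the two statistics the abstract compares; it is not code from the chapter or the cited article, and the rating data are invented. It computes percent agreement and Cohen's kappa as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance.

```python
# Illustrative sketch only: percent agreement and Cohen's kappa
# for two raters scoring the same items. The data below are invented.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    # Fraction of items on which both raters assign the same score.
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_e is the chance agreement
    # implied by each rater's marginal category frequencies.
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

ratings_1 = ["good", "good", "poor", "good", "poor", "poor"]
ratings_2 = ["good", "poor", "poor", "good", "poor", "good"]
print(percent_agreement(ratings_1, ratings_2))  # 0.666...
print(cohens_kappa(ratings_1, ratings_2))       # 0.333..., lower once chance is removed
```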
            • Record: found
            • Abstract: not found
            • Article: not found

            A Revision of Bloom's Taxonomy: An Overview

              • Record: found
              • Abstract: not found
              • Article: not found

              Student-generated questions: A meaningful aspect of learning in science

                Author and book information

Publication type: Book Chapter
Publication year: 2022
Publication date: September 05 2022
Pages: 243-257
DOI: 10.1007/978-3-031-16290-9_18
Record ID: 6b578ccc-b8f6-4d40-9b91-37f2a1987bc8


Cited by: 1