
Open Access

      Statistical Power in Content Analysis Designs: How Effect Size, Sample Size and Coding Accuracy Jointly Affect Hypothesis Testing – A Monte Carlo Simulation Approach.

      research-article


          Abstract

          This study uses Monte Carlo simulation techniques to estimate the minimum required levels of intercoder reliability in content analysis data for testing correlational hypotheses, depending on sample size, effect size and coder behavior under uncertainty. The ensuing procedure is analogous to power calculations for experimental designs. In most widespread sample size/effect size settings, the rule of thumb that chance-adjusted agreement should be ≥.800 or ≥.667 corresponds to the simulation results, yielding acceptable α and β error rates. However, this simulation approach allows precise power calculations that consider the specifics of each study’s context, moving beyond one-size-fits-all recommendations. Studies with low sample sizes and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power. In studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help both in evaluating and in designing studies. Particularly in pre-registered research, higher sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g., when constructs are hard to measure). I supply equations, easy-to-use tables and R functions to facilitate use of this framework, along with example code as an online appendix.
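          The abstract's core idea — estimating statistical power by simulating samples in which the coded variable is degraded by imperfect coding — can be illustrated with a minimal sketch. Note this is an illustrative reconstruction in Python, not the article's own R functions: the function name, parameters, and error model (a coder who effectively guesses with probability 1 − accuracy) are assumptions for the example.

```python
import math
import random

def simulate_power(n=200, effect_r=0.3, coder_accuracy=0.9,
                   alpha=0.05, reps=2000, seed=1):
    """Monte Carlo power estimate for a correlational hypothesis test
    on content-analysis data with imperfect coding.

    With probability (1 - coder_accuracy), a unit's true score is
    replaced by an unrelated random draw, modelling a coder who
    effectively guesses under uncertainty (an assumed error model).
    """
    rng = random.Random(seed)
    # Normal approximation to the two-sided t critical value at alpha = .05;
    # adequate when n is well above ~30.
    z_crit = 1.959964
    rejections = 0
    for _ in range(reps):
        xs, ys = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            # True score correlated with x at the expected effect size.
            y = effect_r * x + math.sqrt(1 - effect_r ** 2) * rng.gauss(0, 1)
            if rng.random() > coder_accuracy:   # coding error occurs
                y = rng.gauss(0, 1)             # coder records an unrelated score
            xs.append(x)
            ys.append(y)
        # Sample Pearson correlation between x and the coded scores.
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        r = sxy / math.sqrt(sxx * syy)
        # t statistic for H0: rho = 0.
        t = r * math.sqrt((n - 2) / (1 - r ** 2))
        if abs(t) > z_crit:
            rejections += 1
    return rejections / reps
```

Lowering `coder_accuracy` attenuates the observed correlation and thus the estimated power, which is the mechanism behind the abstract's point that low reliability can be offset by larger samples or larger expected effects.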


                Author and article information

                Journal: Computational Communication Research (CCR)
                Publisher: Amsterdam University Press
                ISSN: 2665-9085
                Published: 01 March 2021
                Volume 3, Issue 1, pp. 61-89
                Article ID: CCR2021.1.003.GEIS
                DOI: 10.5117/CCR2021.1.003.GEIS
                © 2021 Amsterdam University Press

                This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

                Pages: 29
                Category: Original Articles

                Keywords: Power analysis, Hypothesis testing, Content analysis, Monte Carlo simulation, Intercoder agreement, Intercoder reliability, Effect size, Sample size
