
      Validation of educational assessments: a primer for simulation and beyond


          Abstract

          Background

          Simulation plays a vital role in health professions assessment. This review provides a primer on assessment validation for educators and education researchers. We focus on simulation-based assessment of health professionals, but the principles apply broadly to other assessment approaches and topics.

          Key principles

          Validation refers to the process of collecting validity evidence to evaluate the appropriateness of the interpretations, uses, and decisions based on assessment results. Contemporary frameworks view validity as a hypothesis, and validity evidence is collected to support or refute the validity hypothesis (i.e., that the proposed interpretations and decisions are defensible). In validation, the educator or researcher defines the proposed interpretations and decisions, identifies and prioritizes the most questionable assumptions in making these interpretations and decisions (the “interpretation-use argument”), empirically tests those assumptions using existing or newly collected evidence, and then summarizes the evidence as a coherent “validity argument.” A framework proposed by Messick identifies potential evidence sources: content, response process, internal structure, relationships with other variables, and consequences. Another framework proposed by Kane identifies key inferences in generating useful interpretations: scoring, generalization, extrapolation, and implications/decision. We propose an eight-step approach to validation that applies to either framework:

          1. Define the construct and proposed interpretation.
          2. Make explicit the intended decision(s).
          3. Define the interpretation-use argument and prioritize needed validity evidence.
          4. Identify candidate instruments and/or create/adapt a new instrument.
          5. Appraise existing evidence and collect new evidence as needed.
          6. Keep track of practical issues.
          7. Formulate the validity argument.
          8. Make a judgment: does the evidence support the intended use?
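The eight-step approach and the two evidence frameworks lend themselves to a simple progress-tracking structure. The sketch below is purely illustrative (the class and function names are our own, not from the article); it models the steps as ordered data and reports the next incomplete step:

```python
# Illustrative sketch: the abstract's eight validation steps and the
# Messick/Kane evidence frameworks as plain data, plus a small tracker
# for which steps a validation project has completed.
from dataclasses import dataclass, field
from typing import Optional, Set

MESSICK_EVIDENCE_SOURCES = [
    "content",
    "response process",
    "internal structure",
    "relationships with other variables",
    "consequences",
]

KANE_INFERENCES = [
    "scoring",
    "generalization",
    "extrapolation",
    "implications/decision",
]

EIGHT_STEPS = [
    "Define the construct and proposed interpretation",
    "Make explicit the intended decision(s)",
    "Define the interpretation-use argument and prioritize needed validity evidence",
    "Identify candidate instruments and/or create/adapt a new instrument",
    "Appraise existing evidence and collect new evidence as needed",
    "Keep track of practical issues",
    "Formulate the validity argument",
    "Make a judgment: does the evidence support the intended use?",
]

@dataclass
class ValidationPlan:
    """Tracks completion of the eight validation steps (1-indexed)."""
    completed: Set[int] = field(default_factory=set)

    def complete(self, step_number: int) -> None:
        """Mark one step done; step numbers run 1..8."""
        if not 1 <= step_number <= len(EIGHT_STEPS):
            raise ValueError(f"step number must be 1..{len(EIGHT_STEPS)}")
        self.completed.add(step_number)

    def next_step(self) -> Optional[str]:
        """Return the first incomplete step, or None when all are done."""
        for i, step in enumerate(EIGHT_STEPS, start=1):
            if i not in self.completed:
                return step
        return None
```

For example, after marking step 1 complete, `next_step()` returns the step-2 text, “Make explicit the intended decision(s)”.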

          Conclusions

          Rigorous validation first prioritizes and then empirically evaluates key assumptions in the interpretation and use of assessment scores. Validation science would be improved by more explicit articulation and prioritization of the interpretation-use argument, greater use of formal validation frameworks, and more evidence informing the consequences and implications of assessment.


                Author and article information

                Contributors
                507-266-4156, cook.david33@mayo.edu
                rhatala@mac.com
                Journal
                Adv Simul (Lond): Advances in Simulation
                Publisher: BioMed Central (London)
                ISSN: 2059-0628
                Published: 7 December 2016
                Volume 1, Article 31
                Affiliations
                [1] Mayo Clinic Online Learning, Mayo Clinic College of Medicine, Rochester, MN, USA
                [2] Office of Applied Scholarship and Education Science, Mayo Clinic College of Medicine, Rochester, MN, USA
                [3] Division of General Internal Medicine, Mayo Clinic College of Medicine, Mayo 17-W, 200 First Street SW, Rochester, MN 55905, USA
                [4] Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
                Article
                DOI: 10.1186/s41077-016-0033-y
                PMC: 5806296
                PMID: 29450000
                © The Author(s) 2016

                Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

                History
                Received: 20 July 2016
                Accepted: 16 November 2016
                Categories
                Methodology Article

                Keywords: lumbar puncture, validity evidence, validity argument, validation framework, content evidence
