
      Validity evidence for programmatic assessment in competency-based education


          Abstract

          Introduction

          Competency-based education (CBE) is now pervasive in health professions education. A foundational principle of CBE is to assess and identify the progression of competency development in students over time. It has been argued that a programmatic approach to assessment in CBE maximizes student learning. The aim of this study is to investigate whether programmatic assessment, i.e., a system of assessment, can be used within a CBE framework to track the progression of student learning within and across competencies over time.

          Methods

          Three workplace-based assessment methods were used to measure the same seven competency domains. We performed a retrospective quantitative analysis of 327,974 assessment data points from 16,575 completed assessment forms from 962 students over 124 weeks, using both descriptive (visualization) and inferential (modelling) analyses, the latter including multilevel random coefficient modelling and generalizability theory.
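
          The abstract does not give the model specification. As a hedged illustration of the kind of multilevel random coefficient modelling described, the Python sketch below fits a random intercept and a random slope for time per student and reports the between-student share of variance. The dataframe layout, column names, and file name are assumptions made for illustration, not details taken from the paper.

          # Minimal sketch (not the authors' code): a multilevel random
          # coefficient model on long-format assessment scores.
          import pandas as pd
          import statsmodels.formula.api as smf

          # Hypothetical long-format data: one row per assessment data point,
          # with columns score (numeric rating), week (programme week),
          # student (identifier).
          df = pd.read_csv("assessments_long.csv")  # assumed file name

          # Random intercept and random slope for week, grouped by student.
          model = smf.mixedlm("score ~ week", df,
                              groups=df["student"], re_formula="~week")
          fit = model.fit(reml=True)
          print(fit.summary())

          # Between-student (random intercept) variance relative to the
          # intercept-plus-residual variance: a rough analogue of the
          # student-related variance component reported in studies like this.
          between = fit.cov_re.iloc[0, 0]  # random-intercept variance
          residual = fit.scale             # residual (within-student) variance
          print(f"Between-student variance share: {between / (between + residual):.0%}")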

          Results

          Random coefficient modelling indicated that the variance attributable to differences in performance between students was the largest component (40%). The reliability coefficients of scores from the assessment methods ranged from 0.86 to 0.90. Method and competency variance components were in the small-to-moderate range.
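
          For readers unfamiliar with generalizability theory: reliability coefficients such as those reported here are typically computed from estimated variance components. The sketch below uses made-up numbers, not this study's estimates, to show the mechanics.

          # Generalizability coefficient for a mean score over n observations:
          #   E(rho^2) = var_student / (var_student + var_error / n)
          # The variance values below are illustrative only.
          def g_coefficient(var_student: float, var_error: float, n_obs: int) -> float:
              """Generalizability coefficient for the mean of n_obs observations."""
              return var_student / (var_student + var_error / n_obs)

          # If 40% of single-observation variance is student-related, averaging
          # over ten observations already pushes reliability toward the
          # reported 0.86-0.90 range.
          print(round(g_coefficient(var_student=0.40, var_error=0.60, n_obs=10), 2))  # 0.87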

          Discussion

          The current validity evidence provides cause for optimism regarding the explicit development and implementation of a program of assessment within CBE. The majority of the variance in scores appears to be student-related and reliably measured, supporting the psychometric quality of the scores and their use for both formative and summative purposes.


                Author and article information

                Contributors
                Corresponding author: G.J.Bok@uu.nl

                Journal
                Perspectives on Medical Education (Perspect Med Educ)
                Bohn Stafleu van Loghum, Houten
                ISSN 2212-2761 (print); 2212-277X (electronic)
                Published online: 14 November 2018
                Issue: December 2018, Volume 7, Issue 6, pp. 362-372

                Affiliations
                [1] Centre for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
                [2] Department of Psychology, University of Calgary, Calgary, Canada
                [3] Veterinary Clinical and Diagnostic Sciences, Faculty of Veterinary Medicine, University of Calgary, Calgary, Canada

                Article
                DOI: 10.1007/s40037-018-0481-2
                PMCID: PMC6283777
                PMID: 30430439
                © The Author(s) 2018

                Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

                Categories
                Original Article

                Keywords
                Education; outcome-based education; competency development; programmatic assessment; learning curves; performance-relevant information
