
The Hidden Value of Narrative Comments for Assessment: A Quantitative Reliability Analysis of Qualitative Data


          Abstract

          In-training evaluation reports (ITERs) are ubiquitous in internal medicine (IM) residency. Written comments can provide a rich data source, yet are often overlooked. This study determined the reliability of using variable amounts of commentary to discriminate between residents.

Most cited references (25)


          Assessing professional competence: from methods to programmes.

          We use a utility model to illustrate that, firstly, selecting an assessment method involves context-dependent compromises, and secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are differently weighted depending on the purpose and context of the assessment. Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence. When assessment should stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to make a shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective. Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but provide evidence of the utility of the assessment programme as a whole.

            Competency-based medical education in postgraduate medical education.

            With the introduction of Tomorrow's Doctors in 1993, medical education began the transition from a time- and process-based system to a competency-based training framework. Implementing competency-based training in postgraduate medical education poses many challenges but ultimately requires a demonstration that the learner is truly competent to progress in training or to the next phase of a professional career. Making this transition requires change at virtually all levels of postgraduate training. Key components of this change include the development of valid and reliable assessment tools such as work-based assessment using direct observation, frequent formative feedback, and learner self-directed assessment; active involvement of the learner in the educational process; and intensive faculty development that addresses curricular design and the assessment of competency.

              Assessment in the post-psychometric era: learning to love the subjective and collective.

              Since the 1970s, assessment of competence in the health professions has been dominated by a discourse of psychometrics that emphasizes the conversion of human behaviors to numbers and prioritizes high-stakes, point-in-time sampling, and standardization. There are many advantages to this approach, including increased fairness to test takers; however, some limitations of overemphasis on this paradigm are evident. Further, two shifts are underway that have significant consequences for assessment. First, as clinical practice becomes more interprofessional and team-based, the locus of competence is shifting from individuals to teams. Second, expensive, high-stakes final examinations are not well suited for longitudinal assessment in workplaces. The result is a need to consider assessment methods that are subjective and collective.

Author and article information

Journal: Academic Medicine
Publisher: Ovid Technologies (Wolters Kluwer Health)
ISSN: 1040-2446
Date: November 2017
Volume 92, Issue 11, pages 1617-1621
DOI: 10.1097/ACM.0000000000001669
PMID: 28403004
© 2017
