
      The sights and insights of examiners in objective structured clinical examinations



          Abstract

          Purpose

          The objective structured clinical examination (OSCE) is considered to be one of the most robust methods of clinical assessment. One of its strengths lies in its ability to minimise the effects of examiner bias due to the standardisation of items and tasks for each candidate. However, OSCE examiners’ assessment scores are influenced by several factors that may jeopardise the assumed objectivity of OSCEs. To better understand this phenomenon, the current review aims to determine and describe important sources of examiner bias and the factors affecting examiners’ assessments.

          Methods

          We performed a narrative review of the medical literature using Medline. All articles meeting the selection criteria were reviewed, with salient points extracted and synthesised into a clear and comprehensive summary of the knowledge in this area.

          Results

          OSCE examiners’ assessment scores are influenced by factors belonging to 4 different domains: examination context, examinee characteristics, examinee-examiner interactions, and examiner characteristics. These domains are composed of several factors including halo, hawk/dove and OSCE contrast effects; the examiner’s gender and ethnicity; training; lifetime experience in assessing; leadership and familiarity with students; station type; and site effects.

          Conclusion

          Several factors may influence the presumed objectivity of examiners’ assessments, and these factors need to be addressed to ensure the objectivity of OSCEs. We offer insights into directions for future research to better understand and address the phenomenon of examiner bias.

          Related collections

          Most cited references (49)


          Seeing the 'black box' differently: assessor cognition from three research perspectives.

          Performance assessments, such as workplace-based assessments (WBAs), represent a crucial component of assessment strategy in medical education. Persistent concerns about rater variability in performance assessments have resulted in a new field of study focusing on the cognitive processes used by raters, or more inclusively, by assessors.

            Broadening perspectives on clinical performance assessment: rethinking the nature of in-training assessment.

            In-training assessment (ITA), defined as multiple assessments of performance in the setting of day-to-day practice, is an invaluable tool in assessment programmes which aim to assess professional competence in a comprehensive and valid way. Research on clinical performance ratings, however, consistently shows weaknesses concerning accuracy, reliability and validity. Attempts to improve the psychometric characteristics of ITA focusing on standardisation and objectivity of measurement thus far result in limited improvement of ITA-practices. The aim of the paper is to demonstrate that the psychometric framework may limit more meaningful educational approaches to performance assessment, because it does not take into account key issues in the mechanics of the assessment process. Based on insights from other disciplines, we propose an approach to ITA that takes a constructivist, social-psychological perspective and integrates elements of theories of cognition, motivation and decision making. A central assumption in the proposed framework is that performance assessment is a judgment and decision making process, in which rating outcomes are influenced by interactions between individuals and the social context in which assessment occurs. The issues raised in the article and the proposed assessment framework bring forward a number of implications for current performance assessment practice. It is argued that focusing on the context of performance assessment may be more effective in improving ITA practices than focusing strictly on raters and rating instruments. Furthermore, the constructivist approach towards assessment has important implications for assessment procedures as well as the evaluation of assessment quality. Finally, it is argued that further research into performance assessment should contribute towards a better understanding of the factors that influence rating outcomes, such as rater motivation, assessment procedures and other contextual variables.

              General overview of the theories used in assessment: AMEE Guide No. 57.

              There are no scientific theories that are uniquely related to assessment in medical education. There are many theories in adjacent fields, however, that can be informative for assessment in medical education, and in recent decades they have proven their value. In this AMEE Guide we discuss theories on expertise development and psychometric theories, and the relatively young and emerging framework of assessment for learning. Expertise theories highlight the multistage processes involved. The transition from novice to expert is characterised by an increase in the aggregation of concepts from isolated facts, through semantic networks, to illness scripts and instance scripts. The latter two stages enable the expert to recognise the problem quickly and form a quick and accurate representation of the problem in his/her working memory. A striking difference between experts and novices is not per se the possession of more explicit knowledge, but the superior organisation of knowledge in the expert's brain and its pairing with multiple real experiences, enabling not only better but also more efficient problem solving. Psychometric theories focus on the validity of the assessment (does it measure what it purports to measure?) and its reliability (are the outcomes of the assessment reproducible?). Validity is currently seen as building a chain of arguments about how observations of behaviour (answering a multiple-choice question is also a behaviour) can best be translated into scores, and how these can in turn be used to make inferences about the construct of interest. Reliability theories can be categorised into classical test theory, generalisability theory and item response theory. All three approaches have specific advantages and disadvantages and different areas of application. Finally, the Guide discusses the phenomenon of assessment for learning, as opposed to assessment of learning, and its implications for current and future development and research.

                Author and article information

                Contributors
                Role: Editor
                Journal
                J Educ Eval Health Prof
                J Educ Eval Health Prof
                JEEHP
                Journal of Educational Evaluation for Health Professions
                Korea Health Personnel Licensing Examination Institute
                1975-5937
                2017
                27 December 2017
                Volume: 14
                Article: 34
                Affiliations
                [1 ]Clinical Skills Teaching Unit, Prince of Wales Hospital, Sydney, Australia
                [2 ]Office of Medical Education, University of New South Wales, Sydney, Australia
                [3 ]University of New South Wales, Sydney, Australia
                [4 ]Prince of Wales Clinical School, University of New South Wales, Sydney, Australia
                [5 ]Centre for Medical and Health Sciences Education, University of Auckland, Auckland, New Zealand
                Hallym University, Korea
                Author notes
                [* ]Corresponding email: b.shulruf@unsw.edu.au
                Author information
                http://orcid.org/0000-0002-1791-1500
                http://orcid.org/0000-0003-1992-8485
                http://orcid.org/0000-0003-3600-7987
                http://orcid.org/0000-0002-7866-665X
                http://orcid.org/0000-0003-3644-727X
                Article
                jeehp-14-34
                DOI: 10.3352/jeehp.2017.14.34
                PMCID: PMC5801428
                PMID: 29278906
                © 2017, Korea Health Personnel Licensing Examination Institute

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 18 December 2017
                Accepted: 27 December 2017
                Categories
                Research Article

                Assessment, Evaluation & Research methods
                bias, leadership, medline, problem solving, student
