
      In-training assessment using direct observation of single-patient encounters: a literature review

Review article


          Abstract

We reviewed the literature on instruments for work-based assessment of single clinical encounters, such as the mini-clinical evaluation exercise (mini-CEX), and examined how these instruments differ in their characteristics, feasibility, reliability, validity and educational effect. A PubMed search of the literature published before 8 January 2009 yielded 39 articles dealing with 18 different assessment instruments. One researcher extracted data on the characteristics of the instruments, and two researchers extracted data on feasibility, reliability, validity and educational effect. The instruments are predominantly formative. Feasibility is generally deemed good; assessor training is rarely provided, although it is considered crucial for successful implementation. Acceptable reliability can be achieved with 10 encounters. The validity of many instruments has not been investigated, but the validity of the mini-CEX and the ‘clinical evaluation exercise’ is supported by strong and significant correlations with other validated assessment instruments. The evidence from the few studies on educational effects is not very convincing. Reports on clinical assessment instruments for single work-based encounters are generally positive, but supporting evidence is sparse. The feasibility of the instruments seems good and reliable measurement requires a minimum of 10 encounters, but no clear conclusions emerge on other aspects. Studies on assessor and learner training, and studies examining effects beyond ‘happiness data’, are badly needed.
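The 10-encounter threshold follows standard psychometric sampling logic: the reliability of a mean score rises with the number of encounters sampled. As a minimal sketch, not taken from the review itself, the Spearman-Brown formula below projects composite reliability from an assumed single-encounter reliability; the starting value of 0.26 is purely illustrative.

```python
# Spearman-Brown projection: reliability of the mean score over n encounters.
# The single-encounter reliability r1 used here is a hypothetical,
# illustrative value, not a figure reported in the review.

def spearman_brown(r1: float, n: int) -> float:
    """Projected reliability of the mean score over n encounters,
    given single-encounter reliability r1."""
    return n * r1 / (1 + (n - 1) * r1)

if __name__ == "__main__":
    r1 = 0.26  # assumed single-encounter reliability (illustration only)
    for n in (1, 5, 10, 20):
        print(f"{n:2d} encounters -> projected reliability {spearman_brown(r1, n):.2f}")
```

Under this assumption, ten encounters project to a reliability of about 0.78, close to the 0.80 conventionally regarded as acceptable for in-training assessment.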


Most cited references (41)


          Assessing professional competence: from methods to programmes.

          We use a utility model to illustrate that, firstly, selecting an assessment method involves context-dependent compromises, and secondly, that assessment is not a measurement problem but an instructional design problem, comprising educational, implementation and resource aspects. In the model, assessment characteristics are differently weighted depending on the purpose and context of the assessment. Of the characteristics in the model, we focus on reliability, validity and educational impact and argue that they are not inherent qualities of any instrument. Reliability depends not on structuring or standardisation but on sampling. Key issues concerning validity are authenticity and integration of competencies. Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgement. Adequate sampling across judges, instruments and contexts can ensure both validity and reliability. Despite recognition that assessment drives learning, this relationship has been little researched, possibly because of its strong context dependence. When assessment should stimulate learning and requires adequate sampling, in authentic contexts, of the performance of complex competencies that cannot be broken down into simple parts, we need to make a shift from individual methods to an integral programme, intertwined with the education programme. Therefore, we need an instructional design perspective. Programmatic instructional design hinges on a careful description and motivation of choices, whose effectiveness should be measured against the intended outcomes. We should not evaluate individual methods, but provide evidence of the utility of the assessment programme as a whole.

            Assessment of clinical competence.

Tests of clinical competence, which allow decisions to be made about medical qualification and fitness to practise, must be designed with respect to key issues including blueprinting, validity, reliability, and standard setting, as well as clarity about their formative or summative function. Multiple choice questions, essays, and oral examinations could be used to test factual recall and applied knowledge, but more sophisticated methods are needed to assess clinical performance, including directly observed long and short cases, objective structured clinical examinations, and the use of standardised patients. The goal of assessment in medical education remains the development of reliable measurements of student performance which, as well as having predictive value for subsequent clinical competence, also have a formative, educational role.

              The mini-CEX: a method for assessing clinical skills.

              To evaluate the mini-clinical evaluation exercise (mini-CEX), which assesses the clinical skills of residents. Observational study and psychometric assessment of the mini-CEX. 21 internal medicine training programs. Data from 1228 mini-CEX encounters involving 421 residents and 316 evaluators. The encounters were assessed for the type of visit, sex and complexity of the patient, when the encounter occurred, length of the encounter, ratings provided, and the satisfaction of the examiners. Using this information, we determined the overall average ratings for residents in all categories, the reliability of the mini-CEX scores, and the effects of the characteristics of the patients and encounters. Interviewing skills, physical examination, professionalism, clinical judgment, counseling, organization and efficiency, and overall competence were evaluated. Residents were assessed in various clinical settings with a diverse set of patient problems. Residents received the lowest ratings in the physical examination and the highest ratings in professionalism. Comparisons over the first year of training showed statistically significant improvement in all aspects of competence, and the method generated reliable ratings. The measurement characteristics of the mini-CEX are similar to those of other performance assessments, such as standardized patients. Unlike these assessments, the difficulty of the examination will vary with the patients that a resident encounters. This effect is mitigated to a degree by the examiners, who slightly overcompensate for patient difficulty, and by the fact that each resident interacts with several patients. Furthermore, the mini-CEX has higher fidelity than these formats, permits evaluation based on a much broader set of clinical settings and patient problems, and is administered on site.

                Author and article information

                Contributors
e.pelgrim@elg.umcn.nl (+31-243610291, +31-243619553)
Journal
Advances in Health Sciences Education (Adv Health Sci Educ Theory Pract)
Springer Netherlands, Dordrecht
ISSN 1382-4996 (print); ISSN 1573-1677 (electronic)
Published online: 18 June 2010
Issue: March 2011, Volume 16, Issue 1, pp. 131-142
                Affiliations
[1] Department of Primary Care and Community Care, Radboud University Nijmegen Medical Centre, Postbus 9101, 6500 HB Nijmegen, The Netherlands
[2] Maastricht University, Maastricht, The Netherlands
Article
DOI: 10.1007/s10459-010-9235-6
PMC: 3074070
PMID: 20559868
                © The Author(s) 2010
Categories: Review
© Springer Science+Business Media B.V. 2011
