
      2018 Consensus framework for good assessment


          Abstract

          Introduction: In 2010, the Ottawa Conference produced a set of consensus criteria for good assessment. These were well received, and since then the working group has monitored their use. As part of the 2010 report, it was recommended that consideration be given in the future to preparing similar criteria for systems of assessment. Recent developments in the field suggest that it would be timely to undertake that task, so the working group was reconvened, with changes in membership to reflect broad global representation.

          Methods: Consideration was given to whether the initially proposed criteria continued to be appropriate for single assessments, and the group believed that they were. Consequently, we reiterate the criteria that apply to individual assessments and duplicate relevant portions of the 2010 report.

          Results and discussion: This paper also presents a new set of criteria that apply to systems of assessment and, recognizing the challenges of implementation, offers several issues for further consideration. Among these issues are the increasing diversity of candidates and programs, the importance of legal defensibility in high-stakes assessments, globalization and the interest in portable recognition of medical training, and the interest among employers and patients in how medical education is delivered and how progression decisions are made.


          Most cited references (7)


          Assessment for selection for the health care professions and specialty training: consensus statement and recommendations from the Ottawa 2010 Conference.

          Assessment for selection in medicine and the health professions should follow the same quality assurance processes as in-course assessment. The literature on selection is limited and is not strongly theoretical or conceptual. For written testing, there is evidence of the predictive validity of Medical College Admission Test (MCAT) for medical school and licensing examination performance. There is also evidence for the predictive validity of grade point average, particularly in combination with MCAT for graduate entry but little evidence about the predictive validity of school leaver scores. Interviews have not been shown to be robust selection measures. Studies of multiple mini-interviews have indicated good predictive validity and reliability. Of other measures used in selection, only the growing interest in personality testing appears to warrant future work. Widening access to medical and health professional programmes is an increasing priority and relates to the social accountability mandate of medical and health professional schools. While traditional selection measures do discriminate against various population groups, there is little evidence on the effect of non-traditional measures in widening access. Preparation and outreach programmes show most promise. In summary, the areas of consensus for assessment for selection are small in number. Recommendations for future action focus on the adoption of principles of good assessment and curriculum alignment, use of multi-method programmatic approaches, development of interdisciplinary frameworks and utilisation of sophisticated measurement models. The social accountability mandate of medical and health professional schools demands that social inclusion, workforce issues and widening of access are embedded in the principles of good assessment for selection.

            Analytic global OSCE ratings are sensitive to level of training.

            There are several reasons for using global ratings in addition to checklists for scoring objective structured clinical examination (OSCE) stations. However, there has been little evidence collected regarding the validity of these scales. This study assessed the construct validity of an analytic global rating with 4 component subscales: empathy, coherence, verbal and non-verbal expression. A total of 19 Year 3 and 38 Year 4 clinical clerks were scored on content checklists and these global ratings during a 10-station OSCE. T-tests were used to assess differences between groups for overall checklist and global scores, and for each of the 4 subscales. The mean global rating was significantly higher for senior clerks (75.5% versus 71.3%, t55 = 2.12, P < 0.05) and there were significant differences by level of training for the coherence (t55 = 3.33, P < 0.01) and verbal communication (t55 = 2.33, P < 0.05) subscales. Interstation reliability was 0.70 for the global rating and ranged from 0.58 to 0.65 for the subscales. Checklist reliability was 0.54. In this study, a summated analytic global rating demonstrated construct validity, as did 2 of the 4 scales measuring specific traits. In addition, the analytic global rating showed substantially higher internal consistency than did the checklists, a finding consistent with that seen in previous studies cited in the literature. Global ratings are an important element of OSCE measurement and can have good psychometric properties. However, OSCE researchers should clearly describe the type of global ratings they use. Further research is needed to define the most effective global rating scales.

              Developing the role of big data and analytics in health professional education.

              As we capture more and more data about learners, their learning, and the organization of their learning, our ability to identify emerging patterns and to extract meaning grows exponentially. The insights gained from the analyses of these large amounts of data are only helpful to the extent that they can be the basis for positive action such as knowledge discovery, improved capacity for prediction, and anomaly detection. Big Data involves the aggregation and melding of large and heterogeneous datasets, while education analytics involves looking for patterns in educational practice or performance in single or aggregate datasets. Although it seems likely that the use of education analytics and Big Data techniques will have a transformative impact on health professional education, there is much yet to be done before they can become part of mainstream health professional education practice. If health professional education is to be accountable for how its programs are run and developed, then health professional educators will need to be ready to deal with the complex and compelling dynamics of analytics and Big Data. This article provides an overview of these emerging techniques in the context of health professional education.

                Author and article information

                Journal
                Medical Teacher
                Informa UK Limited
                ISSN: 0142-159X (print); 1466-187X (electronic)
                May 25 2018; October 09 2018; November 02 2018
                Volume 40, Issue 11: 1102-1109
                Affiliations
                [1] FAIMER, Philadelphia, PA, USA
                [2] NBME, Philadelphia, PA, USA
                [3] School of Medicine of Ribeirão Preto, Universidade Cidade de São Paulo, Ribeirão Preto, Brazil
                [4] Groote Schuur Hospital, University of Cape Town and Groote Schuur, Cape Town, South Africa
                [5] School of Medicine, University of Minho, Braga, Portugal
                [6] Parnassia Psychiatric Institute, Maastricht University, The Hague, The Netherlands
                [7] Rural Clinical School, University of Tasmania, Burnie, Australia
                [8] Cumming School of Medicine, University of Calgary, Alberta, Canada
                [9] Medical Education Unit, University of Leeds, Leeds, UK
                [10] ABMS, Chicago, IL, USA
                Article
                DOI: 10.1080/0142159X.2018.1500016
                PMID: 30299187
                © 2018
