

      Medical Student Evaluation With a Serious Game Compared to Multiple Choice Questions Assessment

      research-article


          Abstract

          Background

          The gold standard for evaluating medical students’ knowledge is the multiple choice question (MCQ) test: an objective and efficient means of assessing recall of book-based knowledge. However, concerns have been raised regarding their effectiveness in evaluating global medical skills. Furthermore, MCQs of unequal difficulty can generate frustration and may also lead to a sizable proportion of close results with low score variability. Serious games (SG) have recently been introduced to better evaluate students’ medical skills.

          Objectives

          The study aimed to compare MCQs with SG for medical student evaluation.

          Methods

          We designed a cross-over randomized study including volunteer medical students from two medical schools in Paris, France, from January to September 2016. The students were randomized into two groups and evaluated on a cardiology clinical case either by the SG first and then the MCQs, or vice versa. The primary endpoint was score variability, evaluated by variance comparison. Secondary endpoints were the difference in and correlation between the MCQ and SG results, and student satisfaction.

          Results

          A total of 68 medical students were included. The score variability was significantly higher in the SG group (σ²=265.4) than in the MCQ group (σ²=140.2; P=.009). The mean score was significantly lower for the SG than for the MCQs, at 66.1 (SD 16.3) versus 75.7 (SD 11.8) points out of 100 (P<.001). No correlation was found between the two test results (R²=0.04; P=.58). The self-reported satisfaction was significantly higher for the SG (P<.001).
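          The abstract does not name the statistical tests behind these figures. A minimal sketch in Python, assuming a two-sided F-test for the variance comparison, a paired t-test for the mean difference (the cross-over design gives each student both scores), and a Pearson correlation for the R² value; the scores below are simulated from the reported means and SDs, not the study's data:

          # Illustrative sketch only: the choice of tests, variable names, and
          # simulated scores are assumptions, not the authors' actual analysis.
          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(seed=0)
          sg = rng.normal(66.1, 16.3, size=68)   # hypothetical SG scores out of 100
          mcq = rng.normal(75.7, 11.8, size=68)  # hypothetical MCQ scores out of 100

          # Primary endpoint: compare score variances with a two-sided F-test.
          f = np.var(sg, ddof=1) / np.var(mcq, ddof=1)
          df = len(sg) - 1
          p_var = 2 * min(stats.f.sf(f, df, df), stats.f.cdf(f, df, df))

          # Secondary endpoints: paired mean difference and correlation between tests.
          t, p_mean = stats.ttest_rel(sg, mcq)
          r, p_corr = stats.pearsonr(sg, mcq)

          print(f"variance ratio F={f:.2f}, P={p_var:.3f}")
          print(f"mean difference t={t:.2f}, P={p_mean:.3g}")
          print(f"R²={r**2:.2f}, P={p_corr:.2f}")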

          Conclusions

          Our study suggests that SGs discriminate between students more effectively than MCQs, yielding greater score variability. In addition, they are associated with a higher student satisfaction rate. SGs could represent a new evaluation modality for medical students.

          Related collections

          Most cited references (12)


          Simulation Technology for Skills Training and Competency Assessment in Medical Education

          Medical education during the past decade has witnessed a significant increase in the use of simulation technology for teaching and assessment. Contributing factors include: changes in health care delivery and academic environments that limit patient availability as educational opportunities; worldwide attention focused on the problem of medical errors and the need to improve patient safety; and the paradigm shift to outcomes-based education with its requirements for assessment and demonstration of competence. The use of simulators addresses many of these issues: they can be readily available at any time and can reproduce a wide variety of clinical conditions on demand. In lieu of the customary (and arguably unethical) system, whereby novices carry out the practice required to master various techniques—including invasive procedures—on real patients, simulation-based education allows trainees to hone their skills in a risk-free environment. Evaluators can also use simulators for reliable assessments of competence in multiple domains. For those readers less familiar with medical simulators, this article aims to provide a brief overview of these educational innovations and their uses; for decision makers in medical education, we hope to broaden awareness of the significant potential of these new technologies for improving physician training and assessment, with a resultant positive impact on patient safety and health care outcomes.

            Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper

            Background: Reliable and valid written tests of higher cognitive function are difficult to produce, particularly for the assessment of clinical problem solving. Modified essay questions (MEQs) are often used to assess these higher order abilities in preference to other forms of assessment, including multiple-choice questions (MCQs). MEQs often form a vital component of end-of-course assessments in higher education. It is not clear how effectively these questions assess higher order cognitive skills. This study was designed to assess the effectiveness of the MEQ to measure higher-order cognitive skills in an undergraduate institution.

            Methods: An analysis of MCQs and MEQs used for summative assessment in a clinical undergraduate curriculum was undertaken. A total of 50 MCQs and 139 stages of MEQs were examined, drawn from three exams run over two years. The effectiveness of the questions was determined by two assessors and was defined by a question's ability to measure higher cognitive skills, as determined by a modification of Bloom's taxonomy, and its quality as determined by the presence of item-writing flaws.

            Results: Over 50% of all of the MEQs tested factual recall. This was similar to the percentage of MCQs testing factual recall. The modified essay question failed in its role of consistently assessing higher cognitive skills, whereas the MCQ frequently tested more than mere recall of knowledge.

            Conclusion: Constructing MEQs that assess higher order cognitive skills cannot be assumed to be a simple task. Well-constructed MCQs should be considered a satisfactory replacement for MEQs if the MEQs cannot be designed to adequately test higher order skills. Such MCQs are capable of withstanding the intellectual and statistical scrutiny imposed by a high-stakes exit examination.

              Simulation-based assessments in health professional education: a systematic review

              Introduction: The use of simulation in health professional education has increased rapidly over the past 2 decades. While simulation has predominantly been used to train health professionals and students for a variety of clinically related situations, there is an increasing trend to use simulation as an assessment tool, especially for the development of technical-based skills required during clinical practice. However, there is a lack of evidence about the effectiveness of using simulation for the assessment of competency. Therefore, the aim of this systematic review was to examine simulation as an assessment tool of technical skills across health professional education.

              Methods: A systematic review of Cumulative Index to Nursing and Allied Health Literature (CINAHL), Education Resources Information Center (ERIC), Medical Literature Analysis and Retrieval System Online (Medline), and Web of Science databases was used to identify research studies published in English between 2000 and 2015 reporting on measures of validity, reliability, or feasibility of simulation as an assessment tool. The McMasters Critical Review for quantitative studies was used to determine methodological value on all full-text reviewed articles. Simulation techniques using human patient simulators, standardized patients, task trainers, and virtual reality were included.

              Results: A total of 1,064 articles were identified using search criteria, and 67 full-text articles were screened for eligibility. Twenty-one articles were included in the final review. The findings indicated that simulation was more robust when used as an assessment in combination with other assessment tools and when more than one simulation scenario was used. Limitations of the research papers included small participant numbers, poor methodological quality, and predominance of studies from medicine, which preclude any definite conclusions.

              Conclusion: Simulation has now been embedded across a range of health professional education and it appears that simulation-based assessments can be used effectively. However, the effectiveness as a stand-alone assessment tool requires further research.

                Author and article information

                Journal
                JMIR Serious Games (JSG)
                JMIR Publications (Toronto, Canada)
                ISSN: 2291-9279
                Apr-Jun 2017; published 16 May 2017
                Volume 5, Issue 2: e11
                Affiliations
                [1] AP-HP, Hôpital Cochin, Cardiology, Paris, France
                [2] Université Paris Descartes, Paris, France
                [3] iLUMENS Department of Simulation, University of Sorbonne Paris Cité, Paris, France
                [4] Université Paris Diderot, Paris, France
                [5] AP-HP, Hôpital Bichat, Cardiology, Université Paris Diderot, Paris, France
                [6] FACT (French Alliance for Cardiovascular Trials), DHU FIRE, Cardiology Department of Bichat Hospital, Université Paris Diderot, Paris, France
                [7] URC-Est, AP-HP, Paris, France
                [8] AP-HP, Hôpital Cochin, Anesthesiology, Paris, France
                Author notes
                Corresponding Author: Olivier Varenne olivier.varenne@aphp.fr
                Author information
                http://orcid.org/0000-0002-0902-8347
                http://orcid.org/0000-0003-4069-1128
                http://orcid.org/0000-0001-6922-0093
                http://orcid.org/0000-0002-3429-9675
                http://orcid.org/0000-0003-0950-8604
                http://orcid.org/0000-0002-1883-8632
                http://orcid.org/0000-0001-6651-6868
                http://orcid.org/0000-0002-6877-4008
                http://orcid.org/0000-0003-2626-703X
                http://orcid.org/0000-0002-1308-8860
                Article
                v5i2e11
                DOI: 10.2196/games.7033
                PMCID: PMC5449650
                PMID: 28512082
                ©Julien Adjedj, Gregory Ducrocq, Claire Bouleti, Louise Reinhart, Eleonora Fabbro, Yedid Elbez, Quentin Fischer, Antoine Tesniere, Laurent Feldman, Olivier Varenne. Originally published in JMIR Serious Games (http://games.jmir.org), 16.05.2017.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Serious Games, is properly cited. The complete bibliographic information, a link to the original publication on http://games.jmir.org, as well as this copyright and license information must be included.

                History
                Received: 22 November 2016
                Revision requested: 16 December 2016
                Revision received: 8 February 2017
                Accepted: 27 February 2017
                Categories
                Original Paper

                Keywords: serious game, multiple choice questions, medical student, student evaluation
