Assessment of Diagnostic Competences With Standardized Patients Versus Virtual Patients: Experimental Study in the Context of History Taking

Research article


Abstract

Background

Standardized patients (SPs) have been a popular assessment method in clinical teaching for decades, although they are resource intensive. Simulated virtual patients (VPs) are now increasingly used because they are permanently available and fully scalable to a large audience. However, empirical studies comparing the differential effects of these assessment methods are lacking. Similarly, the relationships between key variables associated with diagnostic competences (ie, diagnostic accuracy and evidence generation) in these assessment methods still require further research.

Objective

The aim of this study is to compare perceived authenticity, cognitive load, and diagnostic competences in performance-based assessment using SPs and VPs. This study also aims to examine the relationships of perceived authenticity, cognitive load, and quality of evidence generation with diagnostic accuracy.

Methods

We conducted an experimental study with 86 medical students (mean age 26.03 years, SD 4.71 years) focusing on history taking in dyspnea cases. In this repeated measures design, participants solved three cases with SPs and three cases with VPs. After each case, students provided a diagnosis and rated perceived authenticity and cognitive load. The provided diagnosis was scored for diagnostic accuracy; the questions asked by the students were rated for their quality of evidence generation. In addition to regular null hypothesis testing, this study used equivalence testing to investigate the absence of meaningful effects.
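
The equivalence testing mentioned here is commonly implemented as the two one-sided tests (TOST) procedure. As a rough illustration of how such a test works for a paired, repeated measures comparison like the SP-versus-VP cognitive load contrast, here is a minimal sketch in Python; the data, the equivalence bound delta, and all variable names are assumptions for illustration, not the authors' materials or code.

```python
# A minimal TOST (two one-sided tests) sketch for a paired design,
# using hypothetical cognitive-load ratings and an assumed
# equivalence bound delta -- not the study's actual data or code.
import numpy as np
from scipy import stats

def tost_paired(x, y, delta, alpha=0.05):
    """Test whether the mean paired difference x - y lies within
    (-delta, +delta). Equivalence is supported if both one-sided
    tests reject, ie, if max(p_lower, p_upper) < alpha."""
    d = np.asarray(x) - np.asarray(y)
    n, df = d.size, d.size - 1
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + delta) / se    # H0: mean difference <= -delta
    t_upper = (d.mean() - delta) / se    # H0: mean difference >= +delta
    p_lower = stats.t.sf(t_lower, df)    # P(T > t_lower)
    p_upper = stats.t.cdf(t_upper, df)   # P(T < t_upper)
    p = max(p_lower, p_upper)
    return p, p < alpha

# Hypothetical ratings for 83 students under both methods (df=82,
# matching the degrees of freedom reported in the Results).
rng = np.random.default_rng(0)
load_sp = rng.normal(4.0, 1.0, 83)
load_vp = load_sp + rng.normal(0.0, 0.4, 83)
p, equivalent = tost_paired(load_sp, load_vp, delta=0.5)
print(f"TOST P={p:.3f}, equivalence supported: {equivalent}")
```

The key design choice is that the bound delta must be prespecified as the smallest effect size of interest; rejecting both one-sided tests then supports the claim that any difference is too small to matter.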

Results

Perceived authenticity was higher for SPs than for VPs (1-tailed t(81)=11.12; P<.001). The correlation between diagnostic accuracy and perceived authenticity was very small (r=0.05) and neither equivalent (P=.09) nor statistically significant (P=.32). Cognitive load was equivalent in both assessment methods (t(82)=2.81; P=.003). Intrinsic cognitive load (1-tailed r=−0.30; P=.003) and extraneous load (1-tailed r=−0.29; P=.003) correlated negatively with the combined score for diagnostic accuracy. The quality of evidence generation was positively related to diagnostic accuracy for VPs (1-tailed r=0.38; P<.001); this finding did not hold for SPs (1-tailed r=0.05; P=.32). Comparing the two assessment methods, diagnostic accuracy was higher for SPs than for VPs (2-tailed t(85)=2.49; P=.01).
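
For readers who want to run the same style of analysis on their own data, the reported statistics map onto standard tests: a paired t test for the within-subject SP-versus-VP comparison and Pearson correlations, with the P value halved for directional (1-tailed) hypotheses. The sketch below shows that mapping on simulated placeholder data; none of the numbers are the study's.

```python
# Mapping the reported statistics onto standard tests, using
# simulated placeholder data (none of these values are the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 86                                       # sample size (df=85)
acc_sp = rng.normal(0.60, 0.15, n)           # diagnostic accuracy, SPs
acc_vp = acc_sp - rng.normal(0.05, 0.10, n)  # diagnostic accuracy, VPs

# 2-tailed paired t test for the SP-vs-VP accuracy difference.
t, p_two = stats.ttest_rel(acc_sp, acc_vp)
print(f"t({n - 1})={t:.2f}, 2-tailed P={p_two:.3f}")

# 1-tailed Pearson correlation for a directional hypothesis,
# eg, intrinsic cognitive load relating negatively to accuracy.
intrinsic_load = rng.normal(3.0, 1.0, n)
r, p_two_r = stats.pearsonr(intrinsic_load, acc_sp)
p_one = p_two_r / 2 if r < 0 else 1 - p_two_r / 2
print(f"r={r:.2f}, 1-tailed P={p_one:.3f}")
```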

Conclusions

The results on perceived authenticity demonstrate that learners experience SPs as more authentic than VPs. As high intrinsic and extraneous cognitive load is detrimental to performance, both types of cognitive load must be monitored and manipulated systematically in the assessment. Diagnostic accuracy was higher for SPs than for VPs, so assessment with VPs could negatively affect students' grades. We identify and discuss possible reasons for this performance difference between the two assessment methods.


Author and article information

Journal
Journal of Medical Internet Research (J Med Internet Res, JMIR)
Publisher: JMIR Publications (Toronto, Canada)
ISSN: 1439-4456 (print); 1438-8871 (electronic)
Published: 4 March 2021
Volume 23, Issue 3 (March 2021): e21196

Affiliations
[1] Institute for Medical Education, University Hospital, LMU Munich, Munich, Germany
[2] Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
[3] Munich Center of the Learning Sciences, Ludwig-Maximilians-Universität München, Munich, Germany

Author notes
Corresponding author: Maximilian C Fink, maximilian.fink@yahoo.com
Author information
ORCID iDs:
https://orcid.org/0000-0002-4269-4157
https://orcid.org/0000-0003-4504-9062
https://orcid.org/0000-0001-8241-8723
https://orcid.org/0000-0001-5290-5344
https://orcid.org/0000-0003-0253-659X
https://orcid.org/0000-0002-5299-5025
Article
DOI: 10.2196/21196
PMCID: PMC7974754
PMID: 33661122
©Maximilian C Fink, Victoria Reitmeier, Matthias Stadler, Matthias Siebeck, Frank Fischer, Martin R Fischer. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.03.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.

History: 25 June 2020; 8 August 2020; 1 October 2020; 27 December 2020
Categories
Original Paper

Medicine
Keywords: clinical reasoning, medical education, performance-based assessment, simulation, standardized patient, virtual patient
