
      Assessment of clinical performance during simulated crises using both technical and behavioral ratings.

Journal: Anesthesiology
Keywords: Anesthesiology, education, Clinical Competence, standards, Computer Simulation, Education, Medical, methods, Humans, Risk Management


          Abstract

          Techniques are needed to assess anesthesiologists' performance when responding to critical events. Patient simulators allow presentation of similar crisis situations to different clinicians. This study evaluated ratings of performance, and the interrater variability of the ratings, made by multiple independent observers viewing videotapes of simulated crises. Raters scored the videotapes of 14 different teams that were managing two scenarios: malignant hyperthermia (MH) and cardiac arrest. Technical performance and crisis management behaviors were rated. Technical ratings could range from 0.0 to 1.0 based on scenario-specific checklists of appropriate actions. Ratings of 12 crisis management behaviors were made using a five-point ordinal scale. Several statistical assessments of interrater variability were applied. Technical ratings were high for most teams in both scenarios (0.78 +/- 0.08 for MH, 0.83 +/- 0.06 for cardiac arrest). Ratings of crisis management behavior varied, with some teams rated as minimally acceptable or poor (28% for MH, 14% for cardiac arrest). The agreement between raters was fair to excellent, depending on the item rated and the statistical test used. Both technical and behavioral performance can be assessed from videotapes of simulations. The behavioral rating system can be improved; one particular difficulty was aggregating a single rating for a behavior that fluctuated over time. These performance assessment tools might be useful for educational research or for tracking a resident's progress. The rating system needs more refinement before it can be used to assess clinical competence for residency graduation or board certification.
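The checklist-based technical score described above can be sketched as the fraction of scenario-appropriate actions a team actually performed, yielding a value between 0.0 and 1.0. The checklist items and team actions below are invented for illustration; the study's actual scenario-specific checklists are not reproduced here.

```python
# Illustrative sketch of a checklist-based technical score (0.0 to 1.0).
# The checklist items are hypothetical examples, not the study's instrument.

MH_CHECKLIST = [
    "call for help",
    "discontinue volatile anesthetic",
    "hyperventilate with 100% oxygen",
    "administer dantrolene",
    "begin active cooling",
]

def technical_score(actions_performed, checklist):
    """Fraction of checklist items the team completed (0.0 to 1.0)."""
    performed = set(actions_performed) & set(checklist)
    return len(performed) / len(checklist)

team_actions = ["call for help", "administer dantrolene",
                "discontinue volatile anesthetic", "begin active cooling"]
print(technical_score(team_actions, MH_CHECKLIST))  # 0.8
```

A real scoring instrument would typically also weight items and handle time windows, but the core quantity is this proportion of appropriate actions completed.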


Most cited references (17)


          Anesthesia crisis resource management: real-life simulation training in operating room crises.

Little formal training is provided in anesthesiology residency programs to help acquire, develop, and practice skills in resource management and decision making during crises in practice. Using anesthesia crisis resource management (ACRM) principles developed at another institution, 68 anesthesiologists and 4 nurse-anesthetists participated in an ACRM training course held over a two-and-a-half-month period. The anesthesia environment was recreated in a real operating room, with standard equipment and simulations requiring actual performance of clinical interventions. Scenarios included overdose of inhalation anesthetic, oxygen source failure, cardiac arrest, malignant hyperthermia, tension pneumothorax, and complete power failure. A detailed questionnaire was administered following the debriefing and completed by all participants, documenting their immediate impressions. Participants rated themselves as having performed well in the simulator. Senior attendings and residents rated themselves more highly than did their junior counterparts. The potential benefit of this course in allowing anesthesiologists to practice anesthesia more safely in a controlled exercise environment was rated highly by both groups. Over one half of respondents in all categories felt that the course should be taken once every 12 months; another third of each group felt that the course should be taken once every 24 months. While no senior attendings believed that the course should be taken once every 6 months, approximately 10% of respondents in other categories believed that it should. Of respondents in the senior and junior attending categories, 5% felt the course should never be taken. Although attendings were less favorable than residents in their rating of the value of the course, both groups were still enthusiastic.

            A Comprehensive Anesthesia Simulation Environment


              Measuring interrater reliability among multiple raters: an example of methods for nominal data.

              This paper reviews and critiques various approaches to the measurement of reliability among multiple raters in the case of nominal data. We consider measurement of the overall reliability of a group of raters (using kappa-like statistics) as well as the reliability of individual raters with respect to a group. We introduce modifications of previously published estimators appropriate for measurement of reliability in the case of stratified sampling frames and we interpret these measures in view of standard errors computed using the jackknife. Analyses of a set of 48 anaesthesia case histories in which 42 anaesthesiologists independently rated the appropriateness of care on a nominal scale serve as an example.
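The kappa-like statistics this reference reviews measure how much the observed agreement among raters exceeds agreement expected by chance. A minimal, self-contained sketch of one such statistic, Fleiss' kappa for nominal ratings by multiple raters, is shown below; the rating data are invented for illustration.

```python
# Fleiss' kappa: chance-corrected agreement among a fixed number of raters
# assigning subjects to nominal categories. Example data are hypothetical.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters placing subject i in category j.
    Every subject must be rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement: mean proportion of agreeing rater pairs per subject.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_e = sum((t / (n_subjects * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Three raters classify four cases as "appropriate" or "inappropriate".
ratings = [
    [3, 0],  # unanimous: appropriate
    [3, 0],
    [0, 3],  # unanimous: inappropriate
    [2, 1],  # partial disagreement
]
print(fleiss_kappa(ratings))  # 0.625
```

Values near 1 indicate near-perfect agreement beyond chance, values near 0 indicate only chance-level agreement; the paper's modifications for stratified sampling and jackknife standard errors build on this basic form.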

                Author and article information

Journal: Anesthesiology
PMID: 9667288
DOI: 10.1097/00000542-199807000-00005

