Simulation has offered a practical means to train and rehearse clinical skills for
many years. Simulated environments, patients, and related technologies have been used
to develop, validate, and maintain a wide range of clinical skills across numerous
clinical specialties. In the past 30 years, the field has truly thrived, as evidenced
by rapidly evolving simulation technologies; the ever-increasing volume and quality
of simulation-based scientific studies; the institution of numerous peer-reviewed
outlets for the dissemination of these studies; the number of learned societies dedicated
to promoting simulation and their expansive memberships; and the widespread development
and availability of clinical educational resources, curricula, and policies centered
on application of simulation. Such simulation-based training applications and interventions
within the health professions have been termed an “ethical imperative” [1], and requiring
learners to demonstrate proficiency on simulation-based tasks and procedures before performing
them on patients in the clinical environment is a trend gaining significant momentum [2–4].
Clinical simulation science is thus past its early developmental stages. Evidence
reviews and syntheses are taking stock of where the field is, and where it should
be heading. From this perspective of a self-reflective science, the paper by Cheng
et al. [5] on the extension of existing guidelines to encompass the reporting of simulation
research is as valuable as it is timely. Cheng et al. applied an elaborate consensus-building
methodology using panels of international experts in the field. In successive stages,
they reviewed the existing guidelines for applicability to simulation research and
edited them accordingly. In addition to the expert-derived modifications to the guidelines,
simulation-specific items were also developed to account for the unique needs of simulation
science, as new information and methods are generated [6]. These items largely focus
on contextual elements of the study design. They include the type of simulator and
simulation environment used, the ways that study participants were oriented to them
(based on their prior extent of exposure to simulation), the description of the event/scenario
used, the challenges presented to participants, and finally, the feedback/debriefing
(if any) that was conducted. This comprehensive process and its results have numerous
strengths, including the detailed method (which applied an iterative and reflective
approach to the guideline development), broad expert coverage, and good response rates
from the consulted experts (95 % of whom provided partial input, and 75 % of whom contributed
fully to the process). These strengths outweigh the study's limitations, most of which
are inherent to any consensus-building methodology, although they also include the
scope of the guideline extension, which excluded qualitative and mixed-methods research,
computational simulation studies, and validation studies.
These guidelines were urgently needed. Simulation research is certainly progressing
well; however, for the field to achieve maturity, the quality of the reported science
must be a paramount focus. As Cheng et al. briefly review, simulation research, like
other health and medical science work, is often poorly reported and, at the very least,
lacks consistent reporting across similar studies. This poses multiple
problems. First, poorly reported research presents a dilemma to readers, making it
hard to tell whether a study was well conducted but suboptimally described, or
actually poorly conducted (but accurately described). Second, academic reviews
and syntheses of inconsistently or poorly reported studies suffer owing to lack of
homogeneity and lack of comparability. As a result, capturing the state of the field
becomes problematic, slowing forward progress. Third, the reputation of the field
is at risk, as inconsistency in reporting style and outcomes creates confusion among
wider audiences such as clinical leaders or policy makers.
This issue is not peculiar to simulation. The wider problem of how to increase the
value of biomedical, clinical, and health research was the subject of a high-profile
2014 series of articles in the Lancet [7–11]. Poor reporting of research findings
is part of this wider problem. It leads to poorly informed planning, selection, and
funding decisions about research. It undermines efforts to make sound
clinical and educational policy decisions based on extant research. Accurate, systematic,
and unbiased reporting should be part of the wider effort to deliver value through
novel research. The guidelines offered by Cheng et al. [5] have the potential, if
implemented widely, to help address these problems in the simulation community by
uniformly improving the quality and consistency of simulation study reports.
As with any guidelines, however, their envisioned positive impact will only materialize
if they are suitably implemented. From our collective perspective as editors of peer-reviewed
journals within the field, we endorse them for use within the journals we represent.
We pledge to encourage our author colleagues to use the guidelines, whenever appropriate,
in crafting the studies and writing the manuscripts that they submit to us. We are
optimistic that this will happen. Over many years, our field has shown exceptional
innovation and commitment to high-quality science. We have witnessed dedication to
simulation research and its application by many in our still young field, and we do
expect that a move toward standardization of research conduct and reporting will be
welcomed and used in practice.
Looking to the future, we suggest that, as the field matures further, the design and
reporting of a variety of simulation study types could integrate good practices from
nonsimulation research paradigms. This may be especially true when
evaluating complex health care interventions that use simulation approaches in whole
or in part. Within the applied health research field, guidelines detail how to study
and subsequently report the process via which an intervention was delivered to its
intended recipients (e.g. clinical services and their users) [12]. Such detailed “process
evaluations”, typical examples of what has been termed “implementation science” [13],
allow one to appraise the effectiveness of the intervention’s implementation, the
nature of the implementation context, and finally, the mechanisms by which the intervention
affected patient outcomes. The novel elements that Cheng et al. [5] have developed
specifically for the reporting of simulation research fall within these concepts of
intervention implementation and context, and they enhance accurate and meaningful
interpretation of the results of simulation studies. This means that some simulation
studies might be good candidates for reporting detailed process evaluations, so that
readers can fully appreciate the educational and/or clinical context, as well as the
delivery of the simulation-based intervention. Members of
our community should reflect further on these and other elements, with a view to improving
not only how simulation studies are reported, but also how they are carried out.