
      Improvement Science Takes Advantage of Methods beyond the Randomized Controlled Trial


          Abstract

Improvement Science is a well-established field, with tested and proven methods in agriculture and other industries for over a century. Improvement Science emphasizes rapidly testing new approaches in situ to create evidence about which changes lead to improvement in which contexts. Knowledge gained is then used to create wider or more substantial improvement. Execution of improvement science requires both individuals familiar with the discipline in which the work is performed and experts in the methods of improvement science.

Improvement science is relatively new to health care systems, which have previously considered only randomized controlled trials (RCTs) to be the gold-standard method for creating new evidence. Although RCTs create evidence about the efficacy of therapies, Improvement Science is focused on creating evidence about how to improve the systems that deliver those therapies. As Improvement Science studies spread, we feel compelled to correct inaccurate messages about their proper methodology and reporting. Mischaracterization of these proven methodologies can sabotage the potential benefits of well-done applications of Improvement Science.

A recent editorial in a high-impact journal suggested that Improvement Science studies (aka Quality Improvement, or QI) and their reports needed improvement. 1 Its authors indicated that studies “should have results that are generalizable,” should report health rather than process outcomes, and should have contemporaneous control groups. They also recommended randomization and blinding. Although we agree with some of these concepts (even occasional randomization and blinding), we are concerned that the editorial presented an excessively narrow view of the appropriate methods for establishing generalizable evidence about interventions to improve health care quality, safety, and value. In a commentary published in JAMA a decade ago, Don Berwick masterfully deconstructed the “unhappy tension” between research meant to improve clinical evidence and research meant to improve care processes. 2 More recently, Burke and Shojania 3 note that a strength of Improvement Science is the ability to refine the intervention or the implementation strategy. Supported by that literature and our collective Improvement Science experience, we suggest here more appropriate directions for evaluating quality improvement research, directions that were not accepted for consideration of publication in the journal containing the original editorial.

One point raised by Grady et al. 1 with which we agree is that generalizable studies are more powerful than those that are not. This is true both for RCTs and for Improvement Science using other study designs, as both have deficiencies when applied to broader contexts. Moreover, we agree that single-group, pre–post designs have multiple threats to validity. A key method by which improvement science generates generalizable knowledge is testing and retesting of interventions, either at multiple scales in 1 context or in multiple contexts. This may occur through spread to other microsystems or institutions, including study designs such as multiple interrupted time series and stepped wedge methods (sequential but random introduction of an intervention to clusters of individuals). Properly scaled and implemented Plan-Do-Study-Act (PDSA) cycles can quickly and effectively provide learning about the system, leading to improvement in process and outcome measures.
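To make the stepped wedge design concrete, the following is a minimal Python sketch, our own illustration rather than anything from the cited literature; the cluster names and number of periods are hypothetical. It builds a schedule in which every cluster begins in the control condition and crosses over to the intervention at a randomly assigned step:

    import random

    # Stepped wedge sketch: all clusters start as controls and cross over to
    # the intervention at a randomly assigned step; by the final period every
    # cluster is exposed. Cluster names and counts are hypothetical.
    clusters = ["unit_a", "unit_b", "unit_c", "unit_d", "unit_e", "unit_f"]

    random.seed(42)  # fixed seed so the random crossover order is reproducible
    random.shuffle(clusters)

    # Two clusters cross over at each of steps 1, 2, and 3 (period 0 is the all-control baseline).
    step_of = {c: 1 + i // 2 for i, c in enumerate(clusters)}

    for period in range(4):
        exposed = sorted(c for c, step in step_of.items() if period >= step)
        print(f"period {period}: intervention in {exposed if exposed else 'none (baseline)'}")

Because every cluster eventually receives the intervention, the design avoids permanently withholding a change believed to be beneficial, while still providing concurrent comparisons at each step.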
RCTs are usually not well suited to answering how multiple interventions might work within complex systems; indeed, RCTs lack generalizability when applied to complex systems. 4–6 When reporting follows the SQUIRE guidelines, the discussion of intervention scale, interaction with the environment, specific contexts, and generalizability enables readers to learn how to apply published studies to their own microsystem or problem. 7

We also agree that studies demonstrating outcome or value improvement provide more useful information than studies reporting process measures alone. Furthermore, well-designed improvement science projects should include measurement of potential adverse outcomes as meaningful balancing measures.

However, we strongly disagree with requiring implementation research to adopt methods from randomized controlled designs, specifically concurrent control groups, randomization, and blinding of results, as a condition of publication in medical journals. The methods of improvement science are as rigorous as those of RCTs and are more appropriate to the types of questions asked in quality improvement work. Using PDSA cycles, improvement science studies allow iterative change to successively improve an intervention. Although this is not always compatible with traditional RCT designs, it is compatible with other rigorous evaluation methods such as Shewhart statistical process control charts. Rigorous randomized designs are appropriate for some questions in some settings, but we object to attempts to limit improvement activities to traditional RCTs.

For example, if an emergency room is inadequate at quickly providing antibiotics to patients in septic shock, PDSA cycles may best ensure that care becomes adequate rapidly. The knowledge gained during this process is likely useful to other emergency rooms, and conducting the work in several emergency rooms while documenting similarities and differences in approach makes the learning more generalizable. The beauty of PDSA cycles and real-time data analysis through control charts is that intervention effects can be seen much more rapidly than with a drawn-out RCT and post hoc data analysis. In fact, an RCT in this case could harm the half of patients not exposed to a beneficial intervention that we know works (providing antibiotics for sepsis). For this reason, although concurrent control groups are 1 approach to addressing important threats to validity, demonstrating through repeated measurements that there has been a significant change from the baseline pattern is another valid approach. Because improvement science activity aims to improve care using established evidence, and specifically because randomization is not needed to demonstrate improvement, many IRBs recognize that this activity is exempt from review, as outlined by the Department of Health and Human Services. 8
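To illustrate how repeated measurements on a Shewhart control chart can demonstrate a change from the baseline pattern, as discussed above, here is a minimal Python sketch of an individuals (XmR) chart. The data are fabricated and stand in for a measure such as monthly minutes-to-antibiotics; this is an illustration of the general technique, not the authors' analysis:

    # XmR (individuals) chart sketch: derive the center line and control limits
    # from a baseline period, then flag special-cause signals in later data.
    # All values below are fabricated for illustration.
    baseline = [62, 58, 65, 60, 59, 63, 61, 64, 57, 60]  # pre-change measurements
    post = [55, 52, 50, 48, 47, 49, 46, 45]              # post-change measurements

    center = sum(baseline) / len(baseline)
    mr_bar = sum(abs(a - b) for a, b in zip(baseline, baseline[1:])) / (len(baseline) - 1)

    # Standard XmR limits: center line +/- 2.66 times the average moving range.
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar
    print(f"center={center:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}")

    # Rule 1: any point beyond a control limit signals special cause.
    print("points beyond limits:", [x for x in post if x < lcl or x > ucl])

    # Rule 2: a run of 8 consecutive points on one side of the center line.
    print("run of 8 below center:", len(post) >= 8 and all(x < center for x in post))

Signals like these accumulate point by point as care is delivered, which is what lets improvement teams see intervention effects without waiting for a post hoc trial analysis.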
In summary, we feel that the literature now contains an erroneous message, one that does not take into account which study designs best answer the questions posed by an improvement science study. As Berwick 2 stated, “‘Where is the randomized trial?’ is, for many purposes, the right question, but for many others it is the wrong question, a myopic one.” We believe that misunderstanding of the goals, scientific basis, and methods of improvement science by some journal editors is precisely why some prestigious journals rarely publish excellent improvement science activity. This bias against publishing the knowledge generated by improvement scientists does a disservice to our health care providers and their patients.

DISCLOSURE

The authors have no financial interest to declare in relation to the content of this article.


Most cited references


          SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process

Since the publication of Standards for QUality Improvement Reporting Excellence (SQUIRE 1.0) guidelines in 2008, the science of the field has advanced considerably. In this manuscript, we describe the development of SQUIRE 2.0 and its key components. We undertook the revision between 2012 and 2015 using (1) semistructured interviews and focus groups to evaluate SQUIRE 1.0 plus feedback from an international steering group, (2) two face-to-face consensus meetings to develop interim drafts and (3) pilot testing with authors and a public comment period. SQUIRE 2.0 emphasises the reporting of three key components of systematic efforts to improve the quality, value and safety of healthcare: the use of formal and informal theory in planning, implementing and evaluating improvement work; the context in which the work is done; and the study of the intervention(s). SQUIRE 2.0 is intended for reporting the range of methods used to improve healthcare, recognising that they can be complex and multidimensional. It provides common ground to share these discoveries in the scholarly literature (http://www.squire-statement.org).

            Ethical issues in the design and conduct of cluster randomised controlled trials.


              Beyond the randomized clinical trial: the role of effectiveness studies in evaluating cardiovascular therapies.


                Author and article information

Journal
Pediatric Quality & Safety (Pediatr Qual Saf; PQS)
Wolters Kluwer Health
ISSN: 2472-0054
8 June 2018
May-Jun 2018
Volume 3, Issue 3: e082
Affiliations
Department of Pediatrics, Nationwide Children’s Hospital and the Ohio State University, Columbus, OH
Division of General Medicine and Clinical Epidemiology, UNC School of Medicine, Chapel Hill, NC
Division of Pediatric Hospital Medicine, Vanderbilt University School of Medicine, Nashville, TN
Inpatient Quality and Patient Safety, Monroe Carell Jr Children’s Hospital at Vanderbilt, Nashville, TN
Department of Family and Community Medicine, University of Missouri, Columbia, MO
Department of Pediatric Emergency Medicine at Sheik Zayed, Children’s National Medical Center, Washington, D.C.
Improvement Advisor, Associates in Process Improvement, Austin, TX
Author notes
Corresponding author: Thomas Bartman, MD, PhD, Nationwide Children’s Hospital and the Ohio State University, 700 Childrens Drive, Columbus, OH 43205; Ph: 614-722-2564; Email: Thomas.bartman@nationwidechildrens.org
                Article
DOI: 10.1097/pq9.0000000000000082
PMC: 6132811
                Copyright © 2018 the Author(s). Published by Wolters Kluwer Health, Inc.

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

History: 21 March 2018; 1 May 2018
                Categories
                Commentary
