How we assembled this supplement
Our interactions during the Colloquium led to an outpouring of creative and innovative
thought about improving healthcare. We re-explored and reframed old ideas, and came
up with new ones. The process reminded many of us of the way Native American elders
were said to go about reaching decisions: ‘Talk and talk until the talk begins.’1
And importantly, we recognised both explicit and implicit invitations to take action,
as we probed the various ways in which people could build on and use such knowledge.
In putting together this supplement, we decided, in general, not to follow the traditional
procedure of publishing the inputs: the scripted material participants prepared beforehand,
and presented in the initial days of the meeting. Instead, with a few exceptions,
it seemed closer to the spirit of the meeting to assemble the outputs: the collective
explorations, insights and syntheses that emerged from our intense and productive
week together. Accordingly, we invited participants, as individuals or in small groups,
to write about the ideas from the meeting that changed their thinking the most and
that they felt would be important to share. Many chose to do so in a longer (∼2500-word)
format: serious, scholarly, well-documented articles written to be as accessible as
possible to a general readership. Several opted for a shorter (∼800-word) format that
captured ‘ideas in evolution’: thinking that seemed too interesting and important
to lose, even though it was not yet fully worked out. All of the submitted manuscripts
were peer-reviewed.
Collective wisdom that emerged in the course of the meeting
The resulting papers fell naturally into six structural groups:
structure of improvement knowledge;
discovering and defining sources of evidence;
social determinants of action;
importance of cross-disciplinary work;
challenges of professional education;
rethinking methods of inference.
What follows is a summary of the key elements expressed in the cluster of papers that
came together in each group.
Structure of improvement knowledge
Frank Davidoff, in Systems of service: reflections on the moral foundations of improvement,
contrasts the ways in which ‘evangelists and snails’ think and work to serve patients
better.2 Both are motivated by the professional commitment to ‘unceasing movement
towards new levels of performance,’ yet each is convinced that their approach is more
acceptable on moral grounds. Both approaches are arguably essential, which presents
us with the challenge of combining two orthogonal approaches without losing the identity
or unique value of either. He suggests that rapprochement may not only be possible,
but may already be under way.
In a short piece, Davidoff, in Heterogeneity: we can't live with it, and we can't
live without it, also notes that, if it is to be useful, knowledge for improvement
must accept the value of both ‘homogeneity’ and ‘heterogeneity’ in the effects, populations
and diseases we work with.3 This will require attention to our language, categories,
methods and rules of inference.
Paul Glasziou and colleagues, in Can evidence-based medicine and clinical quality
improvement learn from each other?, invite us to recognise that efforts to learn ‘the
right thing to do’ (ie, be informed on evidence-based medicine), and to ‘do the right
thing’ (ie, apply that knowledge reliably in system-level, data-driven quality improvement)
are two sides of the same coin in producing the best possible healthcare.4 We will
need to express their complementarity, and integrate them, at every level: in the
care we provide, the classes we teach and the assessments of excellence that we make.
Unfortunately, this global vision is not yet widely understood; nor is it widely implemented.
John Øvretveit, in Understanding the conditions for improvement: research to discover
which context influences affect improvement success, suggests that studies of improvement
interventions might be more useful if the details of the intervention itself, its
actual implementation and the context into which it is introduced were described more
completely and clearly, including precise accounts of what actions were taken to carry
out which changes.5 If differences in outcomes were due to the context of the improvement,
and if the intervention changed as implementation progressed, what were those differences
and those changes?
Rocco Perla and Gareth Parry, in The epistemology of quality improvement: it's all
Greek, observe that since Aristotle we have known that ‘How do we really know?’ (or
‘What is true knowledge?’) is a complex question, the answer to which involves both
knowing and believing. Improving the quality, safety and value of healthcare demands
careful attention to both dimensions; scientific advancement of the improvement of
healthcare will demand unflinching and open consideration of ‘how we know.’6
Laura Leviton, in Reconciling complexity and classification in quality improvement
research, reminds us that pioneering naturalists such as Darwin have long recognised
the value of disaggregating and engaging in deeper exploration of the ‘parts.’7 The
care of patients and its improvement invite attention to the ‘wholes’ which are formed
by the synthesis and integration of the (often) better understood ‘parts.’ She describes
the potential power of a yet-to-be-developed taxonomy for the elements or ‘parts’
of improvement interventions, their outcomes and their contexts. Development of that
taxonomy might progress by collecting and describing exemplars, identifying essential
elements for classification, pattern matching and never-ending reflective rematching
in practice.
Discovering and defining sources of evidence
Ross Baker, in The contribution of case study research to knowledge of how to improve
quality of care, argues that case studies can, and often do, offer unique insights
into the novel aspects of phenomena central to the improvement of healthcare: the
adoption of innovation, boundaries between professional groups and team learning processes,
for example.8 Such studies use both qualitative and quantitative data about improvement
in context; they can inform the development of more robust theory that links problem,
intervention and outcome. He notes further that case studies are particularly important
in understanding why or how things work in real life, rather than in theory. They
are, however, methodologically demanding and require particularly careful collection
and analysis of data from diverse sources.
Duncan Neuhauser and colleagues, in The meaning of variation to healthcare managers,
clinical and health-services researchers, and individual patients, note that the classical
work by Shewhart, Deming and others focused sharply on exploring and understanding
unwanted variation as a key to redesigning a healthcare system with the highest possible
quality, safety and value.9 They observe that managers, researchers and patients/care
givers are each trying to answer different questions as they work with unwanted variation.
They describe and illustrate some of the methods available for each group as they
struggle with the problem of variation.
Bo Bergman and colleagues, in Five main processes in healthcare: a citizen perspective,
offer a ‘citizen's eye’ framework of healthcare at the macro level.10 Seeing things
from this perspective invites attention to the relation between disease and the lived
experience of illness and its prevention; between the process of delivering care and
receiving it. Meaningful improvement is fostered by having different ‘eyes’ view the
processes involved.
Social determinants of action
Ann Langley and Jean-Louis Denis, in Beyond evidence: the micropolitics of improvement,
suggest that specific improvement efforts will usually fail unless they take into
account the pattern of interests, values and power relationships that surround them.11
The inescapable conclusion here is that successful implementation of improvement programmes
requires an understanding of organisations as political systems, and management of
the relationships, particularly the power relationships, that are involved. The authors'
extensive experience in observing improvement in action has allowed them to (1) recognise
the distribution of costs and benefits among patients, providers, organisations and
society; (2) see a variety of value systems and interests at work; and (3) appreciate
that most changes for improvement have both a hard scientific core and a soft, context-specific
and largely social periphery.
Mary Dixon-Woods and colleagues, in Problems and promises of innovation: why healthcare
needs to rethink its love/hate relationship with the new, note that the evaluation
of innovation is often too narrowly focused to understand the system-wide effects
of new practices or technologies.12 They describe three paradoxes of ‘the new’ that
are often present and which illustrate the situation: (1) the all-too-common uptake
of the dubious, and rejection of the good; (2) the wisdom and failings of democracy
as the remedy for such misplaced judgements; (3) the law of unanticipated consequences:
improvement requires change, but change always generates more challenges. Recognising
that a different approach may be needed, these authors consider asking different questions:
(a) What is the evidence that the intervention can and does improve outcomes in other
settings, recognising that the art and science of generalisation are inherently difficult?
(b) What training and support systems will be needed before an improvement intervention
is introduced, in order for it to realise its full potential? and (c) How should we
monitor the introduction of that intervention? In short, they invite us to look before,
after, up, down and to the sides, with each innovation.
Joanne Lynn and colleagues, in Clarity and strength of implications for practice in
medical journal articles: an exploratory analysis, empirically examined two literatures
for practitioners—clinical and management—and found two different norms for recommending
action.13 The majority of original articles from three leading healthcare clinical
journals (68.6%) simply stated that one intervention was (or was not) different from
another in its effects. Reports in these journals directed a particular action (‘therefore,
x should be done’) only 25.5% of the time. Only one article gave further instruction
on how to implement the changes. Two-thirds of the reports called for further research.
Half used tentative language. In contrast, original reports in management journals
nearly always specified who should use the information, drawing from over 60 types
of potential users, whereas only 23.5% of reports in the clinical journals explicitly
named the targeted agent, and then overwhelmingly targeted only physicians or clinicians.
They conclude that authors and editors of the clinical literature should consider
testing clearer, more direct and more consistent ways to present the implications
of research findings for practice, perhaps using as models the structured methods
employed in certain clinical guidelines.
Joanne Lynn, in Building an integrated methodology of learning that can optimally
support improvements in healthcare, suggests that we lack methods for building the
knowledge we need to guide true healthcare reform.14 She argues that we assume we
can use the sources and methods that, in the past, have helped us build the enormous
body of evidence for the efficacy and safety of clinical tests, drugs and procedures.
Unfortunately, those sources and methods do not ‘take us all the way’ to reform; from
her example of decreasing falls in older people, it is easy to see their limits. Our
conventional methods of accumulating evidence can help us understand which medications
are likely to have which effects, but they cannot help us match individual patients,
and the settings in which they live, with the optimal tools for clinical management.
She notes further that there is an almost unending set of ‘comment points’
between health policy and the improvement of care, and challenges us to find the voice
to make those comments.
Importance of cross-disciplinary work
Jean Bartunek, in Intergroup relationships and quality improvement in healthcare,
calls our attention to intergroup dynamics, such as those that are associated with
social identity, that enable communities of practice and that contribute to the formation
of professional identity.15 These factors all allow healthcare professionals to gain
a sense of mastery and joy in work, but at the same time can be important sources
of isolation, friction and inefficiency. She suggests that fostering dual identities
for workers prepared in different professions can contribute to better intergroup
relationships, as can fostering communities of practice, and making explicit the positive
examples of cross-professional groups as part of the professional socialisation process.
Molly Cooke, in Expert patients—learning from HIV, describes her own journey of moving
from a mindset of ‘patients versus providers’ to one of ‘patients and providers versus
the burden of illness.’16 She notes that this transition can free up enormous energy
and can generate deep satisfaction for both patient and provider.
Don Goldmann, in Ten tips for incorporating scientific quality improvement into everyday
work, provides a ‘nuts and bolts’ guide to incorporating solid, carefully planned
improvement initiatives into daily clinical work.17 Drawing colleagues from a wide
spectrum of disciplines into the work of developing and testing explicit operating
principles such as these can make it easier to study, maintain, extend, replicate
and report improvements achieved.
Charles Vincent and colleagues, in Multidisciplinary centres for safety and quality
improvement: learning from climate change science, suggest that bringing together
representatives of diverse professional disciplines geographically, intellectually
and socially is likely to create important and entirely new ways of improving the
quality, safety and value of healthcare.18 But achieving robust, effective cross-disciplinary
groups is a demanding task. It will require the development of contexts that drive
towards practical goals, enjoy stable financial support, attract thoughtful people
from traditional settings and career pathways, and sustain them in a new and seemingly
alien environment. A model that illustrates the value of this approach may be found
in centres that have coalesced around the issue of climate change.
Challenges of professional education
Molly Cooke and colleagues, in Mainstreaming quality and safety: a reformulation of
quality and safety education for health professions students, note several developments
that will be necessary to bring learning about healthcare improvement into the mainstream
of health professions education.19 First, improvement must be seen as part of all
clinical encounters. Second, students and their teachers must become co-learners as
they collaborate to improve the care they are giving and learning about. Third, improved
quality and safety must be seen as arising from interdependent work among professionals
rather than the knowledge and skills of individual practitioners. Fourth, outcome
assessments must focus less on what individual learners know and can do, and more
on how care teams' patients fared, and how well system improvements actually worked.
Items 3 and 4 in particular offer the opportunity to explore the promises that underpin
interdependent work: promises to patients about the performance of the care system,
its outputs and the roles of individual providers within that system; and promises
to coworkers about reliable performance of one's own work in relation to that of others.
Rick Iedema, in Creating safety by strengthening clinicians' capacity for reflexivity,
observes that real-time care giving is a complex event in which providers interested
in better safety must reflect and reflexively act.20 He points out that in situ videography
makes it possible to learn and reflect on the work and on the reflexive actions that
are embedded in these real situations of healthcare. This process can enable practitioners
to question their own habits in a way that can impact on who they are and how they
relate.
Rethinking methods of inference
John Øvretveit and colleagues, in Increasing the generalisability of improvement research
with an improvement replication programme, suggest that purposeful, studied replication
of improvement programmes is the most direct way to increase the generalisability
of improvement strategies, albeit a demanding task.21 Meaningful replication requires
careful description of the context as well as the intervention, noting the adaptations
made as the intervention unfolds, and as repeated tests of the same intervention are
carried out in different and diverse settings.
Lloyd Provost, in Analytic studies: a framework for quality improvement design and
analysis, challenges us to recognise that traditional statistical inference, as found
in ‘enumerative’ studies, makes possible actions that are applicable only to the system
that was studied, and as it was when it was studied; the time dimension is essentially
lacking in such inference.22 The results of ‘analytic’ studies, in contrast, apply
to actions on systems under the changed conditions in which they exist at future times.
Since change over time is essentially the defining characteristic of improvement,
the design, execution, evaluation and reporting of improvement thus require an analytic
approach. We are really just beginning to understand the profound implications of
this reasoning, and profit from those insights.
Steven Goodman, in Confessions of a chagrined trialist, observes that ‘everything’
changes when an intervention is intended to affect individual or group performance,
rather than patient biology.23 He notes that we live and build knowledge in our own
cognitive and experiential frames. Fostering meaningful cross-frame experiences can
increase mental agility; reflection on personal experiences, stories and responses
can open curiosity and change the questions asked.
A synthesis
The human reality of healthcare is easy to lose in the proliferating jungle of inanimate
technical wonders that are looked to increasingly as the way we will ‘really’ get
better healthcare. But the wisdom captured in the discussions at Cliveden suggests
that we will continue to be deeply disappointed if we expect biological wizardry and
technical fixes, for all their power and value, to do the job by themselves.
More importantly, this wisdom asserts that because healthcare is, at its core, a giving
and receiving by sentient human beings, as individuals and in social groups, the real
power for improvement will lie in mastering the complex realities that drive,
and that inhibit, human performance, professional behaviour and social change. For
example: the expression of individual and group self-interest; the ways that people
assert power and control; the strength of group identity and communities of practice;
the mysteries of context and its influence; the moral assumptions that underlie methods
of evaluation; the importance of belief, as well as understanding, in knowledge; the
strengths and limitations of ‘group-think,’ also known as democracy; and the enormous
and mostly untapped power of cooperative, cross-disciplinary learning and action.
In short, we need to modulate our magical thinking about the value
of tools and techniques by seriously entering into the ‘alternate universe’ of Aristotelian
phronesis—becoming capable of action with regard to the things that are good for humankind.24
It seems unlikely that we can distil a single overarching principle from the wealth
of thought that came out of the meeting at Cliveden, and it might even be counterproductive
to try. But if we were to make the effort, the result might look something like this:
‘Even at its most scientific and technical moments, the provision of healthcare is
always—always—a social act.’
So much more seems possible as traditional boundaries of thinking are extended, as
context assumes its legitimate place of importance, as we explicitly recognise the
benefits of psychology, sociology and other disciplines, and integrate those disciplines
better into our ways of caring and learning as health professionals.
So many possible actions emerge—for example: revise the curricula of health professional
education; lobby research publishers and funders; develop and appoint leaders capable
of using these ‘sciences of improvement’; and so many more.
How can we best ground, develop and nourish the vitality of these efforts at building
and applying knowledge, while simultaneously obtaining the leverage needed for this
much change?
Though it was developed outside this Colloquium, Shneiderman's description of the
collaboration-centred socio-technical systems needed to study integrated interdisciplinary
problems in the real world is relevant here. He called such systems ‘collaboratories.’25 A ‘collaboratory’
around the scholarship and science of improvement for graduate study in a variety
of relevant disciplines seemed timely as one important way to explore the formal advancement
of the science of improvement in the ‘real world.’
A proposal for action: a new training programme based on the multiple epistemologies
informing healthcare improvement
Over the years, the application of biomedical science has illustrated the benefits
of having not only expert clinical practitioners but also scholarly leaders from other
disciplines committed to pushing back the boundaries of knowledge. The improvement
world has yet to realise this benefit at scale. Improvement is still regarded by some
as the domain of the enthusiasts, evangelists even; light on theory, and even lighter
on hard, peer-reviewed evidence. But improvement can and should be rigorous and systematic,
and, as illustrated by the series of articles in this supplement, it does have its
own growing body of empirical evidence to guide practice. What it does not yet have
is an adequate number of academic leaders, theoreticians and empiricists, driven by
a spirit of enquiry, who can extend our understanding of what works where and why—the
intellectual tools we need to improve patient care. This is not the kind of science
practised in darkened rooms or in pristine laboratories. It is a highly applied science;
it deals with the complex, messy problems in the ‘swamps’ of the real world, rather
than the well-formulated hypotheses of the academic world. The tools at its disposal
are equally complex. Its development requires scientists to have a deep understanding
of the environment within which their work is applied and an intimate relationship
with both the practitioners and those who use the service.
In many countries, we will discover handfuls of such people, most of them self-trained,
who have found their way into the improvement world more by accident than by design.
If improvement science is to flourish, we argue that the next generation of improvement
science leaders will need to be developed in a more purposeful way. The Health Foundation,
an independent charity based in London, England, is rising to this challenge. In late
2010, it launched a new training scheme which aims to produce the future leaders of
improvement science in the UK. Our vision is that these individuals will, within 5–10 years,
be leading many of the developing partnerships between higher-education institutes
and health services; they will be bringing together academics, clinicians and managers
from across sectors and disciplines in a common endeavour to develop the knowledge
base that underpins improvement.
To ensure the quickest and safest return on investment, this scheme will in the first
instance be aimed at postdoctoral scientists and scholars. Applicants will need a
track record of high-quality, applied research in the field of quality of care and
formal training in any discipline that makes a useful contribution to the science
of improvement. Given the applied nature of improvement science, it is likely that
many, but not all, will have experience of providing service in either a clinical
or managerial role. The duration of the fellowship will be at least 3 years, and it
will comprise not only a programme of research, but also opportunities to become expert
in all aspects of improvement science, and develop the highest calibre of leadership
and influencing skills.
Successful applicants will be hosted by academic institutions with a track record
of support for postdoctoral students, a reputation as a leader in the field of quality
of care and improvement research, and effective existing partnerships with local healthcare
services. All of the host institutions will work closely together to ensure a sustainable
learning environment that benefits the training fellows collectively as well as individually.
The Health Foundation is committed to supporting learning across national boundaries,
and to this end has established an international network of leading experts in the
field of improvement science. In addition to local supervision and mentorship from
the host institution, the fellows in this new programme will have access to this network
of experts, and will build international collaborations to help develop the knowledge
base of improvement.
If the scheme is successful, we will see within the next decade a growing and highly
influential cohort of leaders of improvement science in the UK. In parallel, the Health
Foundation would like to see similar schemes established in other countries, operating
on the same model, accessing the international network and contributing to a collective
global endeavour to strengthen improvement science in their health sectors. No one
among us underestimates how difficult it will be to attract the brightest of talents
to a new specialty, encourage a genuine shared understanding between disparate academic
disciplines and successfully align the array of incentives in the academic and health
sectors. The challenges are great, but the potential benefits are even greater.
Conclusion
We submit that it is knowledge—both knowing what and knowing how, episteme and techne;
knowledge that we will continue to seek, build, share, use, assess, recognise and
reward—that enables (and constrains) what we can do to improve the quality, safety
and value of healthcare. Further, it is our belief that the work done in preparation
for, during and following this meeting is only the beginning of unending and ever-expanding
future work towards that knowledge. We hope this invites others to the journey we
have shared so far.