
      Review Article: A Framework for a Safe and Ethical Healthcare System


            Safety and Ethics in Healthcare—A Guide to Getting it Right

            Bill Runciman, Alan Merry and Merrilyn Walton

            Aldershot, Ashgate Publishing, 2007, xxv+334 pp., US$59, ISBN 9780754644378 pbk

Modern healthcare is clearly beneficial for the vast majority of patients, and is safer and more effective today than it has ever been. Yet, for an unlucky minority, there remains a darker side to healthcare, involving devastating treatment‐related harm. In addition, large inequalities in the delivery of care exist throughout the world. The authors of this book offer one of the most comprehensive guides available for finding solutions to these important problems, and do so in terms of a well‐developed and researched theoretical, ethical and practical framework.1

            Harm and Healthcare

Avoidable harm caused by the process of healthcare itself, rather than by any underlying injury or disease, is called iatrogenic harm. Examples of such harm include surgery on the wrong side of a patient because of poor identification checks, brain damage due to a failure in oxygen supply during anaesthesia, and an adverse reaction to a patient inadvertently being given the wrong drug because of the many look‐alike drug names and labels which exist even in modern hospitals.

Most of these kinds of iatrogenic events are exacerbated by emergency conditions and by numerous environmental factors that create the conditions for an accident waiting to happen. Despite the rapid technological advances in almost every aspect of modern healthcare, the rate of iatrogenic harm remains unacceptably high and exacts enormous human and financial costs. At the national level, iatrogenic harm causes more deaths than AIDS or the road toll. It has been estimated that each year in Britain and the United States alone, hundreds of thousands of patients are injured during their treatment, tens of thousands are killed and billions of dollars are spent on additional care as a result of iatrogenic harm.2

Major national reports in the United States and Britain highlighting the problem and calling for action appeared over seven years ago, including the goal of reducing error in healthcare by 50% within five years.3 However, despite this, and despite high levels of professional and public concern, few significant improvements have been achieved in this time, and certainly nothing approaching the targeted 50% reduction in error.

            People, Systems and the Treatment of Patients

One of the largest sources of iatrogenic harm stems from poorly designed medical systems. Many high‐technology industries, such as nuclear power and aviation, have employed systematic approaches to safety almost from their inception. A plane crash or a major nuclear accident could kill and injure people en masse, thereby constituting a high‐profile disaster demanding an immediate and definitive response.4 Iatrogenic harm in healthcare, however, affects one patient at a time in a much lower‐profile way. Today, healthcare remains one of the last complex, high‐technology industries to begin adopting a systematic approach to safety. In the year 2000, the Institute of Medicine in the United States claimed that ‘healthcare is a decade or more behind other high‐risk industries in its attention to ensuring basic safety’.5

Historically, the dominant paradigm for safety in healthcare has focussed on the doctor–patient relationship. Safety experts call such a narrow, person‐focussed approach to safety the person approach, because it places the onus for safety squarely on the shoulders of the individuals immediately involved—individuals who will also be blamed for not trying hard enough if anything goes wrong. The trouble with this approach is that the rapidly increasing complexity and sophistication of medical technology is rendering it increasingly inadequate for maintaining acceptable levels of safety. Modern hospitals offer more effective treatments than ever before, but the sophisticated technology which makes these new treatments possible also comes with a host of new failure modes and features which can predispose clinicians to make mistakes.6 Despite their conscientious best efforts, doctors are human, and error is a statistically inevitable and non‐negotiable concomitant of being human. Thus, under the person approach, when mistakes are inevitably made, individuals will be blamed for carelessness, laziness or some other character weakness, and told to try harder to avoid error in the future. After such censure, typically little will be done to remove or redesign the error‐prone aspects of the work environment that precipitated the mistake or failure in the first place. The persistence of the person approach in healthcare safety therefore guarantees that similar errors will continue to be made, and that further conscientious individuals will be blamed for their carelessness.

One of the central points made by the authors of this book is that, to improve safety in healthcare, the focus of interventions needs to expand from individual doctors and patients to include the wider organisations and systems within which doctors work and patients are treated. Such an emphasis on systems has understandably been called the system approach to safety—an approach diametrically opposed to the person approach. Redesigning work systems is a great deal easier than changing error‐prone human nature. Hence, the system approach views accidents and failures as indicators of faulty work systems, and as opportunities to increase safety by permanently removing such sources of error from the work environment through appropriate redesign.

            Safe Systems Design

A dramatic example of the application of the system approach is the change in methods of administering a drug called vincristine. Vincristine is a safe and effective chemotherapy drug if administered correctly into a vein, but if injected into the spinal canal it is highly toxic, causes extreme pain and is usually fatal. Therapy with vincristine often involves the administration of a second drug called cytosine, which must be given by injection into the spinal canal. Since 1985, despite the grave dangers of vincristine administration being widely known, and many procedural guidelines being in place, approximately one patient a year in Britain alone has been killed or left paraplegic because the routes of administration for these two drugs were mixed up.7 Until very recently the response to these disasters was essentially consistent with the person approach: the doctors involved were blamed for their carelessness, usually suspended, or had manslaughter charges brought against them; safety procedures were re‐written to further underscore the potential dangers of vincristine administration; chemotherapy administration protocols were made longer and more onerous; and the hospital carried on operating in more‐or‐less the same way as before, until the disaster happened again somewhere in Britain, usually in about a year’s time. Clearly the person approach to the avoidance of this kind of medical disaster does not work. Without changing the physical systems involved in administering vincristine there will always be a non‐zero risk of cross‐connecting two similar‐looking syringes to two identical injection ports. Recently, the system approach has been applied to the problem of the safe administration of chemotherapy drugs like vincristine.8 A new tubing set has been developed for the administration of drugs into the spinal canal, involving a physically different connector such that a syringe intended for administration into a vein cannot be connected to it. Despite the simplicity of this idea, and the fact that similar safe‐design approaches are common in many other industries, such system approaches are relatively new in healthcare.

The use of physically different injection ports is an example of what is called a forcing function—that is, a safeguard that physically prevents undesirable or dangerous alternatives. Numerous other system‐based safeguards for increasing safety in healthcare are beginning to be adopted, including colour coding and bar coding (the latter recently endorsed by the United States’ Food and Drug Administration9). The introduction of such safeguards can allow the development of further safety features—for example, bar‐coded drugs can not only facilitate identity checking, but can also allow the introduction of smart alarms to indicate expired stock or even a known patient allergy to a drug. Effective system re‐design must target known problem areas in the work environment, must change the way systems operate in order to help rather than hinder the actions of clinicians, and should be followed up after implementation to demonstrate that the innovation has achieved these goals. Such a process of making local system improvements in response to known problem areas constitutes the first loop of what the authors call quadruple‐loop learning. However, for a safety initiative or safeguard to become universally adopted it must undergo the full quadruple‐loop process. This means the new safeguard needs to be accepted and used routinely at an institutional level (double‐loop), mandated at a professional or speciality level (triple‐loop) and endorsed by national and international regulatory bodies (quadruple‐loop).
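The logic of a forcing function can be expressed in software as well as in hardware. The following sketch is a hypothetical illustration of mine, not from the book: intravenous and intrathecal syringes are modelled as incompatible types, so that the dangerous connection is rejected outright rather than merely discouraged by procedure.

# A minimal sketch of a forcing function, assuming a simple object model.
# All class and method names here are hypothetical illustrations.

class IntravenousSyringe:
    """A syringe prepared for intravenous administration (e.g. vincristine)."""
    def __init__(self, drug: str):
        self.drug = drug

class IntrathecalSyringe:
    """A syringe prepared for injection into the spinal canal (e.g. cytosine)."""
    def __init__(self, drug: str):
        self.drug = drug

class IntrathecalPort:
    """A port whose 'connector' accepts only intrathecal syringes."""
    def connect(self, syringe: IntrathecalSyringe) -> None:
        if not isinstance(syringe, IntrathecalSyringe):
            # The analogue of a connector that does not fit: the dangerous
            # route of administration is made impossible, not just forbidden.
            raise TypeError("an intravenous syringe cannot connect to an intrathecal port")
        print(f"administering {syringe.drug} intrathecally")

port = IntrathecalPort()
port.connect(IntrathecalSyringe("cytosine"))       # fits: the intended, safe use
# port.connect(IntravenousSyringe("vincristine"))  # raises TypeError: forced out

In a statically typed language the mismatch would be rejected before the program ran at all—an even closer analogue of a physical connector that simply will not fit.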

In the mid‐1980s, a technology called pulse oximetry began this quadruple‐loop learning process. Pulse oximetry allows the continuous monitoring of the level of oxygen in a patient’s blood during anaesthesia—an important safeguard against failure in oxygen supply. To begin with, only individual anaesthetists used the new technology (single‐loop learning). Hospital managers at the time resisted hospital‐wide adoption of the technology on the grounds of cost, thus delaying the achievement of double‐loop learning. However, evidence was later published which strongly supported the effectiveness of pulse oximetry, and the Australian and New Zealand College of Anaesthetists mandated its use for every patient (thus achieving triple‐loop learning). Quadruple‐loop learning was achieved in 1994 when pulse oximetry was endorsed by the World Federation of Societies of Anaesthesiologists as part of an International Standard in Anaesthesia Safety. Few anaesthetists throughout the world would now consider it acceptable to conduct an anaesthetic without the use of pulse oximetry.

            Barriers to Change

One of the most common barriers to the introduction of safer systems in healthcare is the requirement by hospital managers for the demonstration of a return on investment (ROI). Aside from the fact that most people would agree that killing and injuring fewer patients is a good thing, hospital managers generally also want to know whether a safety initiative will save the hospital money. Two problems immediately present themselves in the case of ROI analyses in hospitals: firstly, although iatrogenic harm is generally considered to be very costly, little information is usually available on just how much of it will be avoided by the introduction of any particular safety initiative; and secondly, just how much is a saved human life worth anyway? Although precise figures are not available to answer either of these questions, the absence of information should not be used to dismiss safety innovations out of hand as too expensive. The follow‐up monitoring of safety systems, used to identify whether an innovation is working as intended, can also allow accurate assessment of the savings resulting from its impact on patient outcomes. This information can then form the basis of the savings side of a sound ROI analysis.10 Furthermore, the scope for safety improvement in many aspects of healthcare is such that safety innovations that are well designed and implemented should pay for themselves many times over in the medium‐to‐long term (for example, the new tubing set for the administration of vincristine and other chemotherapeutic drugs). Given this, it is perhaps surprising that so much organisational change currently occurs in healthcare that fails to achieve any positive outcome whatsoever.
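The arithmetic behind such an analysis is simple. The sketch below works through an ROI calculation with entirely hypothetical figures—the costs and incident rates are illustrative assumptions of mine, not data from the book—setting the savings from avoided iatrogenic harm against the cost of a safeguard.

# A minimal sketch of an ROI calculation for a safety initiative.
# Every figure below is a hypothetical assumption, not data from the book.

initiative_cost = 250_000.0       # one-off cost of new tubing sets and training
incidents_avoided_per_year = 2    # estimated incidents prevented annually
cost_per_incident = 500_000.0     # additional care, compensation, investigation
years = 5                         # horizon of the analysis

total_savings = incidents_avoided_per_year * cost_per_incident * years
roi = (total_savings - initiative_cost) / initiative_cost

print(f"savings over {years} years: ${total_savings:,.0f}")  # $5,000,000
print(f"return on investment: {roi:.1f}x")                   # 19.0x

On these assumed figures the safeguard repays its cost roughly nineteen times over; the hard part in practice is estimating how many incidents a safeguard actually prevents, which is exactly the information that follow‐up monitoring supplies.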

            Bad Organisational Change

The job of containing costs in a modern hospital is not an easy one, even at the best of times. New medical equipment and technology generally cost more each year, while patients presenting for treatment are often sicker and older than in previous years and require more costly care. By comparison, hospital budgets are essentially fixed, and in order for the hospital to keep functioning and stay ahead of the workload, managers must find ways to deliver more healthcare for each dollar spent. However, most managers in modern hospitals are not doctors, but come from a commercial or professional management background—a background that often presumes that the principles of effective management are the same whether one is dealing with retail sales or the treatment of patients. In fact, many hospital managers know little about medicine or healthcare, and view the organisation and organisational change entirely from the perspective of ‘the bottom line’. Unfortunately, an exclusive focus on the bottom line, and a frequent lack of understanding of the downstream clinical consequences of short‐term cost‐cutting decisions, often leads to the creation of false economies. For example, managers may change the supplier of a particular drug or piece of equipment in order to save money, only for clinicians to discover that the cheaper product is inferior or unsuitable for a large number of patients, requiring stocks of the old product to be bought in again, and leaving stocks of the new, cheaper product to expire unused.

As the authors also point out, another form of change that is a favourite with managers, politicians and bureaucrats in response to problems in healthcare is to restructure the entire organisation from the top down. Restructuring is a very expensive process, involving the hiring of new layers of management in order to draw new lines of command throughout the organisation, reorganise funding streams, produce new job descriptions for staff, and interview and hire essentially the same people into different posts. Of 20 hospitals in Australia surveyed over a six‐year period, 12 had undergone restructuring once, and four twice. The process disrupts the routine operation of the entire hospital for months and significantly damages the morale of staff for years afterwards. Worse, evidence suggests that restructuring hospitals achieves nothing in terms of increased efficiency, despite this being the express reason given by management for initiating such turmoil in the first place. A further irony of the restructuring process is that ROI analyses are rarely, if ever, carried out after such a large‐scale investment in organisational change.

In order to reduce bad organisational change, the authors suggest that managers, like every clinician working in healthcare, should undergo appropriate certification. This would mean that ‘generic’ managers would no longer do in healthcare: only those who had demonstrated sufficient knowledge of the relevant aspects of clinical care, by completing a certification process, would be allowed to manage a hospital. With lives at stake, such a suggestion does not seem unreasonable.

            Inequalities in the Delivery of Care

The spheres of influence affecting the delivery of healthcare can be organised into concentric circles around the patient, with each layer representing influence or interaction at a greater distance (these layers are related to the levels involved in quadruple‐loop learning, mentioned earlier). The authors deal with the ethical and safety issues at each layer in turn; I have so far touched on the clinician, team and organisation layers. However, perhaps some of the most significant ethical and system problems in healthcare exist at the governmental and international layers.

Large disparities exist between, and within, nations in the quality and availability of healthcare. Often the delivery of healthcare to those who need it becomes delivery to those who can afford it. Less obvious, however, are the ways in which the methods of funding healthcare influence its delivery, even when adequate funds are available. In the United States, where publicly funded healthcare is not universal, 47 million Americans have no health insurance, and therefore have limited access to healthcare.11 Ironically, this is despite the fact that the United States spends more money per capita on healthcare than any other country in the world—approximately US$500 billion a year. In addition, those who do have insurance are often over‐treated because of the commercial imperatives of many healthcare providers. Over‐treatment is a problem that exists in most developed nations to some degree; it involves patients undergoing procedures which will benefit them little or not at all, but which unnecessarily expose them to the risks of treatment while consuming a limited resource needed by others.

Any consideration of the health of nations cannot ignore the fact that a significant contributor to the burden of death and suffering in the world is war. Not only is war incredibly destructive of good health, but it is also very expensive. Military spending in the United States rivals its expenditure on health, at US$450 billion a year. In addition, military budgets appear to be a great deal more flexible than healthcare budgets, arguably demonstrating considerable irrationality in the allocation of government funds. In 2007 the United States spent US$43.5 billion on spying alone, an amount believed to have increased by 50% since 2001. The 2007 figure is roughly equivalent to the entire national economy of Qatar or Croatia, and 10 times larger than the amount spent on spying by the United States’ closest ally, Britain.12

            One third of the United States’ military budget would meet the total requirement for aid around the globe set by the World Health Organisation, and would allow the world’s poorest countries to cut poverty in half by 2015 and eliminate it by 2025. Clearly scope exists to implement a global strategy to reduce the money wasted on wars and redirect it to more productive ends, such as reducing poverty and providing infrastructure and healthcare.

            In Conclusion

The benefits of modern healthcare are numerous and profound. However, imperfections exist in many aspects of the technology of healthcare and in the bureaucracy of its governance. Traditional person‐centred approaches to safety are becoming increasingly inadequate in the face of the complexity and sophistication of modern medical technology, and new, more powerful system‐based safeguards are needed to avoid the large human and financial costs of iatrogenic harm. In addition, millions of people do not have access to healthcare, even in the world’s wealthiest countries. At the national and international levels, considerable waste exists in terms of expenditure on ineffective organisational change, over‐treatment of the wealthy, and war. This book provides a rare overview of the difficulties affecting the safe and ethical delivery of healthcare at all these levels. Ultimately the message of the book is optimistic, because it highlights many areas where change is needed and demonstrates that large‐scale change is possible. Achieving a more effective healthcare system is not so much a matter of needing more resources as of allocating existing resources more rationally, within the larger framework developed in this book.

            Notes


1. I am a colleague of the second author of this book and we have co‐authored papers in the area of safety in healthcare.

            2. Institute of Medicine, To Err is Human—Building a Safer Health System, National Academy Press, Washington, DC, 2000; Department of Health, An Organisation with a Memory—Report of an Expert Group on Learning from Adverse Events in the NHS, Stationery Office, London, 2000.

            3. The Quality of Healthcare in America Project, initiated by the United States Institute of Medicine, includes as one of its goals, the reduction of error throughout healthcare by 50% in five years. See: Institute of Medicine, op. cit. See also the British national report: Department of Health, op. cit.

            4. The world’s worst nuclear power plant accident at Chernobyl in 1986 killed 31 people and injured 299 in the first two weeks. The total number of subsequent deaths resulting from cancer and birth defects remains unknown. For accounts of disasters and the responses to them in the nuclear power and aviation industries see: N. Schlager (ed.), When Technology Fails—Significant Technological Disasters, Accidents, and Failures of the Twentieth Century, Gale Research, Detroit, 1994. For a systematic comparison of the safety strategies in the nuclear power industry with those in one of the most safety conscious branches of medicine, anaesthesia, see: C. S. Webster, ‘The nuclear power industry as an alternative analogy for safety in anaesthesia and a novel approach for the conceptualisation of safety goals’, Anaesthesia, 60, 2005, pp. 1115–22. Years before the well‐known, non‐fatal nuclear power plant accident at Three Mile Island in 1979, the United States experienced their first fatal nuclear power accident in 1961—see: W. McKeown, Idaho Falls—the Untold Story of America’s First Nuclear Accident, ECW Press, Toronto, 2003.

            5. Institute of Medicine, op. cit., p. 5.

            6. N. B. Sarter and D. D. Woods, ‘How in the world did we ever get into that mode? Mode error and awareness in supervisory control’, Human Factors, 37, 1995, pp. 5–19; C. S. Webster and D. J. Grieve, ‘Attitudes to error and patient safety’, Prometheus, 23, 2005, pp. 253–63; C. S. Webster, ‘Why anaesthetising a patient is more prone to failure than flying a plane’, Anaesthesia, 57, 2002, pp. 819–20; L. R. Wiener, Digital Woes—Why we Should not Depend on Software, Addison‐Wesley, New York, 1993.

            7. C. S. Webster, ‘Doctors must implement new safety systems, not whinge about them’, Anaesthesia, 57, 2002, pp. 1231–2.

            8. C. J. Lanigan, ‘Safer epidural and spinal connectors’, Anaesthesia, 57, 2002, pp. 567–71.

            9. Anonymous, ‘Bar code label requirement for human drug products and biological products’, Federal Register, 69, 2004, pp. 9119–71.

            10. C. S. Webster, ‘The iatrogenic‐harm cost equation and new technology’, Anaesthesia, 60, 2005, pp. 843–6.

            11. C. Paddock, ‘47 million Americans without health insurance—census report’, 29 August 2007. Available at: http://www.medicalnewstoday.com/articles/80897.php.

            12. ‘The secret’s out—US spends NZ$57b on spying’, 1 November 2007. Available at: http://nzherald.co.nz, keywords ‘US spends spying’.

            Author and article information

Journal
Prometheus: Critical Studies in Innovation
Pluto Journals
ISSN 0810-9028 (print); 1470-1030 (online)
June 2008, Vol. 26, No. 2, pp. 179–185

Article
DOI: 10.1080/08109020802061094
Copyright Craig S. Webster

All content is freely available without charge to users or their institutions. Users are allowed to read, download, copy, distribute, print, search, or link to the full texts of the articles in this journal without asking prior permission of the publisher or the author. Articles published in the journal are distributed under a Creative Commons Attribution 4.0 licence (http://creativecommons.org/licenses/by/4.0/).

Categories
Reviews
