The precautionary principle was originally an axiom of scientific forestry, according to which one should harvest only as many trees as will be replaced. Georg Ludwig Hartig first advanced the principle in Germany at the dawn of the Industrial Revolution. Concerns about the potential consequences of exploiting natural resources also exercised his British contemporaries, the classical political economists Thomas Malthus and David Ricardo. Together they were completing the domestication of the concept of ‘Nature’, which the Greeks had portrayed as the indifferent if not erratic dispenser of human fate. However, once the Christian deity in whose image humans are created stood above Nature, the tables started to turn. And once Francis Bacon invented what we now call the ‘scientific method’ in the early seventeenth century, Nature’s own fate was explicitly placed in the hands of humans, who were encouraged to experiment to get Nature to reveal its secrets. Since that time, humanity has put Nature on permanent trial. Arguably one downstream effect is anthropogenic climate change. Might this not reveal that Nature is wreaking its revenge?
Already in the early nineteenth century, Malthus and Ricardo were debating this prospect. On the one hand, Malthus argued that if we don’t respect Nature by living within its means, we ourselves will be – and have been – part of its cull. Malthus inspired Charles Darwin’s formulation of the principle of natural selection, though Malthus himself – an Anglican pastor with a strong Calvinist streak – interpreted Nature’s agency as the hidden hand of God. The precautionary principle’s focus on the need to maintain a state of ‘equilibrium’ with Nature comes from this line of thought. On the other hand, Ricardo presumed that necessity is the mother of invention, such that we might innovate our way out of any resource constraints by substituting the fruits of our minds for Nature’s fruits. Thus, Nature’s revenge will never be enough to overcome our ingenuity. Here Ricardo was advancing a version of the ‘proactionary principle’, a cornerstone of contemporary transhumanism (Fuller and Lipinska, 2014). Implied is a conception of efficiency whereby we produce more wealth by disembedding ourselves from Nature.
Over the past two centuries, capitalism has largely followed Ricardo’s lead, which explains why wealth is increasingly ‘weightless’; that is, based more on the flow of ideas than on the actual making of things. However, the energy needs of the computer-based infrastructures that nowadays sustain this weightless world are massive, resulting in the high levels of risk to health and environment that focus more precautionary minds.
Enter Daniel Steel, an analytic philosopher who has written a very boring book about this very interesting and important topic. That Cambridge University Press published the book may be explained by the fact that about half of it has already appeared in some of the best publication outlets in analytic philosophy. But to the reader unimpressed by such things, the book reads like a slightly warmed-over doctoral dissertation, with all the scholasticism, repetitiveness and citation overload that one might expect. Perhaps the worst feature of the book, however, is its intellectual myopia. A reader who comes to this book ignorant of the precautionary principle might be forgiven for concluding that it was invented by analytic philosophers, who in truth are no more than tails wagging the dogs who make the relevant policies.
This point becomes especially clear in the final chapter, which consists of three contemporary policy-based case studies in which precautionary intuitions are in play. They concern the conditions for climate change mitigation, the introduction of bovine growth hormone into agriculture and the general regulatory strategy towards chemical innovations. Each case turns on a comparison of the guiding intuitions of policymaking bodies in two parts of the Western world. Predictably perhaps, American policies always suffer in the comparison. Nevertheless, it is really the only chapter in the book worth reading, unless one is inordinately interested in the minutiae of analytic philosophy. Thus, I will use it as the basis for my critique of the precautionary principle.
It is worth mentioning that the book’s analytic apparatus is hobbled by a preoccupation with discerning the sort of ‘values’ that are involved in the promotion of science and technology – and especially what ‘value neutrality’ might mean in this context. This is to get matters the wrong way round: the nature of ‘value neutrality’ is the only discussion about values worth having. ‘Value neutrality’ should be understood as a set of ‘meta-values’, the rules of the game that everyone agrees to obey, regardless of their other value interests, which will invariably govern their style of play. Any concerns about values should be invested in this initial discussion, after which people and corporations should be left to be driven by whatever other interests they might have – as long as they agree to play by the rules of the game. This is the biggest take-home lesson of the methodology that John Rawls (1971) adopted in A Theory of Justice to derive the principles of a just society half a century ago. He glossed ‘value neutrality’ as ‘fairness’, which suggests that he understood the game-like character of any constitutional framework that aspires to present itself as ‘just’.
Considerable debate ensued over the welfarist values that Rawls had packed into his own constitutional principles, the most powerful counter-proposal to which was Robert Nozick’s (1974) libertarian Anarchy, State and Utopia. This period is widely regarded as the high-water mark in twentieth-century Anglophone political philosophy. In retrospect what is most striking about the debate was its focus on attitudes towards risk. Those who gravitated to Rawls’ principles of justice as the basis for any just social order were quickly branded by their opponents as ‘risk averse’, or as we would now say, ‘precautionary’. The Rawlsians favoured minimising harm over maximising benefit under conditions of uncertainty. Thus, they had no problem with the idea of a ‘welfare safety net’, whereas their opponents decried the idea’s underlying ‘paternalistic’ or ‘nanny state’ sentiments, which would inhibit the taking of risk altogether, notwithstanding its centrality to the entrepreneurial spirit and scientific and technological innovation more generally.
Rawls’ continuing influence over moral and political thought has given forward momentum to the precautionary principle. As ensconced in the legal framework of the European Union, it targets any ambitious innovation, since significant uncertainty is bound to attach to its consequences, be they good or bad. One institutional outcome of this risk-averse mentality has been the introduction of risk assessment as a process separate from risk management, a distinction by which Steel sets great store. At stake is the difference between judging risks before something happens (assessment) and after it happens (management). Not surprisingly, when you invest a lot in judging risks before something happens, you tend not to allow it to happen. While ignorance is rarely bliss, knowledge of ignorance often turns out to be hell.
To understand what this means, imagine that you propose a highly falsifiable hypothesis of the sort that Karl Popper encouraged scientists to test: a hypothesis which, if falsified, would overturn an established orthodoxy. For Popper, the resulting disruption would be a good thing, constituting a major learning experience for all, regardless of the specific fate of the orthodoxy. But for the precautionary policymaker, such disruption is to be avoided at all costs, and so s/he would refuse to test the hypothesis. Steel would no doubt claim that this example caricatures the precautionary principle, since the cognitive turmoil produced by falsifying a hypothesis is not quite the same as the sort of disturbance that might result from introducing a new chemical, which could go beyond the mere liberalisation of the scientific field or even the ‘creative destruction’ of the market. It could amount to a full-blown destabilisation of the ecosystem.
This is a good point at which to turn to Steel’s preferred specification of the precautionary principle, as drawn from his final policy-oriented chapter. For the sake of both clarity and convenience, I summarise the main features of his specification as follows:
1. If we already know that something causes adverse effects to the environment, even if it is unlikely to be the thing that causes the most relevant adverse effects, we should nevertheless endeavour to stop it.
2. In the case of climate change, if the cost of realising the most plausible worst case scenario is higher than the cost of taking the most aggressive measures now to prevent it from happening, then those measures should become policy.
3. If animal welfare is included in society’s utility function, then we should not genetically modify farm animals if there is some chance that their lives might be harmed, even if there is little chance of harm to the humans who would consume such animals.
4. The burden of proof should be placed on industry to show that the introduction of a new chemical will not cause harm to the environment rather than force the state to conduct tests on the chemical’s impact once it has already been brought to market.
The first thing to observe is that these are all things that left-of-centre people say in polite company and in newspaper columns in response to current events. Indeed, I am one of those people. But are they sufficient to define the philosophical horizons of the precautionary principle? Point 1 seems harmless enough, unless of course it ends up anchoring environmental policy more generally, as arguably happened once pollution became the lens through which ecology started to be seen in the 1960s. Point 2 is also prima facie reasonable, yet it does presuppose that those who would bear the so-called ‘costs’ of ‘the most plausible worst case scenario’ – namely, future generations – would judge them the same way we do now. The difference, of course, is that while we imagine a future world much worse than our own, our children would already be living in it without ever having lived in our own supposedly better world. Point 3 is also superficially unobjectionable, though here one would want a clearer specification of the harm threshold for farm animals. Would the precautionary principle also call for rolling back various husbandry techniques that have already become part of animal breeding? I shall conclude with a more extended discussion of point 4, as that could seriously impede the advance of science and technology – and arguably already has in Europe, where this version of precaution has carried greatest legal force.
It is worth recalling that the constitutional model for the European Union is medieval Christendom (Siedentop, 2000). I mean to include not simply Christendom’s signature commitment to natural law (read: human rights and the freedom of mobility) combined with considerable discretion over implementation at the local level (i.e. ‘subsidiarity’), but more importantly that the EU follows Christendom in its attitude towards potential threats to the constitution. And here is where the precautionary principle comes into its own. Take alchemy, which represented one of the greatest threats to Christendom in the Middle Ages, as it aspired to violate the metaphysical principles on which Christendom itself rested, most notably a clear distinction between what God can create (Nature) and what humans can create (the application of Nature). Specifically, alchemists wanted to transmute base metal into precious metal and synthesise life from non-living matter. The Church forbade such metaphysical monstrosities – even without any clear evidence of their occurrence. It was engaged in ‘risk assessment’, as the precautionaries would now say. But this only served to drive the potentially offending alchemists underground, resulting in rumours of achievement followed by persecutions and often imprisonment, if not death.
Yet no one was any the wiser because the evidence base was confused, generating a climate of fear surrounding innovation for many centuries, as the Church’s precautionary stance discouraged all parties from reporting accurately what had and had not been achieved. Indeed, the word ‘innovative’ itself acquired positive connotations only in the nineteenth century, prior to which it was synonymous with ‘monstrous’ (Godin, 2015). Strong adherence to the precautionary principle could well return us to those days. Indeed, a sign of our times is that the phrase ‘responsible innovation’ is in fashion, suggesting that unless precaution is taken, innovation may not be responsible to society. Yet the converse also holds: society may not act responsibly towards innovation unless it adopts the opposite of precaution – the proactionary principle (Fuller, 2018, chapter 7). At stake here is nothing less than humanity’s self-understanding as ‘modern’, since what has made us modern is our propensity to treat uncertainty as a source of opportunity rather than fear.