Our stories are fundamental to our humanity. They connect us to each other. They help us to make sense of ourselves and the world around us. Long before the emergence of the written word, we were telling our children stories, myths and fables. These stories invite our children into a make-believe world to provoke their creativity and imagination. The modern fairy tale is an especially powerful kind of story, one that we frequently invoke as a vehicle for teaching our children critical lessons in human morality. In contemporary culture, it is often through fairy tales that we are introduced to central ideas that will stand us in good stead in later life, particularly if we learn their lessons well. Through these promissory narratives, we begin to learn about the differences between good and evil, that the world is filled with heroes and villains, that things are not always what they seem, and about the power of hope and courage through which our heroes may overcome adversity to triumph over the forces of darkness. The power and appeal of contemporary fairy tales lie in their apparent simplicity, the clarity of their moral message and, of course, their happy endings. We derive comfort and reassurance from the highly stylized world that our modern fairy tale characters inhabit, helping us to navigate our own fears in the face of an unknown and uncertain future.
As the twenty-first century unfolds, we find ourselves in a situation of unprecedented and unexpected instability, which the covid-19 pandemic and the Russian invasion of Ukraine have reinforced and exacerbated. The rapid pace at which our increasingly powerful, sophisticated and ubiquitous networked digital technologies have emerged and entrenched themselves into almost every domain of life has contributed to our present condition. Yet these very technologies also form the focus of a contemporary fairy tale that serves to assuage our trepidation and anxieties about what they might portend. It tells of a beautiful future in which networked digital technologies, powered by data-driven computational systems, solve the grand challenges and collective action problems that threaten our continued survival. In this imagined future world, there is no more poverty, hunger, disease, disability, conflict or climate change. It is a world of superabundance. Not only are all our basic needs met, but all our desires are fulfilled even before we have gained conscious awareness that we possess such desires in the first place. But if we are to secure this happy ending, we must overcome our deepest doubts and fears, faithfully upholding the central tenets of this modern fairy tale. This Digital Enchantment exhorts us to hold firm to its teachings, even when the evidence suggests that by so doing we hasten our own demise. Yet it is this very enchantment to which many, including contemporary politicians and policy-makers, have long been in thrall, with destructive consequences that we are only now beginning to acknowledge.
The Digital Enchantment
So what is the Digital Enchantment? It is a fictional tale, peddled primarily by technology entrepreneurs and technology industry representatives, which succeeded in capturing the imagination of many contemporary policy-makers from the early to mid-1990s onwards as the early internet emerged. At the heart of the story are the remarkable and wondrous powers of digital innovation, capable of solving intractable social problems, with an accompanying moral message exhorting its audience to recognize the importance of leaving the market free and unfettered, thereby establishing and maintaining the optimal conditions in which innovation can flourish. The Digital Enchantment rarely appears as a single, coherent narrative, appearing instead as a recurring set of assumptions, beliefs and claims which occur in a fragmented fashion in the public pronouncements of technology industry executives and policy-makers. The account offered here is my own partial and subjective interpretation, drawn from direct observation, experience and expertise as both an active participant in high-level policy discussions and negotiations concerning digital policy and regulation at the national, European and international levels, particularly over the last five years, and as a lawyer and legal scholar with extensive expertise in the regulation and governance of emerging technologies. Throughout these discussions, a series of connected claims and underlying beliefs regularly recur, organized around three core tenets which, when threaded together, offer a compelling modern fairy tale: (1) digital solutionism; (2) the absence of ill-effects doctrine; and (3) the celebration of unfettered innovation as one of the noblest and highest callings of the present age. The following discussion outlines each of these tenets, offering examples of public claims made in support of them.
It then proceeds to unpack each tenet to demonstrate why they are based on alluring, fictional half-truths, which purport to offer reassurance in the face of our growing unease and anxiety about the kind of future the digital revolution might portend, while providing clear guideposts to inform our response to that unfolding revolution. The final section of this paper argues that the relationship between innovation and legal regulation is far more nuanced and complex than adherents of the Digital Enchantment would have us believe. If we are to preserve and nourish the political and moral foundations of democratic freedom, then we must work towards permanently dispelling the hold the Digital Enchantment exerts over the minds of many policy-makers in favour of a richer, deeper and more clear-eyed understanding of the power and perils posed by the novel capacities of our networked digital age.
The first element of the fairy tale, ‘digital solutionism’, is a variant of the more general concept of technological solutionism. Technology commentator Evgeny Morozov describes technological solutionism as a belief that the internet and smart devices can be depended upon to solve personal and social problems through the sharing and analysis of data (Morozov, 2013). Digital solutionism refines and builds upon this belief. 1 Not only do networked digital solutions offer us reliable solutions to almost any problem through the sharing and analysis of data but, according to the Digital Enchantment, their promised benefits are championed by advocates in a manner which implicitly suggests that these benefits will instantly materialize, successfully addressing thorny social problems in a straightforward ‘plug and play’ manner. These beliefs are littered throughout claims made by the CEOs of global technology companies 2 and their chief computer scientists, 3 feature in the promotional materials of global consulting firms peddling a wide range of technology ‘solutions’ to organizational clients, 4 and are championed with great conviction by technology industry representatives. For example, Digital Europe (the leading trade association representing digitally transforming industries in Europe 5 ) claims not only that digital technology will ‘solve social problems’ and ‘major challenges’ ranging from grid-scale energy storage to a universal flu vaccine, dementia treatment and brain decoding (IndustryWired, 2019), but that in so doing it will ‘provide Europe’s people with competitive jobs, better health and better public services’ while creating ‘digital inclusion, green growth, trust … that drives prosperity and creates benefits for European society’. 6 Similarly, a thought leadership paper commissioned by IBM refers to AI as ‘the engine that will power the next age of human progress’ such that ‘failure to participate is no longer a viable business option’ (Linthwaite, 2020).
These beguiling promises have, in turn, been enthusiastically taken up by national policy-makers and international policy organizations.
For example, the UK’s Department for Business, Energy & Industrial Strategy (2021) claims that,
[u]sing AI and data, there is an opportunity to accelerate medical research in early diagnosis, leading to better prevention and treatment of disease. Within 15 years better use of AI and data could result in over 50,000 more people each year having their cancers diagnosed at an early rather than late stage. This would mean around 20,000 fewer people dying within 5 years of their diagnosis compared to today. This mission aims to put the UK at the forefront of the use of AI and data in early diagnosis, innovation, prevention and treatment. Success in this mission is one of a number of steps towards saving lives and increasing NHS efficiency by enabling earlier diagnosis and reducing the need for costly late stage treatment. The opportunity – working with academia, the charitable sector, and industry and harnessing the power of AI and data technologies – is considerable. It should lead to a whole new industry of diagnostic and technology companies which would drive UK economic growth. 7
The World Economic Forum (WEF) proclaims that blockchain can ‘enable greater trust and transparency through decentralization, cryptography, and the creation of new incentives’, potentially replacing ‘expensive and inefficient payment systems’ and ‘reshaping supply chains’ (World Economic Forum, 2022) while AI is already ‘helping to feed the hungry while saving the planet’ (Caine, 2020). Hence national governments and regional policy-makers are exhorted by WEF to establish policies that reflect their fundamental importance:
We will need to use every tool at our disposal, and with AI becoming more powerful every day we should encourage more innovators and entrepreneurs to focus on new ways to use this technology to address our biggest societal challenges. (Caine, 2020)
The no ill effects doctrine
The second element of the fairy tale is the presumption that digital technologies created with the intention of solving social and personal problems will not, and do not, produce any (or any significant) unintended ill effects. This ‘no ill effects’ doctrine is rarely stated as a positive claim. Instead, it typically takes the form of an underlying assumption reflected in, and evidenced by, the Digital Enchantment’s portrayal of digital technologies, which are presented almost exclusively in a one-sided, celebratory light, omitting (or at least very substantially downplaying) the possibility that their use might generate any adverse or unwanted impacts for individual users, those they interact with, the public at large or the social, political, economic and cultural environment in which they are deployed and intended to operate. Not only are the material impacts of digital technologies ignored or downplayed in this narrative (Crawford, 2021), but so too are the intangible impacts resulting from data-driven systems that rely upon the large-scale collection and processing of personal data.
These invariably implicate concerns about privacy and are typically dismissed in one of four ways: (a) by asserting that the personal data in question have been ‘voluntarily’ made available by the relevant individual; (b) by asserting that whenever one individual can be observed by another, it makes no difference whether that other is another person or an automated surveillance machine that carries out precisely the same task, albeit on a much more reliable, consistent, systematic and continuous basis; (c) by asserting that collecting and processing data from automatically tracking online interactions are essential to ensuring the quality and integrity of service, facilitating continuous improvement; or (d) by asserting that the practical impossibility of precisely quantifying these intangible costs means they are best ignored or merely lumped together as ‘other considerations’ in estimating the net benefits of digital innovation (Véliz, 2021, pp. 58–64; see also Janssen et al., 2022). While these responses to individual privacy concerns – namely, that you have given your data up already, that nothing has changed, that your data are necessary to maintain and improve the service, and that privacy costs are not quantifiable and so are best set aside – may be far from convincing, at least they acknowledge that the taking of other people’s personal data and the continuous collection, storage (and often sale) of data about online user behaviour may not be universally welcome. At their most extreme, beliefs in the Digital Enchantment dismiss concerns about intangible harms and wrongs associated with the operation of digital technologies as trivial and irrelevant. This was nicely summed up by the CEO of Sun Microsystems, Scott McNealy, when asked about the privacy safeguards being contemplated for the company’s forthcoming new technology (Jini), which would enable consumers to communicate and share processing resources with each other. McNealy curtly responded, ‘You have zero privacy anyway. Get over it’ (Sprenger, 1999).
Unfettered innovation as a basic right which must not be ‘stifled’ by regulation
The third tenet of the Digital Enchantment offers a partial concession to social reality by recognizing that digital services might produce unanticipated adverse impacts. It sits somewhat uncomfortably with the second tenet, which either denies the existence and significance of any adverse impacts associated with digital innovation altogether or dismisses them as unproblematic. But narrative coherence has never stood in the way of a good fairy tale. This third element is composed of two strands. First, it posits that digital innovation is an intrinsically valuable good of such paramount importance that it must be protected as sacrosanct (Ford, 2017). Secondly, it assumes that digital innovation is best nurtured by unrestrained market forces, in which individual technology entrepreneurs are left free to act as vital agents of change. To the extent that there might be any unintended and unwanted impacts associated with digital technology innovation, these are inevitable and must be accepted with good grace as the unavoidable price of social progress. For example, in response to the UK government’s recently proposed competition framework and the Online Safety Bill, aimed at tackling harmful online content, including material relating to terrorism and child sexual exploitation and abuse, the Institute of Economic Affairs issued a briefing paper warning that, if introduced, these laws would be counterproductive, stifling innovation and much-needed investment, to the detriment of ‘entrepreneurs, start-ups and consumers’ (Hewson, 2021). Similarly, several organizations representing the European robotics industry issued a strongly worded press release opposing the EU’s proposed AI Act, claiming that, if enacted, it would inflict ‘severe damage’ on European small businesses, robotics companies and innovation, imposing burdens, slowing the speed of innovation and disrupting supply chains (Business Wire, 2021).
Common to these howls of protest is a lack of any meaningful recognition that these proposed laws are intended to protect the public from harms and wrongs arising from the use of these technologies, and that the rights and interests of others placed at risk by digital technologies are legitimate and worthy of protection. Yet the relationship with digital innovation that we are called upon to cultivate is more than a begrudging ‘you gotta break a few eggs to make an omelette’. Rather, our relationship to innovation should aspire to be something more akin to romantic devotion. The hero of this story is the private entrepreneur who, despite the occasional flaw, courageously risks his (and it is typically a him) labour, capital and energy in the noble quest to solve problems and eradicate diseases while offering us new adventures and a larger pie. This call to worship at the shrine of innovation characterizes the innovator’s freedom to experiment as a basic right that accrues to all those involved in the innovation process, irrespective of their shape and size, ranging from the amateur hacker through to the global digital technology titan. While the origins of this putative liberty right might be traced to the computer enthusiast’s freedom to tinker with software programs created by others, the scope and content of the digital innovator’s right to experiment is far more wide-ranging. It encompasses a presumptive right to appropriate any data about the location and behaviour of any object, creature or person, at any time and anywhere (extending beyond planet Earth into the reaches of outer space) that can be brought within the reach of the digital innovator.
Any resistance or push-back from those who regard such attempts as intrusive incursions is best understood as a temporary aberration: it is only by breaking existing norms and practices that the digital entrepreneur can enter previously unoccupied territory – all of which lies available for colonization and appropriation in the service of building wonderful new digital products and services. Eventually, so the story goes, this initial resistance will give way as sceptics start to appreciate just how important and indispensable these services are. Meanwhile, our entrepreneurial hero must contend with the multiplicity of villains who seek to obstruct his path. Anyone wishing to impose limits on the innovation process or the innovator’s resulting inventions, whether policy-maker, academic expert or civil society activist, must be understood as an enemy of progress. To espouse any such limits is to be ‘anti-innovation’ – a fearful, unenlightened Luddite, blind to possibilities (Ford, 2017, p.3). Meanwhile, technology firms and entrepreneurs can be relied upon to act ‘ethically’ simply because they are the good guys on the side of the angels. The critical lesson is that the innovation process must be left free and unfettered, summed up in the familiar mantra beloved of the digital technology industry: ‘regulation stifles innovation’.
Threaded together, the three tenets of the Digital Enchantment combine to produce a compelling narrative that celebrates the almost magical power and indispensable value of unfettered data-driven digital innovation that is capable of seamlessly solving all our problems, large and small. If only we have faith in data and the power of the innovation process, particularly when opposition is encountered and the prospects appear perilous and uncertain, we can look forward to a beautiful future in which our societies no longer need to endure hunger, thirst, illness, disability, poverty or the destructive impacts of climate change. As individuals we will enjoy an abundance of personalized services and conveniences that will perfectly meet our every need.
The Digital Enchantment: fact vs fantasy
The Digital Enchantment bears all the hallmarks of a modern fairy tale. It features a hero who leaves the safety and security of all that is familiar to embark upon a noble quest during which he must confront and slay various dragons which threaten to crush his entrepreneurial spirit, chief among them being the ostensibly benevolent but ultimately short-sighted policy-maker who seeks to impose legal limits on the scope of creative, innovative activity or its resulting technological inventions. It offers us a strong, clear narrative grounded in stark binary distinctions between good and evil, heroes and villains, hope and despair, while reminding us of the vital need to trust in the courage and capacity of the hero to stay the course of his fateful journey in order to emerge triumphant over the dark forces that seek to thwart his honourable mission. At its foundation, it is a morality tale, exhorting us to trust in the power of digital innovation, which is best supported in free and unfettered markets, even in the face of clear evidence of unwanted consequences and ill effects. This tale is, of course, a fantasy. That is precisely what fairy tales are. If we agree that fairy tales and fantasies enrich and inspire us, then why should we worry about the Digital Enchantment?
The problem with fairy tales is not that they are fantasies. On the contrary, the power of the fairy tale often draws from the grains of truth upon which it may rest. In this respect, the celebration of innovation which lies at the heart of the Digital Enchantment is not without foundation. As Cristie Ford reminds us, much of the value we place in innovation is ultimately rooted in that most cherished of goods – human creativity. She comments that most of us would no doubt prefer a world that is constantly being remade by human innovation to one that is shackled to the eternal wheel of unchanging fate (Ford, 2017, p.7). Rather, fairy tales create problems only when we fail to recognize that they are just fantasies: when we take them at face value, fail to recognize the problematic stereotypes they perpetuate, and act out our fantasies in a world that bears very little resemblance to the enchanted world that exists only in our imagination. Like so many contemporary fairy tales, the Digital Enchantment necessarily offers a grossly over-simplified, cartoon version of reality. The real world and the real people who inhabit it are far messier, more complex and uncertain, and often far more venal and self-interested. As human beings, our character traits, motivations and interests are considerably more complicated, multiple, conflicting, nuanced and impure. If we fail to distinguish the real from the imaginary, we will inevitably find ourselves in serious trouble.
The following section examines how our real-world experience of contemporary digital technologies is at odds with each of the Digital Enchantment’s tenets. It draws primarily on examples from the covid-19 pandemic, which provoked digital innovations of various kinds that promised to reduce the spread of the virus or mitigate adverse impacts as entire populations were required or exhorted to stay at home. However, it also draws on recent experience of other digital innovations, notably blockchain technologies and ‘live’ facial recognition technologies, both of which have been accompanied by bold promises of positive social transformation.
Digital solutionism: promises without proof
Digital solutionism, with its emphasis on the remarkable power of more and better data, offers us something almost magical in power: with a quick sprinkle of digital pixie dust, new data-driven services can be relied upon instantly to fulfil our needs, easily parachuted into any social, organizational or cultural context. It conjures up images of a perfect, multi-talented digital assistant who is at once both a digital Jeeves – a well-trained butler who anticipates and unobtrusively relieves us from the multiple mundane tasks of everyday life – and the ultimate data scientist, devising global solutions that will eliminate poverty, hunger, disease, global warming and every other serious intractable challenge that we have failed to address satisfactorily. Yet the Digital Enchantment does not require the production of any evidence that these technologies will in fact deliver on their grand promises. In other words, we are invited to assume not only that digital solutions have the capacity to solve our problems but that, as night follows day, the new networked digital systems and services that do emerge will in fact do so, and do so reliably, responsively and without tears.
But experience of contemporary networked digital technologies indicates that these assumptions are often unfounded. Consider, for example, the proliferation of covid-19 apps, originally touted as a critical component in the management of the pandemic. Considerable time, effort and resources have been devoted to the development of these apps in many countries, many of which were intended to automatically track, trace and notify individuals that they had come into contact with carriers of the deadly virus in order to contain and stem its spread. Yet a meta-review by a team of University College London researchers found no evidence that automated contact tracing is effective, whether in identifying contacts or in containing epidemics (Braithwaite et al., 2020). 8 Our experience of covid apps serves as a vivid reminder that, even if a digital app can accurately, automatically and reliably identify whether two smart devices have been in close proximity and notify the relevant device owners, this provides no guarantee whatsoever that the notified persons will self-isolate to prevent passing on the virus to others. As the British experience reveals, without adequate social and economic support, many Britons could not afford to self-isolate, yet the government offered them nothing. Even if we assume that covid apps will be widely taken up and function as intended, consideration of how people would, in reality, respond and act when they receive a contact notification was, at least in Britain, ignored almost entirely, with devastating consequences (Costello, 2021). Instead, and despite the best of intentions and the effort involved in their creation, covid-19 apps generated little more than surveillance without solutions (Algorithm Watch, 2020).
A similar dynamic of overpromising and underdelivering has accompanied other contemporary digital innovations, including blockchain. The level of hype that has accompanied the potential of blockchain (or ‘distributed ledger technologies’) is difficult to overstate. Advocates claim that blockchain will radically alter the distribution of social and economic power by obviating the need for third-party intermediaries (which have conventionally taken the form of powerful institutions such as nation states and multinational financial intermediaries) through reliance on a peer-to-peer, automated and cryptographically secure digital ledger guaranteeing the validity of transactions between strangers (Yeung, 2019a). As one commentator put it, blockchain represents ‘a shift from trusting in people to trusting in math’ (Antonopoulos, 2014). These beliefs fuelled the dramatic rise and popularity of cryptocurrencies, which their advocates, including Twitter founder Jack Dorsey, claimed could establish economic equality and world peace because of the decentralization they promise for the financial system (Dorsey, 2021). In this utopian narrative, cryptocurrencies would allow individuals to reclaim and enforce rights over their money themselves, relying on each other rather than the state to manage the security and form of their money, thereby gaining individual economic sovereignty and massively improving their lives and economic wellbeing by liberating themselves from the unwelcome and oppressive constraints imposed by the state (Au, 2022). Yet in November 2022 the crypto exchange FTX (a marketplace for crypto investors to buy, sell and store digital assets) collapsed dramatically, resulting in FTX’s founder and billionaire chief executive, Sam Bankman-Fried, being charged with fraud, conspiracy to commit money laundering and violating campaign finance law (Rushe et al., 2022). 9 The transactions that were publicly visible on the crypto-ledger disguised the real-world fraud that lay behind the digital veneer, reflecting the inescapable truth that records on a database, however decentralized, do not guarantee that those records bear any relation to real value, let alone the underlying facts which they purport to record (Yeung, 2021; Warzell, 2022).
The failure of both covid apps and cryptocurrencies to deliver on their bold promises can be attributed to an underpinning digital solutionist mindset. This mindset steadfastly ignores the nature of the contact and interaction between digital technologies and the social world they are intended to inhabit, which researchers have demonstrated is typically unruly, context-specific and unpredictable. Digital solutionism reflects a fundamental failure to distinguish between the functional performance of technology and its operational effectiveness in real-world contexts. It is one thing to configure a digital tool to perform a designated range of functions, but quite another to ensure that this performance will translate into concrete benefits via effective action capable of solving complex social problems. This distinction is vividly illustrated in early experiences of live facial recognition technology (FRT), which law enforcement authorities in self-described democratic states are keen to embrace, claiming that the technology will enable them to ‘catch terrorists’ and ‘find missing children’ in open public spaces. Is there a single case in which FRT has led to either of these outcomes? Early trials of the technology by the London Metropolitan Police, entailing the capture and automated processing of images of over half a million faces of individuals as they went about their lawful activities in public and without their informed consent over the course of ten trials, yielded just nine arrests, primarily for minor drug-related and property offences (Fussey and Murray, 2019). In short, the mere fact that FRT software is capable of matching facial images of individuals to those stored in a watchlist does not necessarily or easily result in the successful and lawful apprehension of dangerous individuals.
The underlying failure of digital solutionism to recognize the vital and inescapable social dimensions that make it possible for many digital technologies to operate effectively in real-world applications is exemplified in the ideology of automation accompanying the rise of networked digital technologies. Although typically portrayed as ‘eliminating’ human labour, conjuring up images of seamless, pain-free task performance, automation often serves to transform labour, concealing it from view while the human tasks essential for mechanical systems to function as intended are rendered more mundane yet mentally, emotionally and often physically strenuous (Munn, 2022, pp.11–12). This is especially true of data-driven technologies that rely on supervised machine learning algorithms, a reliance that has spawned a growing data annotation industry projected to reach US$13.3 billion in market value by 2030 (Guo, 2022). These data labellers are often low-paid contract workers in the developing world, who keep the worst of the Internet out of our social media feeds by manually categorizing and flagging posts, improving voice recognition software by transcribing low-quality audio, and helping robotic devices and vehicles to recognize objects by tagging photos and videos (Hao, 2022). Astra Taylor (2018) rejects the term ‘automation’ because of its basis in the myth of human obsolescence, preferring the term ‘faux automation’. Judy Wajcman (2017) observes that, despite claims that digital technologies will facilitate ‘less work’, they serve in practice to facilitate ‘worse jobs’. Despite the rhetoric of ‘seamless integration’, anyone who has experienced first-hand the process by which a new IT system is adopted and implemented across a large organization is likely to appreciate that the gap between rhetoric and reality is very considerable.
Even leaving aside the technological work involved in designing and configuring a general IT system to meet the particular needs of any given organization, enormous human and organizational effort is also needed to overcome the formidable challenges associated with successfully migrating from one system to another. This requires considerable hard graft: wrestling with tensions and conflicts between competing needs, interests, values and priorities, and accommodating existing organizational practices and domain-specific cultural norms of appropriateness. The dissonance may be less visible in consumer-facing digital technologies, whose design trades on the fantasy of digital solutions that can be taken up on a ‘plug and play’ basis, but it should not be assumed that the mere fact that a software programme can fulfil the technological tasks it is configured to undertake, and do so automatically, will necessarily ‘solve’ the problem it is meant to address.
The invisibility or dismissal of ill effects
Another remarkable feature of the rush to create covid apps during the initial period of the pandemic was the belief that, even if these apps failed to help contain the virus, there was no harm in trying. As one machine learning expert and computer scientist colleague remarked early on in the pandemic when expressing his enthusiasm for the British covid app development project, ‘Why not throw the digital kitchen sink at it?’ The naivety of his response was extraordinary. The entire foundation of the UK’s national covid app project was essentially concerned with building an effective population-wide state-sponsored surveillance regime. While effective and accurate public health surveillance is critically important in pandemic management, as evidenced by the strategies adopted by the handful of states that managed successfully to contain the virus, the failure of my colleague even to recognize that surveillance necessarily compromises individual and collective privacy provided a stark reminder of the extent to which the second limb of the Digital Enchantment – that well-intended data-driven technologies have no ill effects – remains alive and well. The contrast between our approaches to the development and oversight of digital and pharmacological interventions in response to the covid pandemic could not be more stark. Our institutional regimes for ensuring that new drugs and vaccines (including covid-19 vaccines, which have been produced at unprecedented speed) are both safe and efficacious are based on careful monitoring and rigorous scientific evidence; these regimes are well established and command widespread respect and public confidence. By contrast, our lack of meaningful oversight, or of any obligation to gather evidence of effectiveness whatsoever, in relation to digital interventions highlights the serious blind-spot in the Digital Enchantment’s ‘no ill effects’ doctrine.
The adverse impacts of digital technologies are often intangible in nature, concerned with losses to important social, political and legal values and principles, including respect for individual rights and freedoms (Liu, 2021). These impacts are rarely directly observable and may not manifest as concrete material harm, in contrast to the effects of other new technologies (such as new drugs and therapeutics), which we now widely recognize may generate serious (and sometimes fatal) adverse impacts on human health, safety and the environment, and which are hence subject to extensive legal regulation and oversight. Although the intangible nature of adverse impacts makes them no less real or serious, it makes them easier to overlook. Another common manifestation of the ‘no ill effects’ doctrine is the ‘equivalence fallacy’ that routinely appears in claims about the value and effectiveness of automated digital solutions when compared with existing approaches that often require considerable human input and interaction. In this view, our existing technologies and practices are to be evaluated solely by reference to their capacity to perform a narrowly defined designated function. Accordingly, if an automated data-driven solution can perform the relevant function so as to produce the desired outcome with results as good as those of an existing approach, it should be considered ‘equivalent’. Yet, as Ibo van de Poel has argued, technological artefacts do not simply fulfil their function; they also produce all kinds of valuable and harmful side effects beyond the goals for which they have been designed or for which they are used. As a result, values enter into our evaluation of these technologies (values relating to safety, sustainability, human health, welfare, human freedom or autonomy, user-friendliness and privacy), all of which matter for moral reasons, often because they enable or contribute to people’s ability to live a good life.
He observes that there are usually alternative ways to achieve a specific end use, but that the alternatives typically differ both in their capacity to meet that end and in their side effects, and hence in the values by which we evaluate those side effects (Van de Poel, 2009).
In evaluating technologies aimed at enhancing children’s educational performance (whether psychopharmacological, digital or otherwise), we cannot ignore their side effects. This is equally true of the massive levels of energy needed to power the cryptographic mining that supports blockchain technologies. It is also true of the threats which the use of live facial recognition technology (FRT) by law enforcement authorities in public settings presents to the basic liberty of individuals, including the rights to freedom of assembly, to engagement in democratic processes and to individual self-expression, without fear of image data being extracted, collected and algorithmically processed. Yet it is precisely this side-lining of side effects that appeared throughout the covid crisis in the turn to remote learning in the face of prolonged lockdowns and school closures, much to the delight of the booming EdTech industry. The fallacy of the logic underpinning the equivalence notion is revealed in the incredulity expressed by former New York governor Andrew Cuomo when he publicly questioned why physical classrooms still exist at all (Fleming, 2021). For Cuomo, the purpose of a classroom is merely to provide a physical space in which educational material is delivered to students. If automated EdTech services can produce learning outcomes for students that match those of students taught in conventional classrooms by human teachers, then they can be understood as ‘equivalent’.
Yet, if lockdown has taught us anything, it is that children have a vital need for face-to-face play, interaction and learning within a community of students, led, supported and guided by teachers. These embodied interactions are critical in nurturing children’s social, intellectual, physical and emotional health and development. This is not to deny that online teaching and learning tools have been indispensable during lockdown, but the unthinking belief that school attendance is primarily if not exclusively concerned with academic knowledge and skill development reflects a woefully impoverished understanding of what schools and classrooms are for. Cuomo’s sentiment reflects a larger failure to recognize the value and importance of the relational, human dimensions of our conventional teaching practices and institutions. Worryingly, it may reflect a wider failure to acknowledge the dehumanizing impacts that digital technologies may unintentionally promote. Hitting the ‘play’ button to activate a children’s storytime podcast at bedtime is not ‘equivalent’ to being read a story while sitting in a parent’s lap. The value of the stories-at-bedtime ritual is not derived primarily from the value of the story, but from the regular, reassuring parental attention that this nightly ritual sustains.
Regulation and innovation
At the heart of the Digital Enchantment is the paramount value accorded to data-driven innovation, of such vital importance that it must be permitted to flourish freely, wherever the entrepreneurial and creative spirit leads, subject only to the discipline of market forces. The UK’s rush to dismantle hard-won legal protections to ‘unleash national prosperity’ (promised by then Prime Minister Boris Johnson following Brexit) offers a shining exemplar of the portrayal of regulation as the enemy of innovation, reflected in proposed reforms to British data protection law. In its consultation document, Data: A New Direction, the sponsoring minister states, ‘our ultimate aim is to create a more pro-growth and pro-innovation data regime whilst maintaining the UK’s world-leading data protection standards’ (Department for Digital, Culture, Media and Sport, 2021). Yet the proposed reforms are almost entirely directed towards removing existing data protection standards. The document avoids any reference to individual privacy and data protection as a human right, and fails to acknowledge that the origins of modern data protection law lie in recognition that the systematic storage of, and access to, personal data were critical enablers of the Holocaust, allowing millions of ‘undesirables’ to be readily identified, rounded up and incinerated with terrifying efficiency (Yeung and Bygrave, 2022).
But concerns about human rights and values are of little importance to those who worship at the shrine of innovation. For them, the underlying logic is simple: innovation is intrinsically good, and benefits us all. Regulation constrains the freedom to innovate and is therefore bad. Accordingly, digital innovation (as a particular species of innovation) must not and should not be regulated – the bad cannot be allowed to prevail over the good. This simplistic account, which is reinforced by the ‘no ill effects’ doctrine, is as pernicious as it is mistaken in its portrayal of both regulation and innovation. Not all innovation is socially valuable. Some innovators are malign and many innovations produce damaging consequences for individuals and for society, often driven by motivations that we would not wish to encourage or endorse. For example, the creation and automated distribution of child porn, misinformation, violent imagery, extremist content and the hacking and hijacking of digital services and systems can all be understood as forms of digital innovation, yet are clearly not the kind that any civilized society should wish to embrace. Our aim should not be to encourage all innovation, but only that which is socially beneficial.
The Digital Enchantment’s portrayal of ‘regulation’ is similarly problematic. By replacing the word ‘regulation’ with the word ‘law’, we find ourselves declaring that ‘law is bad’ because it restricts freedom. Quite apart from the fact that many laws are facilitative and power-conferring rather than freedom-restricting, even if we confine our attention to laws that restrict freedom this does not mean they are inherently bad. Although there may be specific laws for which reform is desirable, few but the most radical of anarchists would advocate a world without law. The dangerous and flawed logic upon which the Digital Enchantment’s portrayal of regulation rests, particularly in the field of digital technology development, can be readily exposed. Without law, there is no basic security, and without basic security, there is no freedom. It is the rule of law that makes civilized society possible, enabling peaceful, trustworthy social cooperation in a community of strangers. Law and our legal institutions are so foundational to the functioning of modern societies that we are prone to taking them for granted, failing to recognize that our legal norms and institutions contribute the vital glue by which the social contract is held together, pursuant to which individual citizens forgo some of their freedom in exchange for the state’s guarantee of basic security (Loughlin, 2000). This necessarily requires the law to establish and enforce norms and institutions for holding persons both prospectively and retrospectively responsible for the impact of their conduct on others. This is precisely why laws are introduced for regulatory purposes: to protect regulatory beneficiaries from harm caused by the activities of others.
The value of regulation is vividly revealed in the dumping of raw sewage into seas and rivers around the UK more than 770,000 times over the course of 2020 and 2021, resulting in the closure of at least 90 beaches to prevent the public from swimming in unsafe water during the height of summer (Horton, 2022a). This on-going British sewage scandal has been attributed to the current government’s refusal to enact laws that would have substantially reduced the dumping of raw sewage into seas and rivers and to ensure that the Environment Agency is adequately resourced (Horton, 2022a, 2022b). The purpose of legal regulation is (among other things) to constrain activities that violate the rights of others, limiting the power of public and private actors alike and serving as a critical safeguard against harm and other serious abuses of power. It is only by setting and enforcing limits of acceptable conduct and practice that every one of us enjoys the freedom that respect for the rule of law makes possible, confident that others will not pose unacceptable threats to our safety, security and freedom (Yeung, 2017).
Once again, the covid pandemic has illuminated the critical importance of enforceable laws to safeguard against the abuse of power, which governments in many countries have sought to side-line on grounds of the public health emergency. Yet even these so-called ‘states of exception’ do not entitle those in power to do whatever they wish (Greene, 2018). We have witnessed a deeply troubling and reckless disregard for basic legal norms in supposedly liberal democratic countries, with governments engaging in conduct that reeks of corruption. For example, in the early months of the pandemic, the UK government established a ‘VIP fast lane’ for tenderers of protective equipment contracts. This resulted in the contacts of ministers, MPs, peers and officials being ten times more likely than others to win contracts. Meanwhile, the price of personal protective equipment (PPE) sky-rocketed: even body-bags were being charged at fourteen times their previous cost (Toynbee, 2021; Jones, 2021). It was only a British-based crowd-funded judicial review application by the Good Law Project challenging the government’s failure to comply with laws requiring the publication of government contracts that brought some accountability to bear on these dishonourable dealings. When those in power cannot be trusted to act ethically, the law is one of the few remaining institutions to which we may turn when political accountability is left wanting. In other words, although regulation often imposes constraints on freedom, portraying the relationship between regulation and innovation as that of the ignorant strongman crushing the creative genius of the innovator is hopelessly one-sided and unduly simplistic.
Dispelling the Digital Enchantment and beyond
The innovation-worshipping Digital Enchantment offers an unduly simplistic account of the nature of innovation, of regulation, and of the relationship between the two. Yet its appeal has proved difficult for policy-makers to resist. The millions of dollars spent by Big Tech on lobbyists have succeeded in ensuring that policy-makers have remained in thrall to the Enchantment’s spell far longer than might have been expected. Over the last four decades, media portrayals have welcomed Big Tech as a catalyst for positive change, thereby reinforcing policy-makers’ willingness to embrace digital solutionism (Atkinson et al., 2019). As recently as 2010–11, smart phones and social media were credited with precipitating the Arab Spring by enabling ordinary people to organize, unify and collectively express their demands for political change (Wolfsfeld et al., 2013). Time featured Mark Zuckerberg as its ‘Person of the Year’ (Halliday and Weaver, 2010), Netflix was ‘killing piracy’ (Manjoo, 2011), Google’s founding fathers were among the world’s top ‘technology geniuses’ (Carlson, 2010) and Amazon was credited with providing more choice and liberating convenience to tens of millions of consumers (Sawant, 2014).
It was only after the Cambridge Analytica scandal, which exposed how the company had misused Facebook data, that public anxiety about the adverse impact of digital technology morphed into the 2019 ‘technology backlash’ (the ‘Tech Lash’). The tide of public opinion began to turn against Big Tech as the ill effects produced through digital platforms became impossible to ignore (Atkinson et al., 2019). European law and policy-makers are only now beginning to wake from the Enchantment’s spell: the EU’s proposed AI Regulation, together with its new Digital Services Act and Digital Markets Act, reflects belated recognition that data-driven systems may enable new and potentially powerful forms of deception and manipulation, and that the monopoly power enjoyed by the giant digital platforms enables anti-competitive practices that are damaging both to users and to the wider market environment, justifying the need for legal regulation supported by independent regulatory control, oversight and sanction. In addition, the recent fall from grace of Silicon Valley technology companies is reflected in mass employee redundancies, including those announced at Meta (Facebook), Amazon, Tesla, Twitter and Netflix (Turner, 2022; Alon-Beck, 2022), with Google reportedly soon to follow suit (Mukhopadhyay, 2022), and in the ongoing chaos at Twitter following its acquisition by billionaire technology mogul Elon Musk. These events indicate that the Digital Enchantment’s powerful grip on European policy-makers has begun to loosen.
But legal intervention is not a panacea, and it cannot be assumed that regulatory laws will provide effective protection against the adverse impacts they are aimed at addressing. History reminds us that regulatory policies can be subverted by industry, producing reforms that serve industry interests rather than those of the general public. And even if there is general agreement that there is a convincing case for legal regulation, this does not settle the question of how best to regulate. How, then, should we proceed? Leaving digital innovation completely unfettered is unacceptable. What is needed is a more holistic, clear-eyed appraisal of the costs and benefits offered by networked digital technologies, as well as meaningful evidence that they can and do deliver on their promises in real-world settings. But when it comes to thorny questions about the regulation of digital innovation, where can we look in search of more thoughtful, inclusive and informed guidance to navigate our collective digital future? There are no magic bullets, and no easy solutions. As Cristie Ford observes, technological innovation and the uncertainty which accompanies it are double-edged: although we might be convinced that innovation is generally a good thing, and sometimes even a great thing, we also recognize that it brings with it new risks and new anxieties, and that we do not know exactly where our innovations will lead (Ford, 2017, p.3). It is this uncertainty that helps explain why the question of whether to regulate a new technology is often the subject of intense contestation, particularly when the stakes are high.
Within the modest academic field concerned with investigating the relationship between law and new technology (Brownsword et al., 2017), various recurring challenges are discussed: the pacing problem (which claims that law cannot keep pace with technology); the alleged need for technological neutrality (according to which regulatory rules should avoid fixing upon a particular technology in order to avoid the problem of rapid obsolescence of the rule); and the Collingridge dilemma (which posits that regulators must choose between early intervention, when the new technology’s trajectory is highly uncertain, and late intervention, by which time technological conventions have become fixed in both technological practice and regulatory assumption). However, as Lyria Bennett Moses (2017, p.11) argues, these various concerns are insufficiently attuned to particular technologies and social contexts and thus of limited assistance to policy-makers. Instead, she emphasizes that it is the capacity of technological change to create new possibilities that may place existing regulatory frameworks under strain, giving rise to two broad challenges: first, how best to manage new harms, risks or areas of concern; and second, how to manage the poor targeting of rules and regulatory regimes revealed as a result of technological change (Bennett Moses, 2017, p.4).
The novelties of digitization and datafication: speed, scale and virtually cost-free copyability
We cannot arrive at an informed, meaningful position on the desirability of regulating a particular technological innovation, and on how best to do so, unless we have a proper understanding of the technology itself. In this respect, Bennett Moses’s emphasis on the newness and capabilities associated with new technologies is worth dwelling on. While her observations are directed at technological innovation generally, many claim that advanced digital technologies, particularly task-specific artificial intelligence, are a game-changer (Council of Europe, 2019), reflecting a belief that there is something importantly novel and powerful about these technologies that will generate transformative societal change. Admittedly, this is itself a moving target given the pace at which computational technologies now advance. Nevertheless, there are a number of distinctive properties of advanced networked digital technologies that, taken together, can be configured to produce algorithmic systems that can be harnessed in the service of the objectives of those who preside over them. These properties include: (a) automaticity, speed and scalability, enabling continuous, highly granular tracking (aka surveillance) of individual online interactions; (b) the virtually instant reproducibility of digital data at negligible cost without traceability; (c) the application of machine learning algorithms to massive behavioural datasets to generate accurate predictions about the object to which that behaviour pertains; (d) their opacity and sophistication; and (e) their capacity to channel the behaviour of individuals in particular directions across an entire population.
Thanks to these properties, these systems can be configured in the service of the pervasive forms of behavioural manipulation which ubiquitous, automated behavioural data extraction makes possible (Zuboff, 2019a, p.337). It is this capacity that has enabled what Shoshana Zuboff calls the ‘trade in human futures’, produced by the rise of surveillance capitalism – a contemporary variant of capitalism rooted in a business model in which individuals worldwide willingly, albeit often unwittingly, allow themselves to be subject to ubiquitous and continuous surveillance of their online interactions, giving up their personal data in return for ostensibly free digital services. The novelty and power of these digital applications have enabled digital technology titans (surveillance capitalists) to create highly personalized profiles which they can then sell to businesses with a commercial interest in knowing what we will do now, soon and later. Although we ostensibly welcome the convenience and efficiency of these services, sold to us in the form of personalized assistance, our craving for quick fixes frequently fails to serve our long-term wellbeing (Zuboff, 2019b). And although we may believe that we are acting in our own interests, we may fail to recognize that those interests are no longer entirely our own (Zuboff, 2019a).
For some, being nudged into taking some action which has no significant or discernible impact on the public interest might seem inconsequential. However, one of the most challenging and serious uncertainties that accompanies technological innovation arises from its unknown, long-term aggregate impact. For example, although the capacity to scale up manufacturing processes made possible by the Industrial Revolution generated very significant gains in living standards, we are now being forced to reckon with the urgent climate catastrophe that has resulted from our failure to appreciate the destructive impact of burning fossil fuels. We are in danger of making exactly the same kind of devastating mistake as a result of our failure to grasp the novel capacities of networked digital technologies to operate at a speed and scale that vastly exceed those of analogue technologies requiring active human intervention (Yeung, 2022). In other words, the cumulative impact of micro-conditioning human populations threatens to destroy the very ground upon which freedom stands, bending us to the will of digital architects in ways that could undermine the independent thought that is a precondition of meaningful agency. It is our capacity for individual self-determination which is at stake:
This is the essence of autonomy and human agency. Surveillance capitalism’s ‘means of behavioural modification’ at scale erodes democracy from within because, without autonomy in action and in thought, we have little capacity for the moral judgment and critical thinking necessary for a democratic society. Democracy is also eroded from without, as surveillance capitalism represents an unprecedented concentration of knowledge and the power that accrues to such knowledge. They know everything about us, but we know little about them. They predict our futures, but for the sake of others’ gain. Their knowledge extends far beyond the compilation of the information we gave them. It’s the knowledge that they have produced from that information that constitutes their competitive advantage, and they will never give that up. These knowledge asymmetries introduce wholly new axes of social inequality and injustice. (Shoshana Zuboff as quoted in Laidler, 2019)
What Zuboff calls the ‘right to the future tense’ might be helpfully understood in terms akin to a child’s right to an open future, posited by legal philosopher Joel Feinberg (1980). He argues that parents may not close off certain basic options for their children if doing so would prevent them from adequately developing the capacity for self-governance on reaching adulthood. It is precisely this danger – the loss of our capacity for self-governance – which Zuboff warns of, albeit in relation to entire adult human populations.
Until recently, the cumulative impact that arises from appropriating personal data at scale has gone largely unnoticed. But by destabilizing the political culture that places respect for self-determination at its core, digital systems are eroding the shared foundations upon which our democracy and freedom depend. Just as we failed to recognize the ‘hidden’ cumulative damage produced by innovations accompanying the Industrial Revolution, we risk failing to attend to the steady corrosion of our democratic and constitutional foundations. Humans are increasingly treated as objects, to be sorted, sifted, scored and evaluated by technological systems in ways at odds with the basic right of all individuals to be treated with dignity and respect. As Julie Cohen puts it, ‘citizens have been reduced to raw material – sourced, bartered and mined in a curiously fabricated privatised commons of data and surveillance’ (quoted in Powles, 2015). For Korff and Brown (2013), the way in which these technologies are applied to human populations ‘poses a fundamental threat to the most basic principles of the Rule of Law and the relationship between the powerful and the people in a democratic society’. In short, when undertaken systematically, the increasingly widespread and pervasive application of data-driven profiling technologies threatens to destroy the social and moral foundations necessary for flourishing democratic societies.
Reclaiming our right to an open future
The Digital Enchantment’s compelling narrative has served as a powerful political tool, skilfully utilized by the technology industry to capture both public sentiment and the minds of policy-makers in the service of its commercial self-interest. However, it is clear that none of the Enchantment’s three tenets can withstand critical scrutiny, even of the most superficial kind. We cannot assume that digital technologies will ‘solve’ all our social problems; digital technologies are not magic bullets. They may well damage individuals, communities and our broader environment. Unlike our attitude to new drugs, in which we have long recognized (at least since the Thalidomide disaster in the early 1960s) the need for care, we have been extraordinarily relaxed in our response to new networked digital tools and systems. While a great many of these may cause no damage, the ‘no ill effects’ doctrine is by no means universal in its application to data-driven technologies. In particular, the data-driven profiling technologies upon which surveillance capitalism relies increasingly threaten the foundations of democratic freedom (Yeung, 2019b). If we are to preserve and nourish those foundations, we must work towards permanently dispelling the hold which the Digital Enchantment exerts over the minds of many policy-makers.
Mercifully, recent EU laws together with the European Commission’s legislative proposals for taming the excesses of the digital services industry suggest that the tide might, finally, be starting to turn. But these are early days, and abundant nourishment will be required if these young shoots are to grow into effective oversight regimes, particularly in the hostile anti-regulation environment the technology industry works tirelessly to sustain. The relationship between innovation and regulation is far more complex than the Digital Enchantment would have us believe. We need a fuller, richer account of the relationship among regulation, digital innovation and society if we are to construct the policies needed to sustain the health of the democratic commons. The Digital Enchantment’s portrayal of regulation as the enemy stifling beneficial innovation is false and unacceptably simplistic. Regulation, if well designed and sensitive to public concerns, may stimulate socially beneficial innovation. For example, legal regulation has played a critical role in fostering innovation in the UK life science industry by establishing and maintaining a legally mandated system of oversight of human biotechnology. Here, regulation plays an important boundary-setting role. Just as children flourish when they are provided with clear boundaries within which to play freely, enabling space for creative yet safe, respectful play, the creativity that lies at the heart of innovation can also flourish when the boundaries of responsible practice are clearly defined and the rationale for them readily accepted.
Even this understanding of regulation as setting boundaries for permissible ‘play’ may be too limited; it lacks any account of the place or role of the public. As Cristie Ford (2017) has observed, although the characterization of regulation as boundary-setting reflects welcome recognition that completely unfettered private-sector innovation may not be wholly beneficial for the rest of us, it nevertheless portrays the goal of regulation merely in terms of establishing a set of goals and expectations distinct from those of private innovators. The narrative offers no affirmative account of a public role, and it assumes away the hard choices that regulation inevitably involves. In this respect, the EU’s proposed AI Regulation appears to fall seriously short. Worryingly, there are serious dangers that these proposed laws may foster and enable the privatization of public governance, in which the task of setting and enforcing regulatory standards is largely delegated to the technology industry itself without meaningful public oversight. At the same time, Ford suggests that this account of regulation as setting boundaries on permissible innovation seems to imply a belief that innovation ‘will lift all boats’ – that innovation can ease and perhaps even resolve difficult trade-offs. Her observations remind us that, on the contrary, regulatory decisions frequently entail difficult choices. Contrary to the popular portrayal of regulation as a merely technical exercise, regulatory decisions are political through and through. If we are to protect our right to an open future, we must maintain regulatory institutions that ensure our technologies operate in ways that respect our autonomy, rather than in ways that serve surveillance capitalists and deprive us of authentic human agency.
In other words, our responses to fairy tales need not be singular and simplistic. This paper has portrayed the Digital Enchantment in a deliberately simple, cartoonish fashion, intended primarily as a provocation and wake-up call. But we should not throw the baby out with the bathwater. The challenge is to cultivate a more sophisticated response to the simplistic narrative offered by the Digital Enchantment. In preliterate cultures, fairy tales were vehicles for processing trauma, transmitting ancestral wisdom and debating cultural beliefs, values and norms (Tartar, 2021). Yet recent contemporary fairy tales, packaged for mass consumption, have tended to focus on simplistic social messaging, overlooking complications. For example, Little Red Riding Hood has become a story about ‘stranger danger’ plain and simple, concluding with a little girl promising her mother that she would ‘never again stray from the path’. How much better, Maria Tartar argues, to explore a story in which a little girl finds ways to exploit the wolf’s weaknesses and outsmart him. The challenge is to draw upon the power of the Digital Enchantment’s narrative in ways that challenge us escape worst-case scenarios and to imagine what could be, should be or might be. Only then are we likely to reap the best of human creativity that lies at the heart of innovation in the service of flourishing human communities.