The amusingly ambiguous title of Stuart Ritchie’s book might briefly suggest to science deniers that it supports their belief that science is just one more ideology, promoted by authoritarian governments and sinister, wealthy elites. An actual reading of the work will disabuse them of this notion. Science Fictions is unequivocal in its regard for the achievements and importance of science and sets out with the primary aim of improving the way it is done. Given the extraordinary benefits society has enjoyed from science over many centuries, no rational person could argue with this goal, and we can only applaud the way in which Ritchie goes about conveying such an important, and often technical, subject in language accessible to the general reader. Along with many examples of the failings of scientists in recent years, Science Fictions offers an explanation for these, and proposes some methods for doing science better. Some of the fixes are aimed at error prevention during experiment design, data collection and peer review; others are aimed at detecting errors in research already published.
It was in the section on causes and cures that I began to have misgivings. Ritchie proposes one primary cause for scientists going astray – perverse incentives. In essence, scientists are diverted from Mertonian principles by the powerful need to earn a living. This makes sense, because it’s exactly the same problem that confounds people in every other area of life. Volkswagen’s engineers are probably decent people who cheated ‘because they couldn’t find a technical solution within the company’s “time frame and budget”’ (Goodman, 2015). Very few big pharma executives can really be evil enough to harbour a secret dream of poisoning an entire community. The question, then, is what does it take to make an essentially good person do something wrong, and how do we prevent this happening? Ritchie seems to sense a disconnect between his primary cause and the cures he proposes. He clearly worries that the fixes will be insufficient, that scientists will find ways around them and that the errors will continue. He is probably right. His answer is to propose a second, more profound cause: that misbehaviour is in our very nature. This struck me as odd. If misbehaviour really is embedded in a scientist’s nature, then there is no technical solution. It’s going to take a long period of evolution to fix that. Meanwhile, we are rearranging the deckchairs.
But I was suspicious for another reason. Human nature tends to be a favourite whipping boy in political matters. It appears in the dock when politicians seek the villain for some piece of social misbehaviour. It is a handy counter to proposals for social changes that might improve the distribution of wealth. There’s no point handing them money because they’d smoke, drink or gamble it away; it’s in their nature. If you give people money, they won’t work at all, they’ll turn into couch potatoes. I began to wonder if human nature made its tentative appearance in Science Fictions as a secondary culprit in order to avoid searching for the fundamental solution to the problem of perverse incentives. This more fundamental solution – and I believe the more obvious one if we aren’t trying to avoid it – lies in our current grossly inequitable distribution of money and resources and the power that accrues to controlling these.
The fundamental problem of perverse incentives
The nearer they get to their treasure the farther they get from the law! (Poster for The Treasure of the Sierra Madre, 1948)
Akira Kurosawa tells us how his path to becoming a film director – at age 25 – began with answering an advertisement by a Tokyo film company to recruit assistant directors. Candidates were asked to submit an essay that proposed solutions for ‘the fundamental deficiencies of the Japanese film industry’:
The theme of fundamental deficiencies and the ways to overcome them gave me something I could sink my teeth into, and at the same time it appealed to my perverseness and sense of mischief. If the deficiencies were fundamental, there was no way to correct them. (Kurosawa, 1983, ch.28, par. 13)
I was reminded of the anecdote as Stuart Ritchie sank his teeth into the failings of the science community over the past few decades, and particularly his section on causes and cures. Kurosawa’s mischievous response is worth bearing in mind when solutions are proposed to deep systemic problems.
David Hume (1739) observes that nothing can be contrary to truth or reason except what has a reference to it. We need science and we need those references. But we seem to be moving backwards. The internet delivers an exponentially greater volume of unreferenced, unverified misinformation than has been possible at any other time in history. Our world has never before enjoyed (if that is the right word) the benefit of so many pseudo experts. Web pundits offer the latest news event, conspiracy theory or impending natural disaster, with copious citations of other unverifiable web pundits, unreferenced web pages and misunderstood and misrepresented scientific papers. The worldwide web of nonsense quickly entangles the unwary and uninformed. Complex political questions and their resolutions can now be summed up in three-word slogans. Public exposure and recognition, celebrity and personal branding are the key to wealth. More worryingly, in parallel with our increasing capacity to consume rubbish, comes a decrease in our capacity to appraise concepts containing any kind of subtlety or complexity. Science television programmes are required to have an undemanding script with a handful of facts constantly repeated, interspersed with suitably entertaining diversions, and delivered by actors or celebrities familiar from the world of entertainment. A science series like Jacob Bronowski’s landmark The Ascent of Man (hugely popular in 1974) would be likely to stretch the limited patience of most current viewers.
Public respect for science and scientists has not been increased by a number of recent, high-profile cases where scientists have been shown as capable of outright dishonesty and fraud, or where apparently simple mistakes have been discovered in data. One of Science Fictions’ examples involves Daniel Kahneman, a highly regarded psychologist, Nobel Prize winner and best-selling author of such popular science books as Thinking, Fast and Slow. Some six years after the publication of that book, Kahneman was obliged to admit that, while discussing a psychological effect known as ‘priming’, he had placed too high a degree of confidence in previous research which had later failed attempts at replication (p.28). Kahneman’s honest admission demonstrates the degree to which science depends on every single stage in the long process of knowledge accretion being done reliably; it also shows that science is self-correcting. The best way of confirming a claim based on experiment is to repeat the experiment, and what became known as the ‘replication crisis’ (p.25) arose when a series of experimental results failed subsequent attempts to repeat them.
The litany of science’s failures in the second part of the book makes depressing reading. As the book’s subtitle promises, we have a catalogue of fraud, error, deliberate misrepresentation and hype. And yet the account of each case also makes it clear that, when scientists are exposed as biased, mistaken or cheating, it has not been science deniers or conspiracy theorists who have identified the problem, but good science. After acknowledging the significant failures, Ritchie sets out to explain what good science is, how to detect bad science, to defend the scientific method and to make proposals for improving how it is done. The book also includes an appendix with an excellent guide for the non-expert on how to read a scientific paper.
I work all night, I work all day to pay the bills I have to pay. Ain’t it sad? (ABBA, Money, Money, Money lyrics)
It quickly becomes clear that perverse incentives often explain why scientists take a wrong turn. The right motivation for doing something is overpowered by the wrong one. Scientists are incentivized to ‘follow the money’, and getting published is almost the only viable route to a job and career advancement. Applications for lecturing positions have become dependent on the length of a CV and especially a long list of published papers. In some cases, institutions offer a cash reward for publication. The drive to feed the dysfunctional (but highly lucrative) journal publication system has been compounded by the recent innovation of ‘thesis by publication’, where, in certain disciplines, the traditional model of a doctoral thesis submitted to one’s peers (as a single document) has been replaced by the option to have a specified number of articles accepted by journals within a given timeframe. This system strongly tempts scholars to churn out snippets of research that fit current or topical interests in order to be published. It also invites the problem Ritchie refers to as ‘salami slicing’, where a modest piece of research is repurposed into several papers covering the same ground. The substitute for quality control has become the impact factor of the journal. Quality has become quantity.
Ritchie asks why people ‘who became scientists for the love of science and its principles end up behaving so badly?’ (p.176). He immediately supplies the answer. It is because studies that report ‘positive, flashy, novel, newsworthy results are rewarded so much more than others’ (p.176). Career-making or -breaking decisions ‘are based in no small part on how many publications you have on your CV, and in which journals they’re published’ (p.178). This ‘engenders an obsession not just with certain kinds of papers, but with publication itself. The system incentivises scientists not to practise science, but simply to meet its own perverse demands’ (p.177).
One’s first thought is to question whether this incentive is really so perverse. It may not be our ideal motivation for pursuing a noble career in research, but is it not the same monetary incentive that drives everything else in our society? We are immersed in a constant quest for more income – or for celebrity, which amounts to the same thing. A million entertainment programmes on television centre on people competing for monetary prizes. Society pays homage to the wisdom of business people who have accumulated a few billion dollars, to the extent of proposing them as candidates for senior political office. The few tales of failure in Hollywood comedies retail the horror of people losing a lavishly paid executive job with its associated healthcare programme. Success is perfectly equated with money. It’s hard to see why we should expect scientists to respond differently from everyone else on the planet, even if they could. Scientists face the same financial pressures, incentives and demands as everyone else. One definition of perverse is unreasonable but, as Hume (1739) points out, there is nothing demonstrably contrary to reason about pursuing one’s own or one’s family’s interests.
The last temptation is the greatest treason; to do the right deed for the wrong reason. (Eliot, 1965, p.52)
We see motivations as either intrinsic or extrinsic. An intrinsic motivation for doing science might be the profound personal satisfaction from producing some insight or discovery of substance, large or small. An extrinsic motivation may take the form of personal career advancement or status, grant success, promotion or public approbation, and these all lead by some path to an increase in personal wealth. Extrinsic motivation may also be a threat of losing a position, or a barrier to obtaining one. In a 2013 Guardian interview, Peter Higgs doubted whether the kind of insight that led him to the Higgs boson would be likely in the current academic culture, where the motivation is a constant pressure to ‘keep churning out papers’.
Higgs said he became ‘an embarrassment to the department when they did research assessment exercises’. A message would go around the department saying: ‘Please give a list of your recent publications.’ I would send back a statement: ‘None’. (Aitkenhead, 2013)
Higgs is pessimistic about his chances of getting a similar opportunity now: ‘Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.’ Extrinsic motivation thus appears as an incentive to be ‘productive’ according to a measure and a policy defined by whoever currently holds power. The negative effect of extrinsic motivation on the quality of scientific research seems to be mirrored in its negative effect on creativity. Offering people a cash bonus for a higher level of creative input often produces the opposite result (Lepper and Greene, 1978; Amabile, 1998). This is because creativity happens when people are doing something for those deeper personal reasons. These are the ‘Mertonian virtues of universalism, communalism, disinterestedness and organised scepticism in honest pursuit of truth and knowledge about the world’ (p.235). As Ernst Chain wrote, some years after the development of penicillin as a drug:
I became interested immediately in penicillin after seeing Fleming’s paper, not because I hoped to discover a miraculous drug for the treatment of infection … but because I thought it had great scientific interest … if I had been working, at that time, in aim-directed scientific surroundings, say in the laboratory of a pharmaceutical firm, it is my belief that I would never have obtained the agreement of my bosses to proceed with my project … we obtained a result of great practical importance without setting out with the aim of achieving it. (Chain, 1980, p.23)
The picture that emerges from every case in Science Fictions is that scientists are tempted to play a publication game, to finesse the rules or to cheat outright, either to secure employment or to achieve a substantial boost to their status and, by extension, their income. The perverse incentives all turn out to be money and it would seem, logically, that the obvious solution is to fix the money problems. Ritchie – perhaps rendered darkly pessimistic by his catalogue of scientific misbehaviour – decides to look at something more fundamental:
Financial incentives aside, let’s not forget the role of human nature: people have a natural tendency to compete intensely for status and credit, to collect reputation-burnishing achievements, and to work towards even objectively meaningless targets – in this case, a large number of publications and grants. (p.180)
This is surely to confuse cause with effect. Whether it’s in their nature or not, scientists are forced to compete for a limited number of jobs and grants. Their future and livelihood have been made to depend on achieving meaningless targets. One immediately thinks of the standard quip that academic politics is so nasty because the stakes are so small. If I have the power to offer ten career-making grants to 100 scientists, I have introduced the element of competition. Further, the topics I set down for allocating my grants become the menu for young researchers’ choice of research topics. This is called power.
Human beings may indeed have some inclination to compete – we all have egos – but we also collaborate. The story of evolution suggests that when we collaborate (in general), we do better. A few sociopaths aside, scientists do not have a natural tendency or inclination to compete intensely. If anything, the story of modern science is one of intense collaboration. It is true there are academic areas outside pure science (such as business, politics, postmodern literary theory or social science) where competition for status often depends on marketing a personal theory and ‘owning’ its terminology. I once came across a paper at a social science conference actually proposing that social science academics should apply for trademark registration to protect their (largely bullshit) theory labels. But real science cannot work like that. The range of expertise required to address the complexity of any topic demands that papers have contributions from large numbers of authors. As Nature Index put it in 2018:
As big science gets even bigger, its scale is increasingly reflected in author lists on scientific papers. More and more experiments are relying on the work of as many people as there were nights for Sheherezade to spin her many tales.
Government policy on allocating grants can actively work against good science. The pressure of current EU policy towards procuring intellectual property rights on funded stem cell projects and maintaining an element of secrecy is ‘liable to frustrate scientific research and stem cell science as a co-operative venture for the public good’ by prioritizing patenting over publication, obliging scientists to withhold their work from publication and minimizing the reporting requirements between collaborators on a scientific project (Plomer, 2017, p.229).
A deluge of perverse incentives
The fixes Ritchie proposes are arranged under his subheadings of fraud, bias, negligence and hype. Institutions, he suggests, should not be allowed to self-investigate incidents of apparent fraud, and should be discouraged from defending clearly questionable research. Journals should accept some proportion of papers that replicate previous research or produce null results. Journals might be set up whose sole mission is to replicate and confirm previous findings. On a technical level, we need better algorithms – some automated – for identifying plagiarism. To combat p-hacking, we need to review the conventional basis for statistical significance and remove some of its ad hoc flexibility. Clinical studies should be required to pre-register their plans and research intentions prior to data collection. These proposals seem well considered and appropriate, and would undoubtedly be effective to some degree. But it is clear that, deep down, Ritchie is still worried that his list of improvements may be too superficial:
Statistics alone cannot solve the underlying problem: the crooked timber of human nature and, by extension, that of the scientific system. No matter which statistical perspective becomes dominant, some scientists would find ways to game it … the solutions to these problems will have to be motivational and cultural. (p.208)
If the scientific system really is being constructed from the warped timber of corrupt humanity, then we are stuck with Kurosawa’s fundamental problem, which has no solution. However, the jury is still out on human nature, part of a longstanding debate pitting Hobbes against Rousseau. Rutger Bregman’s Humankind (2020) comes to a far more positive conclusion about us. Apart from the poetry and the art, could we really have achieved what we have over a thousand years – would science have made even the progress it has – if the human raw material were so debased? Science Fictions notes that Zimbardo’s Stanford prison experiment has largely been discredited. Stanley Milgram’s experiment, mentioned by Ritchie and Bregman, has been the subject of much discussion and replication over the years, but disparate conclusions about human nature may be drawn from it. We might be innately evil, but perhaps we just have an unfortunate – but curable – tendency to place too much trust in authority and wealth – English voters’ obsessive faith in the ultra-confident assertions of the graduates of Eton is a case in point. Or perhaps it indicates our profound willingness to cooperate with others once we are convinced (wrongly or rightly) that the project is of great value. As Bregman (2020) puts it:
if you push people hard enough, if you poke and prod, bait and manipulate, many of us are indeed capable of doing evil. The road to hell is paved with good intentions. But evil doesn’t live just beneath the surface; it takes immense effort to draw it out. And most importantly, evil has to be disguised as doing good. (part 2, section 8: 4, par. 21)
In any case, it is surely premature to seek in human nature the fundamental problem when every example in Science Fictions reveals that science is being corrupted by a system built on financial incentives:
If we can train a new generation of researchers to aim for the Mertonian norms, while at the same time holding back the deluge of incentives that push them in the opposite direction, we might be able to save science from itself. (p.235)
‘All we need to fix science’, Ritchie concludes, ‘is to give people the right motivation’ (p.234). But intrinsic motivation is not something we can give people. By definition, its source is within us; either one has it or one does not. If we are to halt the deluge of extrinsic financial incentives, we have to go to their source. Ritchie’s example from climate science research, when it disturbs the sensitivities of the current political administration, indicates where this is:
In recent years, the field has come under a particularly subtle kind of attack, where the language of science reform is co-opted as a political move. In 2019, the US Department of Agriculture announced to its researchers that they had to add a statement to every piece of research noting that it was ‘preliminary’… nobody thinks this policy was driven by an innocent desire to improve people’s interpretation of research … much of the work done at the Department of Agriculture involves climate change, and the results are often inconvenient for an administration that’s as pro-fossil-fuels as that of Donald Trump. (pp.243–4)
Decisions about which areas to research, how to do the research and who benefits are not taken by the scientists, but by those controlling and fine-tuning the incentives or disincentives. They have the power. James Mill defined power as security for the conformity between the will of one man and the acts of other men. One reliable way of securing conformity is to control our means of life or freedom. We do not have to go far back in history to find a ban on the publication of texts in support of the heliocentric model of our solar system, enforced by violence. Galileo was brought before the Inquisition and made to recant his defence of the theory. He was not tortured; he didn’t have to be:
He was only threatened with torture, twice. His imagination could do the rest. That was the object of the trial, to show men of imagination that they were not immune from the process of primitive animal fear that was irreversible. (Bronowski, 1976, p.216)
Present-day deterrents are a little less transparently violent, but no less effective in inducing fear. Scientists in pursuit of Mertonian principles will find themselves similarly discouraged from following lines of research that conflict with the tenets of those controlling or influencing the courts, the police, the military and the banks. They will be encouraged, or incentivized, to follow the right ones. What we term ‘Western’ science may seem to have greater freedom in choice of research field, but is constrained by a more subtle degree of shaping and tuning. Universities are formal national institutions requiring policy oversight, administration, funding and resources within a wider national context of political and fiscal ideologies. Decisions about policy and grants are just one example. Wealthy donors endow chairs for scholars prepared to spend years of their lives immersed in some pet theory of their benefactor. Tenured positions are more easily obtained by academics who choose an approved political stance, or avoid politics altogether. Noam Chomsky has argued that the quest to find safe uncontroversial territory, hidden from the vigilance of power in the cloisters of academia, largely explains postmodern theorizing and its arcane linguistic exercises. Pharmaceutical corporations influence the setting up of schools and departments within universities and provide bursaries for young researchers to complete a PhD by testing a potentially lucrative drug. Scientists must queue for funding from charities and agencies that are drip-fed from central coffers, in the process wasting research months on bureaucratic form-filling exercises whose major purpose is to enforce discipline, distract capable minds with ‘busy work’ and delineate the boundaries of power. It is at best inefficient and at worst dangerous to allow even the best-intentioned billionaires and corporations to determine the trajectory of scientific research.
Fine-tuning the rules and methods of science itself is important, but won’t address a problem rooted in the socioeconomic system. It is the same problem that appears outside science in every area of our lives. It affects civil servants, health workers, journalists, business and politics. Money is at the root as a source of power to govern speech or modify behaviour. A major part of the solution must involve an equally fundamental restructuring of the monetary and resource distribution system. Ritchie correctly identifies the problem for science as the power and control of financial incentives, but doesn’t go looking in that system for the solutions. Allow me to suggest a couple of ideas.
One proposal that has come to the fore in recent years, and that would seem likely to have some impact on the incentives problem, is the idea of a universal basic income (UBI), or guaranteed income for every citizen, based on the average requirement to cover such necessities as food and shelter. Many variants have been proposed and the discussion about their respective merits or deficiencies has been taking place for some years in public forums. A number of pilot projects have been completed, some are currently under way and more are planned. These will provide us with better data on methods and outcomes. Nevertheless, it is reasonable to speculate about some of the possible effects that a more distributive system of capital might have on science research.
When our essential needs are met and a secure foundation is provided, considerable uncertainty is dispelled. A guaranteed income would instantly reduce much of the pressure and anxiety for young researchers, freeing them from the immediate fear of being unable to find employment, or of being trapped in a series of temporary contracts. It might allow many more clever researchers from moneyless families to avoid being driven into routine lab work for commercial drug firms. Researchers would have the breathing space to be more selective in their preferred fields of research, to give greater attention to their experiments and to feel less compelled to churn out papers. They would feel the element of competition for status with their colleagues ameliorated, and perhaps be drawn more towards mutual and productive collaboration. The temptation to finesse experimental results in order to get published must surely be lessened. People with less urgent concerns about salary might be able to support a period when they could postpone remuneration or work pro bono to share in some promising or charitable enterprise. Generally speaking, being less under the thumb of authority, scientists would be more free to voice their opinions on science and public affairs.
Science is also deeply affected by the fact that young adults with no inheritance, and not funded by a wealthy family, have no access to a lump sum for property ownership, to pursue a substantial programme of education or to invest in any significant enterprise at an early stage in their career. Given the problem of constantly increasing inequality and extreme concentration of wealth, many may never be in a position to make these investments. To address this serious problem for society, proposals have been formulated for a universal capital endowment scheme in addition to a basic income. One ambitious version of this scheme (Piketty, 2020, p.983) would deploy progressive wealth and inheritance taxes to recycle society’s capital and fund a lump sum endowment for every young adult of about 60% of average adult wealth. The particular benefits of this for science should be obvious. Rather than going cap in hand to a bureaucratic government department or foundation, researchers would have an immediately useful amount of capital under their control. They might be able to use it alone as a way of funding a modest project, or apply it to a cooperative research venture where substantial funds were required. If we posit an average adult wealth of €300,000, then ten researchers might together have access to a research fund of around €2 million on top of their individual guaranteed basic income. Still relatively modest, but a step in the right direction. Instead of a small number of tightly moderated and approved research projects, we might have a multitude of diverse paths better able to exploit human creativity.
Moderately radical moves, such as basic income and capital recycling, are unlikely to remove, at a stroke, the influence of the current concentrations of wealth in our society or the power that comes through controlling legal, economic and fiscal policy in our ownership societies. We can’t assert that a universal basic income or a capital endowment scheme would resolve the problem of scientists misbehaving. We can at least say that, while science is at the mercy of interests controlling a grotesque preponderance of our shared resources, and the income gap between the poorest and wealthiest is constantly increasing, the current system is less and less likely to work for the general benefit. Since perverse financial incentives are clearly the problem, and are a product of a fundamental socioeconomic system, fundamental alternatives must be considered. Science Fictions makes an excellent case that the problem is money, but might have looked more deeply for possible solutions to that problem. Kurosawa would almost certainly agree.