Introduction
When psychologist Daniel Kahneman shared the 2002 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel with Vernon Smith, the pioneer of experimental economics, he was being recognized for his career-long work on human judgment and decision-making. Much of this research was conducted with Amos Tversky, who undoubtedly would have shared the Prize had he not died six years earlier. Their heuristics and biases view of choice and their prospect theory analysis of risk-taking were instrumental in the development of what Sent (2004) calls ‘new behavioural economics’. In Thinking, Fast and Slow, Kahneman (2011) presents a highly readable and engaging account of how and why they did their work and the insights that it generated. It will instantly become a standard source within psychology and behavioural economics and yet will also sell in huge quantities to general readers. Though the latter will find much in Thinking, Fast and Slow on how to make better decisions, the book, as befits its author’s status, is in a different league from the typical airport bookstore bestseller. Indeed, in its discussions of regression towards the mean, it offers an analysis of why it is unwise to put faith in policies recommended in the typical bestsellers, the ones that analyze why particular companies have been spectacularly successful.
Although Thinking, Fast and Slow is going to be influential, it left me with very mixed feelings. It was fascinating and illuminating, but also irritating. The book is not a broad, synthesizing survey of research on the interaction between thinking and choosing. Rather, it is an account of the approach of Kahneman and Tversky and their close associates and influences. Other contributions that offer rival perspectives are ignored or swiftly dismissed (as in the case of Gerd Gigerenzer, whose work on fast and frugal decision rules opposes Kahneman’s focus on the failings of human judgment and is quietly dispatched in an endnote at pp.457–58).
Behavioural economics, old and new
The biggest name to receive minimal coverage is the polymath Herbert Simon, the 1978 Nobel laureate in economics and the key figure in what Sent (2004) calls ‘old behavioural economics’. Simon challenged the traditional core of economics by offering bounded rationality and satisficing to address the deficiencies he saw in conventional portrayals of decision-makers as globally rational, constrained optimizers. Unlike many young behavioural economists, who seem oblivious to Simon’s work, Kahneman is clearly well aware of Simon’s achievements. However, the only one of Simon’s research findings that he employs is that what looks like intuitive decision-making by chess masters actually involves matching the position of the current game against an extensive memory of past games and then recalling which moves were successful. To one of these uses of Simon’s work on intuitive choice he adds an endnote (p.466) in which he describes Simon as a ‘towering intellect … a forerunner of behavioral economics and, almost incidentally, a Nobel laureate in economics’. In making no other connection to Simon’s work, Kahneman reveals himself to be committed to the mainstream view of rationality. Indeed, much of what he says about how to make better decisions is essentially an exhortation to operate more like a statistical decision theorist, taking proper account of probabilities and using them as decision weights.
In the Simon-inspired old behavioural economics, problems are seen as often being so complicated, or so beset by unknowns and unknowable future conditions, that there is no way of computing optimal solutions via deliberative thinking. Rather than letting their thought processes run into the equivalent of a crash in a computer program, people deal with such problems by selecting rules that seem, in terms of higher-level decision rules, reasonable to apply to the situation at hand, and then seeing whether these yield a satisfactory outcome.
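To make the contrast concrete, the satisficing logic just described can be sketched in a few lines of code. This is my illustrative rendering under assumed names (satisfice, aspiration, max_trials), not a formalism drawn from Simon:

```python
import random

def satisfice(candidates, evaluate, aspiration, max_trials=100):
    """Accept the first option whose payoff meets the aspiration level.

    Illustrative Simon-style satisficing: the option set is never ranked
    and no optimum is computed; search simply stops when a 'good enough'
    candidate turns up or attention runs out.
    """
    for trials, option in enumerate(candidates, start=1):
        if evaluate(option) >= aspiration:
            return option      # satisfactory outcome found: stop searching
        if trials >= max_trials:
            break              # attention exhausted: give up or revise aspiration
    return None

# Example: accept the first job offer paying at least 50,000.
offers = (random.gauss(48000, 5000) for _ in range(200))
print(satisfice(offers, evaluate=lambda pay: pay, aspiration=50000))
```

The higher-level choice of which rule to apply (here, the aspiration level and the search budget) is itself made by a decision rule, which is exactly the layered structure that the old behavioural economics emphasizes.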
The new behavioural approach fostered by Kahneman begins the other way round, presuming that an optimal choice is waiting to be found. One of the key themes in Thinking, Fast and Slow is that it is useful to see the human mind as involving two decision-making personas, System 1 and System 2. System 1 is a fast thinker and does much of its work at the unconscious level, using its associative memory capabilities to find matches between incoming information and past experiences and thereby generating verdicts about how to respond to changes in the external environment. System 2, by contrast, is a slow thinker that attempts to engage in logical deliberation and is potentially able to say to System 1, ‘Whoa, not so fast. Let’s analyze this suggestion before acting upon it!’. What System 1 comes up with may not conform at all well with what would be rational in terms of conventional choice theory, but System 2 may get the decision-maker closer to the optimal choice by exercising the power of veto and working out a different verdict.
If Kahneman had wished to build a bridge between old and new approaches to behavioural economics, he could readily have done so by changing the balance of his book away from its emphasis on System 1 as the source of most of our mistakes and towards the vital role that System 1 plays in helping us cope in a complex world. (Of course, this would have given the book much less chance of becoming a bestseller.) Eventually (p.410), in his concluding chapter, he concedes that the balance could have been different and that System 1 ‘is also the origin of most of what we do right – which is most of what we do’. From an evolutionary standpoint, the heuristics that System 1 drives us to use may be things that we are programmed to use because they enhance our survival prospects (or, at least, because they enhanced the survival prospects of early humans). There are a few points where Kahneman does recognize the evolutionary advantages of some of the bias-inducing heuristics that he identifies (pp.67, 90, 115), and in Chapter 20 he recognizes that the optimism bias helps drive modern capitalism. Generally, though, the focus is on the dysfunctional.
A case in point is ‘priming’. Kahneman is, of course, right to emphasize that our interpretations of situations may be affected by things to which we have just been exposed, despite their being of dubious relevance to making the best decision. However, if all possible interpretations of stimuli had the same probability of being tried, it could take a very long while to find those that seem plausible. There is a much bigger chance of rapidly reaching a workable conclusion if the brain first tries to match the sets of neural connections fired up by incoming stimuli with frequently- or recently-activated sets of connections, thereby defining the context within which the cognition is made. For example, having established in our minds that we are going along a road and having been primed by a road sign, we much more readily understand what we encounter along the road a few seconds later. Yes, priming does make us vulnerable to attempts to manipulate our behaviour, but this is probably a small price to pay for the ability to think fast.
Two systems, or three?
As well as having concerns about the lack of balance in the book’s coverage, I have a wider reservation concerning Kahneman’s two-systems approach: I think the analysis would be more powerful if it were conducted with the addition of System 3, the thinker’s emotional core. At present, the emotional underpinnings of choice seem to be subsumed into System 1, which is allowed to get away with irrational thinking because System 2 is prone to laziness and, being subject to fatigue, has a rather short attention span. System 2 may thus fail to take the trouble to rein in ideas emanating from System 1 that serve the brain’s emotional needs but are far from rational. By contrast, my approach [originally argued in Earl (1986, pp.145–47), but inspired by the analysis of emotions in relation to core constructs in Kelly (1955)] sees the mind as operating rather like a legal system with multiple layers: an action ruled out of order at one level (System 2) may be admitted on appeal to a higher court (System 3).
Thus, in my three-system view, System 1 is essentially a source of instant ideas for responding to incoming stimuli. These ideas are acted upon so long as neither System 2 nor System 3 vetoes them. However, rather than such ideas getting enacted purely because System 2 is too lazy or too tired to come up with a basis for vetoing them in favour of something else, my suggestion is that System 2’s deliberations could themselves be ruled out of order by System 3 because the alternatives being canvassed by System 2 are at odds with the emotional requirements of System 3. The rational voice of System 2 may be saying, ‘This doesn’t make sense’ or ‘This could be unwise, because …’, but the person goes ahead anyway because the emotional voice of System 3 may be saying ‘If I don’t do this, my core view of myself is compromised’.
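The layered structure I have in mind can be made concrete with a small sketch. The function names and the appeal protocol below are my illustrative rendering of this three-system proposal, not anything drawn from Kahneman’s book or from the works cited above:

```python
def decide(stimulus, system1, system2_vetoes, system3_overrules):
    """Three-layer 'legal system' sketch: System 1 proposes an instant
    response; System 2 may veto it on deliberative grounds; System 3,
    the emotional core, can overrule that veto on appeal when the act
    protects the person's core view of themselves."""
    proposal = system1(stimulus)
    if system2_vetoes(proposal) and not system3_overrules(proposal):
        return None    # the veto stands: look for something else
    return proposal    # enacted: never vetoed, or veto overruled on appeal

# Example: an 'unwise' act survives because it defends self-image.
action = decide(
    'stressful day',
    system1=lambda s: 'light a cigarette',
    system2_vetoes=lambda a: True,     # 'this could be unwise, because...'
    system3_overrules=lambda a: True,  # 'not doing this compromises my core self'
)
print(action)  # 'light a cigarette'
```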
Stopping rules and the quality of decisions
Kahneman portrays System 1 as able to generate intuitive judgments instantaneously, seemingly unconstrained by any shortage of computational capacity. This ability arises because it works on what Kahneman calls the WYSIATI (What You See Is All There Is) principle. In other words, System 1’s associative memory and judgment-generating processes will always try to make some sense of the information at hand rather than considering what other information might be worth gathering to make better sense of it. Kahneman argues that this results in an unfortunate tendency to substitute for the question that is really being faced a different one that is consistent with the available information. To take better decisions, we must train ourselves to make more use of System 2 and to overcome its tendencies towards laziness. If we keep trying to generate alternative perspectives and then chase up evidence to help choose between them, we can counter System 2’s inbuilt tendency to stop short of going as far as it might beyond the suggestions thrown up by System 1.
The importance of Kahneman’s analysis is evident if one reflects on what often happens when university students take exams or attempt coursework assignments. Many will answer a question as if it were a completely different question from the one their lecturer had in mind, while others may recognize the correct question but then answer it superficially because they fail to take the trouble to look for implied sub-questions, or fail to consider whether their answers stand up well logically or empirically. The students’ scores may thus owe little to differences in innate or acquired abilities to reason logically, or in the amount of relevant material at their disposal for constructing answers; rather, the issue may be how determined they are to avoid succumbing to WYSIATI and to keep thinking about the question and its possible implications. High scores will go to those who can overcome their System 2’s inherent laziness.
From Simon’s bounded rationality perspective, the emphasis on the laziness of System 2 distracts from the question of whether there is actually a rational solution to be found even by someone who is industrious and shows terrier-like tenacity once engaged in deliberative problem-solving. If an optimal choice is inherently elusive and there are many calls on one’s attention, the key issue is whether one has good stopping rules for constructing and experimenting with possible solutions and for gathering information. Different stopping rules may be efficient in different contexts: if survival calls for an instant decision to jump out of the way of an approaching threat, intuition may be our best basis for action. However, when the need for fast thinking is not so pressing, the first question is surely whether the problem justifies thinking long and hard: some problems have bigger sets of implications than others, or bigger chances of being better addressed if more of one’s finite attentive capacity is devoted to them. Moreover, thinking harder may simply cause confusion if it results in a bigger set of options to consider or recognition of a wider range of possible outcomes.
Base-rate probabilities, unique events and crucial decisions
In contrast to an old behavioural economist’s emphasis on the scope for making better decisions by achieving a better fit between contexts and decision rules, Kahneman’s message is that decisions are often compromised by failures in statistical inference or by failures to look at relevant probabilities. Decision-makers commonly misunderstand how probabilities combine: told that a person may belong to each of two categories (e.g. bank employee and feminist activist), they are likely to judge it more probable that the person belongs to the conjunction of these categories (e.g. a bank teller involved in feminist activities) than to one of the categories alone (e.g. just a bank teller). They will be prone to extrapolate from their experience with a particular case (as Kahneman himself confesses to having done in estimating how long it would take to complete a partially-developed curriculum design project) rather than seeing what lessons can be derived from a wider pool of cases in the same category. They will be susceptible to the planning fallacy, finding it easier to imagine things unfolding according to plan than in line with the combined likelihoods of the many possible events that could derail their plan. These cognitive shortcomings can result in overconfidence and in failures to abandon projects whose odds of success, if only they were examined, would not look good.
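The first of these errors violates the conjunction rule of elementary probability: for any two categories A and B,

\[
\Pr(A \cap B) \;\le\; \min\{\Pr(A),\, \Pr(B)\},
\]

so a ‘feminist bank teller’ can never be more probable than a ‘bank teller’, however representative the description may feel.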
At the heart of Kahneman’s focus on the statistical side of decision-making is the view that good outcomes depend on both judgment and luck. A competent decision-maker avoids succumbing to WYSIATI and to errors of statistical inference, and weights risks by their probabilities. However, judgment can only go so far, and this is why one is left with the probabilities: things beyond one’s control or imaginative capacities can affect outcomes, so all one can do is look at the odds of success. If an outcome has a low probability, achieving it will take a lot of luck, for many potential barriers stand in its way.
Kahneman is clearly quite frustrated by the reluctance of some people to accept his view of the importance of taking the trouble to get the best available base-rate probability information and use it for decision weights. A lawyer who specializes in medical malpractice cases may have a good idea of the probabilities of winning and of the sizes of settlements, and yet may decide whether to take a particular case to court (or, particularly in the US, whether to try to win an appropriate pre-trial settlement) in terms of its singular features and a judgment about whether the relevant odds can be beaten. Each case may indeed be unique but, from Kahneman’s perspective, the rational way of deciding how to proceed is to consider whether its special features place it within a narrower class of cases whose outcome odds differ from those of the wider set of which it is also a member. A lawyer who fails to weight outcomes by their probabilities is likely to be suffering from an illusion of control and the planning fallacy.
There is no sign that Kahneman’s own System 2 considers the possibility that those who fail to make the sort of use of probability data that he thinks they ought to be making might rightly question the applicability of the idea of probability to unique cases. Probability may be a useful ingredient in decisions involving repeated cases, but not in choices that, as individuals, we make only once or at most a few times. Such choices may be what Shackle (1961, 1972) calls ‘crucial decisions’; that is, decisions where we are aware that what we select could result in major changes, for good or bad, in the set of opportunities open to us. If one marriage in three fails, it may be wise to evaluate one’s prospective marriage partner (and oneself) carefully and to study the frequency of particular causes of marital breakdown before deciding to get wed. However, for Shackle, it does not follow that the decision should then be made by weighing rival possible outcomes according to their probabilities for the population as a whole.
Recognition of such decisions – which are not necessarily the same as choices whose outcomes may be shaped by rare events (such as the risk of being hit by a tsunami) that Kahneman discusses in Chapter 30 – led Shackle to propose an alternative to expected utility theory many years before Kahneman and Tversky developed prospect theory from elements of their empirical work. It is a pity that Kahneman gives no space in Thinking, Fast and Slow to Shackle’s perspective, and not merely because Shackle provided a non-probabilistic way of making sense of how decisions can be taken when people envisage ranges of possible outcomes for rival schemes of action. Shackle’s theory is also relevant because it predates prospect theory in its use of a reference point that divides outcomes into gains and losses. While Shackle’s model does not include the S-shaped utility function that is central to prospect theory, it does provide a way of understanding how decision-makers avoid being overwhelmed when comparing alternatives to which they attach ranges of possible outcomes. Shackle’s model predicts that, for each proposed scheme of action, the decision-maker’s attention will focus on a single gain outcome and a single loss outcome from the range deemed possible; Kahneman, by contrast, simply presents us with either/or probabilities rather than considering situations involving complex probability distributions.
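For readers who want the contrast in symbols: the S-shaped value function at the centre of prospect theory is commonly written in the subsequent literature (this parameterization is the standard one, not notation taken from the book itself) as

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0, \\
-\lambda (-x)^{\beta} & \text{if } x < 0,
\end{cases}
\]

where \(0 < \alpha, \beta < 1\) capture diminishing sensitivity on either side of the reference point and \(\lambda > 1\) captures loss aversion. Shackle dispenses with such probability-weighted valuation altogether: attention settles on one focal gain and one focal loss per scheme of action.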
The wider significance of reference points and loss aversion
Although most of Thinking, Fast and Slow deals with choices that take the form of a bet, Kahneman sees loss aversion and prospect theory’s S-shaped utility function as having wider significance. In Chapter 27 (‘The endowment effect’) he takes issue with the way that economists normally depict preferences in terms of indifference maps, arguing that such a view of preferences ignores the possibility that the chooser’s present situation will serve as a reference point. As is typical in the literature on the endowment effect, the exposition focuses on tradeoffs between goods (or leisure) and money (or income). Indifference maps portray decision-makers as if they are willing to substitute between goods (or between goods and money/income) at a decreasing marginal rate, with the initial situation having no path-dependent consequences. Thus, if I say that I am willing to pay no more than $6 to buy a mug and am then given such a mug, I should be perfectly happy to give it back in exchange for $6. However, in practice, I am likely to display the endowment effect: once I own the mug, I am likely to want to keep it unless I am offered somewhat more than $6. This violates the thinking behind indifference maps, which implies that the mug should be worth at most $6 to me whether or not I happen to own it.
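A toy calculation shows how loss aversion can generate this gap. One common rationalization, which I sketch here with an illustrative (not estimated) loss-aversion coefficient, treats money handed over in a routine purchase as a foregone gain but treats surrendering an owned good as a loss:

```python
LAMBDA = 2.0  # assumed loss-aversion coefficient; illustrative only

def willingness_to_pay(consumption_value):
    """Buying: the mug is a prospective gain and the price a foregone
    gain, so indifference sets WTP equal to the mug's value."""
    return consumption_value

def willingness_to_accept(consumption_value):
    """Selling: parting with an owned mug is coded as a loss weighted
    by LAMBDA, so indifference requires price = LAMBDA * value."""
    return LAMBDA * consumption_value

print(willingness_to_pay(6.0))     # 6.0:  pays at most $6 for the mug
print(willingness_to_accept(6.0))  # 12.0: demands about $12 to give it up
```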
If the endowment effect holds in choices between monetary amounts and things that can be bought with money, it ought also to be found in choices between different bundles of goods or between products that comprise different bundles of characteristics. For example, imagine a consumer who is considering acquiring a new vehicle to replace her SUV. Her reference point is her SUV, which is very spacious but has dreadful fuel economy. She would like to get something more economical, but has a strong aversion to losing spaciousness. If this consumer is to sacrifice some degree of spaciousness, she will, from Kahneman’s standpoint, require a far bigger compensating gain in fuel economy than conventional thinking might lead us to expect. Resistance to downsizing may also arise because perceived losses in safety are likewise assigned disproportionately high values. Her attitude might be quite different if the reason for considering a new vehicle was that the SUV had been written off and she had rented a smaller vehicle while working out what its replacement would be. Having become accustomed to living with the smaller vehicle and found that space was not really a problem, she might now use the rental vehicle’s fuel economy as her reference point: any new vehicle offering lower fuel economy would then need to offer spectacular gains in some other respect in order to be selected.
An extreme form of loss aversion would be where the consumer chooses by using a decision rule that specifies minimum acceptable levels of performance based on levels previously attained; that is, a ‘no going backwards’ checklist, as in the sketch below. Such decision rules may be cognitively simpler routes to taking decisions than those that involve trading off gains and losses. Payne et al. (1993) find that consumers tend to switch to such intolerant ways of taking decisions when they face many alternatives that differ across many dimensions. Kahneman never seems to acknowledge the possibility that multi-attribute decisions could be made in ways other than by working out overall expected utility scores, any more than he considers the use of safety-first principles for taking risky decisions. Prospect theory allows valuations to be distorted around a reference point, but it is entirely within the economics mainstream in presuming that, in principle, there will always exist a gain prospect big enough to make an associated loss prospect worth tolerating.
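A minimal sketch of such a checklist rule, with hypothetical attribute names and scores, makes the contrast with expected-utility scoring plain: no gain on one dimension can buy back a loss on another.

```python
def no_going_backwards(candidates, reference):
    """Screen out any option that falls below the reference point on any
    attribute; losses cannot be compensated by gains elsewhere."""
    return [c for c in candidates
            if all(c[attr] >= level for attr, level in reference.items())]

current_suv = {'space': 9, 'economy': 3, 'safety': 8}
options = [
    {'name': 'hatchback',  'space': 5, 'economy': 9, 'safety': 8},
    {'name': 'hybrid suv', 'space': 9, 'economy': 6, 'safety': 9},
]
# Only the hybrid SUV survives: the hatchback's fuel-economy gain
# cannot buy back its loss of spaciousness.
print(no_going_backwards(options, current_suv))
```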
Conclusion
As an exposition of the evolution of Kahneman’s research and his main findings, Thinking, Fast and Slow is a superb piece of work. Those who have not previously come across the heuristics and biases approach or prospect theory will probably be quite amazed at the extent of human failings in decision-making, and can also expect to learn much about how to avoid common errors. However, the book can also be read as a self-promoting work that fails to build bridges with significant works on thinking and choosing that depart much more radically from rational choice theory and yet end up with less of an emphasis on errors. I can understand Kahneman being unaware of Shackle’s alternative to expected utility theory, but his treatment of Simon seems strategic. Having had his work accepted into the economics mainstream by the new behavioural economists, Kahneman was in a strong position to open up the field by emphasizing that it was Herbert Simon who won the first Nobel award in economics for taking a behavioural approach to decision-making, and by considering up front where his and Simon’s approaches overlap, where they differ, and how both might be allocated attention within a pluralistic approach to choice.