Thinking in Bets
by Annie Duke
The result of each hand provides immediate feedback on how your decisions are faring. But it’s a tricky kind of feedback because winning and losing are only loose signals of decision quality. You can win lucky hands and lose unlucky ones. Consequently, it’s hard to leverage all that feedback for learning.
Thinking in bets starts with recognizing that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.
Why are we so bad at separating luck and skill? Why are we so uncomfortable knowing that results can be beyond our control? Why do we create such a strong connection between results and the quality of the decisions preceding them? How can we avoid falling into the trap of the Monday Morning Quarterback, whether it is in analyzing someone else’s decision or in making and reviewing the decisions in our own lives?
Resulting is a routine thinking pattern that bedevils all of us. Drawing an overly tight relationship between results and decision quality affects our decisions every day, potentially with far-reaching, catastrophic consequences.
When I consult with executives, I sometimes start with this exercise. I ask group members to come to our first meeting with a brief description of their best and worst decisions of the previous year. I have yet to come across someone who doesn’t identify their best and worst results rather than their best and worst decisions.
It sounded like a bad result, not a bad decision. The imperfect relationship between results and decision quality devastated the CEO and adversely affected subsequent decisions regarding the company.
Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable. When we say, “I should have known that would happen,” or, “I should have seen it coming,” we are succumbing to hindsight bias.
In the exercise I do of identifying your best and worst decisions, I never seem to come across anyone who identifies a bad decision where they got lucky with the result, or a well-reasoned decision that didn’t pan out. We link results with decisions even though it is easy to point out indisputable examples where the relationship between decisions and results isn’t so perfectly correlated.
Incorrectly interpreting rustling from the wind as an oncoming lion is called a type I error, a false positive. The consequences of such an error were much less grave than those of a type II error, a false negative. A false negative could have been fatal: hearing rustling and always assuming it’s the wind would have gotten our ancestors eaten, and we wouldn’t be here.
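The evolutionary logic here is an expected-cost comparison. As a rough sketch with made-up numbers (none of these figures are from the book), the asymmetry looks like this:

```python
# Illustrative only: assumed probabilities and costs for the rustling-grass example.
p_lion = 0.05                # small chance the rustling really is a lion
cost_false_positive = 1      # fleeing from the wind: a little wasted energy
cost_false_negative = 1000   # ignoring a lion: possibly fatal

# Expected cost per rustle under each default policy.
always_flee = (1 - p_lion) * cost_false_positive   # pay the type I cost when it's wind
never_flee = p_lion * cost_false_negative          # pay the type II cost when it's a lion

print(f"always flee: {always_flee:.2f}, never flee: {never_flee:.2f}")
```

Even with a lion only 5% of the time, the jumpy policy is far cheaper on average, which is why our defaults skew toward false positives.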
I particularly like the descriptive labels “reflexive mind” and “deliberative mind” favored by psychologist Gary Marcus.
The differences between the systems are more than just labels. Automatic processing originates in the evolutionarily older parts of the brain, including the cerebellum, basal ganglia, and amygdala. Our deliberative mind operates out of the prefrontal cortex.
In addition to everything else he accomplished, John von Neumann is also the father of game theory. After finishing his day job on the Manhattan Project, he collaborated with Oskar Morgenstern to publish Theory of Games and Economic Behavior in 1944.
Game theory was succinctly defined by economist Roger Myerson (one of the game-theory Nobel laureates) as “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.”
Once the game is finished and you try to learn from the results, separating the quality of your decisions from the influence of luck is difficult.
But life is more like poker. You could make the smartest, most careful decision in firing a company president and still have it blow up in your face. You could run a red light and get through the intersection safely—or follow all the traffic rules and signals and end up in an accident. You could teach someone the rules of poker in five minutes, put them at a table with a world champion player, deal a hand (or several), and the novice could beat the champion. That could never happen in chess.
We have to learn from the results of our decisions. The quality of our lives is the sum of decision quality plus luck. In chess, luck is limited in its influence, so it’s easier to read the results as a signal of decision quality.
Resulting, assuming that our decision-making is good or bad based on a small set of outcomes, is a pretty reasonable strategy for learning in chess. But not in poker—or life.
If we buy a house, fix it up a little, and sell it three years later for 50% more than we paid, does that mean we are smart at buying and selling property, or at fixing up houses? It could, but it could also mean there was a big upward trend in the market and buying almost any piece of property would have made just as much money. Or maybe buying that same house and not fixing it up at all might have resulted in the same (or even better) profit. A lot of previously successful house flippers had to face that real possibility between 2007 and 2009.
Admitting that we don’t know has an undeservedly bad reputation. Of course, we want to encourage acquiring knowledge, but the first step is understanding what we don’t know.
This is true in any field. An expert trial lawyer will be better than a new lawyer at guessing the likelihood of success of different strategies and picking a strategy on this basis. Negotiating against an adversary whom we have previously encountered gives us a better guess at what our strategy should be. An expert in any field will have an advantage over a rookie. But neither the veteran nor the rookie can be sure what the next flip will look like. The veteran will just have a better guess.
Start-ups have very low chances of succeeding but they try nonetheless, attempting to find the best strategy to achieve the big win, even though none of the strategies is highly likely to create success for the company. This is still worthwhile because the payoff can be so large.
That happened, though, because when you are betting, you have to back up your belief by putting a price on it. You have to put your money where your mouth is.
By treating decisions as bets, poker players explicitly recognize that they are deciding on alternative futures, each with benefits and risks. They also recognize there are no simple answers. Some things are unknown or unknowable. The promise of this book is that if we follow the example of poker players by making explicit that our decisions are bets, we can make better decisions and anticipate (and take protective measures) when irrationality is likely to keep us from acting in our best interest.
In most of our decisions, we are not betting against another person. Rather, we are betting against all the future versions of ourselves that we are not choosing.
We are betting that the future version of us that results from the decisions we make will be better off. At stake in a decision is that the return to us (measured in money, time, happiness, health, or whatever we value in that circumstance) will be greater than what we are giving up by betting against the other alternative future versions of us.
This is ultimately very good news: part of the skill in life comes from learning to be a better belief calibrator, using experience and information to more objectively update our beliefs to more accurately represent the world.
“Findings from a multitude of research literatures converge on a single point: People are credulous creatures who find it very easy to believe and very difficult to doubt. In fact, believing is so easy, and perhaps so inevitable, that it may be more like involuntary comprehension than it is like rational assessment.”
Two years later, Gilbert and colleagues demonstrated through a series of experiments that our default is to believe that what we hear and read is true. Even when that information is clearly presented as being false, we are still likely to process it as true. In these experiments, subjects read a series of statements about a criminal defendant or a college student.
In fact, questioning what you see or hear can get you eaten. For survival-essential skills, type I errors (false positives) were less costly than type II errors (false negatives). In other words, better to be safe than sorry, especially when considering whether to believe that the rustling in the grass is a lion. We didn’t develop a high degree of skepticism when our beliefs were about things we directly experienced, especially when our lives were at stake.
We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.
Whether it is a football game, a protest, or just about anything else, our pre-existing beliefs influence the way we experience the world. That those beliefs aren’t formed in a particularly orderly way leads to all sorts of mischief in our decision-making.
The concept of “fake news,” an intentionally false story planted for financial or political gain, is hundreds of years old. It has included such legendary practitioners as Orson Welles, Joseph Pulitzer, and William Randolph Hearst.
“Furthermore, people who were aware of their own biases were not better able to overcome them.” In fact, in six of the seven biases tested, “more cognitively sophisticated participants showed larger bias blind spots.” (Emphasis added.) They have since replicated this result.
(When it comes to predictions, the plausible range of outcomes would also be tighter when there is less luck involved.) The less we know about a topic or the more luck involved, the wider our range.
We have the opportunity to learn from the way the future unfolds to improve our beliefs and decisions going forward. The more evidence we get from experience, the less uncertainty we have about our beliefs and choices. Actively using outcomes to examine our beliefs and bets closes the feedback loop, reducing uncertainty. This is the heavy lifting of how we learn.
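One way to make that feedback loop concrete is Bayesian updating: treat your confidence in a belief as a probability and revise it with each outcome. Here is a minimal sketch, with assumed numbers that are not from the book:

```python
# A belief: "my strategy is good." The numbers below are illustrative assumptions.
prior = 0.60            # initial confidence that the strategy is good
p_win_if_good = 0.70    # chance of a winning outcome if the strategy is good
p_win_if_bad = 0.45     # chance of a win anyway, due to luck, if it is bad

# We observe one win. Bayes' rule gives the updated belief P(good | win).
evidence = prior * p_win_if_good + (1 - prior) * p_win_if_bad
posterior = prior * p_win_if_good / evidence

print(f"Belief after one win: {posterior:.2f}")
```

Because a win is only a loose signal (bad strategies win 45% of the time here), one outcome moves the belief modestly rather than settling the question.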
We are good at identifying the “-ER” goals we want to pursue (better, smarter, richer, healthier, whatever). But we fall short in achieving our “-ER” because of the difficulty in executing all the little decisions along the way to our goals.
Outcomes don’t tell us what’s our fault and what isn’t, what we should take credit for and what we shouldn’t. Unlike in chess, we can’t simply work backward from the quality of the outcome to determine the quality of our beliefs or decisions.
Self-serving bias is a deeply embedded and robust thinking pattern. Understanding why this pattern emerges is the first step to developing practical strategies to improve our ability to learn from experience. These strategies encourage us to be more rational in the way we field outcomes, fostering open-mindedness in considering all the possible causes of an outcome, not just the ones that flatter us.
Whether it is a poker hand, an auto accident, a football call, a trial outcome, or a business success, there are elements of luck and skill in virtually any outcome.
Not only are all those other people’s outcomes plentiful, they are also free (aside from any ante). When a poker player chooses to play a hand, they are putting their own money at risk. When a poker player is just watching the game, they get to sit back while other people put money at risk. That’s an opportunity to learn at no extra cost.
That’s schadenfreude: deriving pleasure from someone else’s misfortune. Schadenfreude is basically the opposite of compassion.
Lyubomirsky noted, however, that “the general conclusion from almost a century of research on the determinants of well-being is that objective circumstances, demographic variables, and life events are correlated with happiness less strongly than intuition and everyday experience tell us they ought to be. By several estimates, all of these variables put together account for no more than 8% to 15% of the variance in happiness.” What accounts for most of the variance in happiness is how we’re doing comparatively.
Think of it like a ship sailing from New York to London. If the ship’s navigator introduces a one-degree navigation error, it would start off as barely noticeable. Unchecked, however, the ship would veer farther and farther off course and would miss London by miles, as that one-degree miscalculation compounds mile over mile. Thinking in bets corrects your course. And even a small correction will get you more safely to your destination.
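The compounding in the ship analogy is easy to quantify. As a back-of-the-envelope sketch (assuming a route of roughly 3,000 nautical miles, a figure not from the book), the cross-track error grows with distance times the sine of the heading error:

```python
import math

# Illustrative assumption: New York to London is roughly 3,000 nautical miles.
distance_nm = 3000
error_deg = 1.0

# Small-angle approximation of how far off course the ship ends up:
# cross-track error ~= distance * sin(heading error).
off_course_nm = distance_nm * math.sin(math.radians(error_deg))

print(f"Off course by about {off_course_nm:.0f} nautical miles")
```

A single degree, imperceptible at the dock, puts the ship more than fifty miles from London, which is the point of course-correcting early and often.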
A growing number of businesses are, in fact, implementing betting markets to address the difficulty of eliciting and encouraging contrary opinions. Companies implementing prediction markets to test decisions include Google, Microsoft, General Electric, Eli Lilly, Pfizer, and Siemens. People are more willing to offer their opinion when the goal is to win a bet rather than get along with people in a room.
Per the BBS paper, CUDOS stands for Communism (data belong to the group), Universalism (apply uniform standards to claims and evidence, regardless of where they came from), Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and Organized Skepticism (discussion among the group to encourage engagement and dissent).
If you want to pick a role model for designing a group’s practical rules of engagement, you can’t do better than Merton. To start, he coined the phrase “role model,” along with “self-fulfilling prophecy,” “reference group,” “unintended consequences,” and “focus group.” He founded the sociology of science and was the first sociologist awarded the National Medal of Science.
In my consulting, I’ve encouraged companies to make sure they don’t define “winning” solely by results or providing a self-enhancing narrative. If part of corporate success consists of providing the most accurate, objective, and detailed evaluation of what’s going on, employees will compete to win on those terms. That will reward better habits of mind.
Knowing how something turned out creates a conflict of interest that expresses itself as resulting.
If we embrace uncertainty and wrap that into the way we communicate with the group, confrontational dissent evaporates because we start from a place of not being sure. Just as we can wrap our uncertainty into the way we express our beliefs (“I’m 60% sure the waiter is going to mess up my order”), when we implement the norm of skepticism, we naturally modulate expression of dissent with others.
Expression of disagreement is, after all, just another way to express our own beliefs, which we acknowledge are probabilistic in nature. Therefore, overtly expressing the uncertainty in a dissenting belief follows. No longer do we dissent with declarations of “You’re wrong!” Rather, we engage by saying, “I’m not sure about that.” Or even just ask, “Are you sure about that?” or “Have you considered this other way of thinking about it?”
“I agree with you that [insert specific concepts and ideas we agree with], AND . . .” After “and,” add the additional information. In the same exchange, if we said, “I agree with you that [insert specific concepts and ideas you agree with], BUT . . . ,” that challenge puts people on the defensive. “And” is an offer to contribute. “But” is a denial and repudiation of what came before.
In real-life decision-making, when we bring our past- or future-self into the equation, the space-time continuum doesn’t unravel. Far from turning us into a liquefied blob, a visit from past or future versions of us helps present-us make better bets.
In-the-moment thinking distorts the scope of time. As decision-makers, we want to collide with past and future versions of ourselves. Our capacity for mental time travel makes this possible. As is the case with accountability, such meetings can lead to better decisions: at the moment of the decision, accountability to our group can pop us briefly into the future to imagine the conversation about the decision we will have with that group. Running that conversation will often remind us to stay on a more rational path.
Away from the poker table, we don’t feel or experience the consequences of most of the decisions we make right away. If we are winning or losing as a result of a particular decision, the consequences may take time to reveal themselves.
In business, if a leader ignores the ideas of an intern because “What does an intern possibly know?” it could take years for that intern to become a successful competitor before that mistake becomes obvious. If the trajectory of that business suffers because of the poverty of new ideas, the owner of the business might never realize the effect of that attitude.
“I stay up late at night because I’m Night Guy. Night Guy wants to stay up late. ‘What about getting up after five hours of sleep?’ ‘That’s Morning Guy’s problem. That’s not my problem. I’m Night Guy. I stay up as late as I want.’ So you get up in the morning: you’re exhausted, you’re groggy. ‘Oooh, I hate that Night Guy.’ See, Night Guy always screws Morning Guy.”
As Nietzsche points out, regret can do nothing to change what has already happened.
“Every 10-10-10 process starts with a question. . . . [W]hat are the consequences of each of my options in ten minutes? In ten months? In ten years?” This set of questions triggers mental time travel that cues that accountability conversation.
“How would I feel today if I had made this decision ten minutes ago? Ten months ago? Ten years ago?”
The flat tire isn’t as awful as it seems in the moment. This kind of time-travel strategy calms down the in-the-moment emotions we have about an event, so we can get back to using the more rational part of our brain. Recruiting past-us and future-us in this way activates the neural pathways that engage the prefrontal cortex, inhibiting emotional mind and keeping events in more rational perspective. This discourages us from magnifying the present moment, blowing it out of proportion and overreacting to it.
We would be better off thinking about our happiness as a long-term stock holding. We would do well to view our happiness through a wide-angle lens, striving for a long, sustaining upward trend in our happiness stock, so it resembles the first Berkshire Hathaway chart.
Imagine that you go to a casino for an evening of blackjack with your friends. In the first half hour, you go on a winning streak and are ahead $1,000. You keep playing because you and your friends are having such a good time. For the next hour and a half, it seems like you never win a hand. You lose back the $1,000 and break even for the night. How are you feeling about that? Now imagine that you lose $1,000 in the first half hour and stick around playing with your friends because they are having a great time. In the next hour and a half you go on a winning streak that erases the early loss, and you end up breaking even for the night. How are you feeling about that?
The way we field outcomes is path dependent. It doesn’t so much matter where we end up as how we got there. What has happened in the recent past drives our emotional response much more than how we are doing overall. That’s how we can win $100 and be sad, and lose $100 and be happy.
The problem in all these situations (and countless others) is that our in-the-moment emotions affect the quality of the decisions we make in those moments, and we are very willing to make decisions when we are not emotionally fit to do so.
Having a nuanced, precise vocabulary is what jargon is all about. It’s why carpenters have at least a dozen names for different kinds of nails, and in the field of neuro-oncology, there are more than 120 types of brain and central nervous system tumors.
If you blow some recent event out of proportion and react in a drastic way, you’re on tilt.
We can take some space till we calm down and get some perspective, recognizing that when we are on tilt we aren’t decision fit. Aphorisms like “take ten deep breaths” and “why don’t you sleep on it?” capture this desire to avoid decisions while on tilt. We can commit to asking ourselves the 10-10-10 questions or things like, “What’s happened to me in the past when I’ve felt this way?” or “Do I think it’s going to help me to be in this state while I’m making decisions?” Or we can gain perspective by asking how or whether this will have a real effect on our long-term happiness.
Home buyers, understanding that in the moment they might get emotionally attached to a home, can commit in advance to their budget. Once they decide on a house they want to buy, they can determine in advance the maximum amount they’d be willing to pay for it, so they don’t get caught up in the heat of the bidding.
Throwing out all the junk food in our house makes it impossible for midnight-us to easily, mindlessly, down a pint of ice cream. But as long as we have a car or food delivery services, that kind of food is still available somewhere. It just takes a lot more effort to get it. The same is true if we ask the waiter not to put the bread basket on the table at the restaurant. We can still, obviously, get bread, but now we have to ask the waiter to bring it. In fact, even Ulysses had to rely on his crew to ignore him if, upon hearing the Sirens’ song, he signaled them to free him.
Ulysses contracts can help us in several ways to be more rational investors. When we set up an automatic allocation from our pay into a retirement account, that’s a Ulysses contract. We could go through the trouble of changing the allocation, but setting it up initially gives our goal-setting, System 2–self a chance to precommit to what we know is best for our long-term future. And if we want to change the allocation, we have to take some specific steps to do so, creating a decision-interrupt.
In all these instances, the precommitment or predecision doesn’t completely bind our hands to the mast. An emotional, reactive, irrational decision is still physically possible (though, to various degrees, more difficult). The precommitments, however, provide a stop-and-think moment before acting, triggering the potential for deliberative thought. Will that prevent an emotional, irrational decision every time? No. Will we sometimes still decide in a reflexive or mindless way? Of course. But it will happen less often.
For us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like. Any decision can result in a set of possible outcomes.
This process can be implemented for any sales team. Assign probabilities for closing or not closing sales, and the company can do better at establishing sales priorities, planning budgets and allocating resources, evaluating and fine-tuning the accuracy of its predictions, and protecting itself against resulting and hindsight bias.
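Treating each prospective sale as a bet with a probability makes the arithmetic simple: the expected value of the pipeline is the probability-weighted sum of the deals. A minimal sketch, with hypothetical deal names and numbers (none of these are from the book):

```python
# Hypothetical sales pipeline: (deal name, value, estimated probability of closing).
pipeline = [
    ("Acme renewal",   120_000, 0.80),
    ("Initech upsell", 250_000, 0.35),
    ("Hooli new logo", 500_000, 0.10),
]

# Expected value of each deal, treated as a bet.
for name, value, p in pipeline:
    print(f"{name}: expected {value * p:,.0f}")

# The probability-weighted pipeline total is what budgets should be planned around,
# not the sum of face values.
total_ev = sum(value * p for _, value, p in pipeline)
print(f"Pipeline expected value: {total_ev:,.0f}")
```

Recording the probabilities up front also creates a calibration record: when a 35% deal closes, that is one data point, not proof the forecast was wrong, which is exactly the protection against resulting the passage describes.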
Over the long run, however, seeing the world more objectively and making better decisions will feel better than turning a blind eye to negative scenarios. In a way, backcasting without premortems is a form of temporal discounting: if we imagine a positive future, we feel better now, but we’ll more than compensate for giving up that immediate gratification through the benefits of seeing the world more accurately, making better initial decisions, and being nimbler about what the world throws our way.
As the future becomes the past, what happens to all those branches? The ever-advancing present acts like a chainsaw. When one of those many branches happens to be the way things turn out, when that branch transitions into the past, present-us cuts off all those other branches that didn’t materialize and obliterates them. When we look into the past and see only the thing that happened, it seems to have been inevitable. Why wouldn’t it seem inevitable from that vantage point?
Even the smallest of twigs, the most improbable of futures—like the 2%–3% chance Russell Wilson would throw that interception—expands when it becomes part of the mighty trunk. That 2%–3%, in hindsight, becomes 100%, and all the other branches, no matter how thick they were, disappear from view. That’s hindsight bias, an enemy of probabilistic thinking.