How the gambler’s fallacy makes for bad decisions
- August 31, 2015
- CBR - Behavioral Science
The number of people driven from their homes by war and persecution reached nearly 60 million worldwide last year—roughly equivalent to the population of the United Kingdom, and the largest number since the Office of the United Nations High Commissioner for Refugees began keeping records in 1951. About 1.66 million people applied for asylum or refugee status in 2014, also the highest number since World War II.
These people bet their lives on the quick decision of an immigration judge typically facing an enormous caseload. “The volume alone is like a traffic court, and yet the stakes for someone who asserts a claim of asylum . . . if I am wrong—or even if I’m right but, because the law doesn’t allow me to grant relief, I have to deny them—they could be going back and facing death,” Dana Leigh Marks, president of the US National Association of Immigration Judges, told Al Jazeera America in July 2015.
Marks and her colleagues make their judgments by reviewing the merits of each case. But a surprising factor influences their decisions: the outcome of the prior case. That information shouldn’t have any bearing on the next judgment, but research shows a small, yet significant, tendency for asylum judges to decide a case in the opposite fashion from the previous case.
Asylum judges aren’t alone in displaying this effect. Loan officers are less likely to approve a loan if they approved the previous loan application they reviewed. Baseball umpires are less likely to call a pitch a strike if they deemed the previous pitch a strike.
People think they’re acting rationally, but behavioral-science research has repeatedly shown that, in thinking so, they deceive themselves. Our human brains are wired to use heuristics, to take shortcuts, and to evaluate choices based on imperfect information. These tendencies lead almost inevitably to biases and consequential mistakes in decision-making.
One well-known logical flaw is the gambler’s fallacy, which tends to affect not only roulette players but anyone forced to make a series of similar decisions. This reasoning error leads people to expect that the outcome of a decision will be the opposite of the previous outcome or series of outcomes. For instance, if a spin of the roulette wheel lands the ball on red, the gambler expects the next spin to land it on black; if the last applicant was deemed worthy of asylum, the judge leans toward denying the next one. But that’s not how life usually works: the outcomes in a sequence are often independent, and the right decision for the current case has nothing to do with the decision for the previous case. These faulty assumptions can spur poor choices for gamblers and for anyone else faced with making repeated decisions.
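To see why the expectation of alternation is mistaken, consider a minimal simulation, not part of the study; the fair coin and the sample size are illustrative assumptions. In a long random sequence, the chance of heads is the same whether the previous flip was heads or tails:

```python
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Split each outcome by what the previous flip was
after_heads = [curr for prev, curr in zip(flips, flips[1:]) if prev]
after_tails = [curr for prev, curr in zip(flips, flips[1:]) if not prev]

# Both conditional frequencies come out near 0.5: the flips are independent
print(f"P(heads | previous heads) ~ {sum(after_heads) / len(after_heads):.3f}")
print(f"P(heads | previous tails) ~ {sum(after_tails) / len(after_tails):.3f}")
```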
“One downside of human decision-making seems to be that [we] suffer from these types of biases,” says Kelly Shue, associate professor of finance at Chicago Booth. Shue; Daniel Chen of ETH Zurich; and Tobias J. Moskowitz, Fama Family Professor of Finance at Chicago Booth, studied the decisions of US asylum judges, Indian loan officers, and Major League Baseball umpires.
In the financial markets, the mental misfirings of loan officers can hobble investment returns. Chen, Moskowitz, and Shue examined loan-application reviews from a field experiment conducted in India. In the experiment, loan officers attending a training session were asked to review application files for real small-business loans, allowing the researchers to compare the participants’ decisions with real-life loan performance. The researchers classified the loans as performing (loans that were approved in real life and did not default) or nonperforming (loans that were approved but later defaulted, or that were rejected in the first round of review). The job of the loan officers was to review the files and guess which category each one belonged to.
The 188 loan officers, with an average of 10 years’ banking experience, were randomly assigned to one of three groups. One group received a small payment for every loan they approved, regardless of quality. The only incentive to make good decisions was to maintain one’s reputation. The second group received monetary incentives to approve good loans and reject bad ones, and the third group received payment to approve good loans and lost money when they approved bad ones.
The study uncovers the gambler’s fallacy in action: when the loan officers received payment for any loan they approved, regardless of quality, they were 8 percentage points less likely to approve the current loan if they approved the previous loan. The effect became stronger if the officer had approved the two previous loans in a row.
On the other hand, when the loan officers received stronger incentives for their accuracy, “these effects become muted,” the researchers write. The appearance of the gambler’s fallacy also decreased with education, age, and experience, as well as with a greater length of time spent reviewing the loan application.
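A finding like this amounts to comparing approval rates conditional on the previous decision. The sketch below, a toy model rather than the study’s data, simulates a hypothetical decision-maker whose approval probability drops by 8 percentage points right after an approval, then recovers that gap from the decision sequence alone:

```python
import random

random.seed(2)

# Hypothetical reviewer: 50% baseline approval rate, with the rate reduced
# by 8 percentage points immediately after an approval (assumed numbers).
decisions = []
prev = False
for _ in range(200_000):
    p = 0.50 - (0.08 if prev else 0.0)
    prev = random.random() < p
    decisions.append(prev)

# Approval rates conditional on the previous decision
after_approve = [curr for pre, curr in zip(decisions, decisions[1:]) if pre]
after_reject = [curr for pre, curr in zip(decisions, decisions[1:]) if not pre]
gap = sum(after_approve) / len(after_approve) - sum(after_reject) / len(after_reject)
print(f"approval-rate gap after an approval: {gap:+.3f}")  # roughly -0.08
```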
To some, the findings might support an argument for more automation in the approval process. Loans can be scored mathematically, so it’s possible for a computer to take the available information and spit out an acceptance or rejection for each applicant. A loan officer, however, might be better equipped to listen to the applicant’s tone in conversation and make inferences about the person’s circumstances that aren’t covered in a credit report. “If you think there are extenuating circumstances, a machine can’t understand that in the same way,” Shue says. “There are trade-offs.”
Prior studies of this problem came from computer modeling and lab experiments, not fieldwork, which left a hole in academics’ knowledge, Moskowitz says: “You can learn something from those experiments, but you never get a sense if [participants] would behave differently if the stakes were really high.”
Shue adds, “Our paper explores one prediction that comes out of a lot of lab studies, which is that people tend to expect the opposite of what they just saw because they expect a high rate of alternation. We show that people continue to think this way when making decisions as part of their real, full-time jobs.”
This effect can cause people to make the wrong decision, the researchers find. “The evidence is most consistent with the law of small numbers and the gambler’s fallacy—that people underestimate the likelihood of sequential streaks occurring by chance,” they write. These mistakes are most likely to occur in specific situations, according to the researchers: when the decision-maker has less experience; when the previous decisions have been on a streak; when the case in question is similar to the previous case, or is decided immediately afterward; and when there are no serious consequences if the decision-maker gets it wrong.
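The claim that people underestimate chance streaks is easy to check numerically. A short Monte Carlo sketch, with an arbitrary run length and sequence length rather than figures from the paper, shows that long streaks arise in purely random sequences far more often than intuition suggests:

```python
import random

random.seed(1)

def longest_run(seq):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# How often does a run of 6 or more identical outcomes appear
# somewhere in 100 fair coin flips? Most people guess it's rare.
trials = 100_000
hits = sum(
    longest_run([random.random() < 0.5 for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(f"P(streak of 6+ in 100 fair flips) ~ {hits / trials:.2f}")
```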
Shue and Moskowitz suspect the gambler’s fallacy might play a role in other real-life settings. For example, looking at job candidates’ resumes could lead to the same phenomenon. “I would fully suspect you’ve seen this if you’ve seen a couple of good candidates come in or made some recent positive decisions; you’re probably negatively biased the next time,” Moskowitz says.
Exogenous events could also spur a rush to judgment, which could push decision-makers toward heuristics that rely on feelings, not facts, to make choices. In courtrooms, for example, there’s evidence that judges make different decisions depending on how close they are to lunch, Moskowitz points out.
Can people move beyond their biases? Shue and Moskowitz don’t directly study this question, but other researchers have looked at what happens when people are told about the tendency to make poor decisions based on ingrained biases. For instance, Shue says, researchers have reminded participants that flips of a coin are not correlated. This method has limited success, she notes: “People seem to have this very fundamental belief that there’s going to be alternation,” so that when a coin lands on heads, the next flip is likely to land on tails. That’s not true, but people don’t see it that way.
In baseball, where cameras can determine with certainty whether a pitch is a ball or a strike, there is a definitively correct answer, making it reasonable for machines to make the determination. That prospect might scare umpires into paying more attention to their calls. “If they were graded on every single pitch, how well or how poorly they did in getting it right, pitch by pitch, you’d see less [bias],” Moskowitz says. “Awareness and incentives can help.”
According to the research, baseball umpires could use those nudges. The researchers examined 1.5 million called pitches—those cases in which the batter does not swing, and the umpire must make a judgment—during the 2008–12 Major League Baseball seasons. They find that umpires are 1.5 percentage points less likely to call a pitch a strike if they called the previous pitch a strike. “This effect more than doubles when the current pitch is close to the edge of the strike zone (making it a less obvious call) and is also significantly larger following two previous calls in the same direction,” the researchers write. “Put differently, MLB umpires call the same pitches in the exact same location differently depending solely on the sequence of previous calls.”
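The comparison behind these numbers conditions the current call on the preceding calls. The sketch below simulates a hypothetical umpire whose strike probability falls after one, and especially after two, consecutive strike calls; the baseline rate and penalties are illustrative assumptions that loosely mirror the pattern the researchers describe, not their estimates:

```python
import random

random.seed(3)

# Hypothetical umpire: 40% baseline strike rate on called pitches, reduced
# after one prior strike call and reduced further after two in a row.
calls = []
for _ in range(500_000):
    streak2 = len(calls) >= 2 and calls[-1] and calls[-2]
    streak1 = len(calls) >= 1 and calls[-1]
    p = 0.40 - (0.035 if streak2 else 0.015 if streak1 else 0.0)
    calls.append(random.random() < p)

def rate(cond):
    """Strike rate on pitches whose two preceding calls satisfy cond."""
    sel = [calls[i] for i in range(2, len(calls)) if cond(calls[i - 2], calls[i - 1])]
    return sum(sel) / len(sel)

print(f"strike rate after two strike calls: {rate(lambda a, b: a and b):.3f}")
print(f"strike rate after two ball calls:   {rate(lambda a, b: not (a or b)):.3f}")
```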
Those findings might indeed make umpires fear being replaced by robots. After all, tennis has done away with human line calls for serves, Moskowitz notes, because the human eye simply can’t see movement that fast. But professional baseball is unlikely to eliminate umpires, out of a desire to maximize entertainment value, he says. He suggests that batters might be able to alter their response at home plate based on how the umpire called the previous pitches.
Moskowitz is currently working on a sports-betting study, comparing gamblers’ strategies to those of stock-market investors. “I’m always interested in how people behave and make decisions under uncertainty,” he says.
Shue, who focuses her research on behavioral finance and economics, is interested in “departures from rational decision-making and ways that people make mistakes in a predictable way,” she says. She is working on further research related to contrast effects, another behavioral-economics take on decision-making gone wrong. The new research, with Samuel Hartzmark, assistant professor of finance at Chicago Booth, focuses on public companies’ earnings announcements and examines how people may perceive a situation differently depending on what they’ve just seen. Investors are often less impressed by one company’s earnings numbers if another, unrelated company has just announced very impressive earnings. Logically, this doesn’t make sense, Shue says. She’s studying how this bias could affect financial markets.
It’s one thing to lose money based on faulty logic, but when it comes to the decisions of immigration judges, it’s not just money at stake but human lives. In their study, Chen, Moskowitz, and Shue examine data from the Transactional Records Access Clearinghouse on 150,357 decisions in US refugee asylum cases considered in immigration courts by 357 judges from 1985 to 2013. Each court covers a geographic region, and cases in that region are randomly assigned among its judges.
The researchers find that moderate judges—those who do not show a strong pattern of approving or denying most requests—are about 1 percentage point less likely to grant asylum to an applicant if the previous case was approved. Further, “after a streak of two grants, judges are 5.5 percentage points less likely to grant asylum relative to decisions following a streak of two denials,” they write.
More education might help judges use their accumulated expertise to adjust their reasoning, once they are made aware of potential biases. Legal experts also have advocated for a more independent and effective appellate review in the United States, giving applicants for asylum a better chance to have incorrect decisions reversed. “Currently, the federal courts defer excessively, especially in the Southern circuits, to decisions of immigration courts and the Board of Immigration Appeals, even though those decisions appear to depend to a large extent on the identity, personal characteristics, and prior work experience of the adjudicator,” noted a 2007 article in the Stanford Law Review.
As behavioral finance and behavioral economics delve deeper into the workings of our minds, more information about how humans make choices is coming to the fore—and it’s often unflattering. There’s much more work to be done, first to recognize our biases, and then, perhaps, to address them. If unconscious bias creeps in when we’re aiming for fairness, where else does it subvert our best intentions?
Understanding the subconscious elements of our decisions is a formidable challenge, and studies in behavioral finance and behavioral economics are getting us there. One thing we now know: to be fair, we may have to keep our well-intentioned thoughts from getting in the way.