Within months of COVID-19’s first emergence in China, the World Health Organization admitted it was battling, alongside the pandemic, something nearly as dangerous and certainly as complicated: a “massive ‘infodemic,’” in the agency’s words. Ignorance of the virus had evolved into misinformation, rumor, conspiracy theory, and political fodder.
Even the best-intentioned medical advice changed regularly. Anthony Fauci, the top US medical expert on the pandemic, went, in the course of four months, from saying the virus was “not something that the citizens of the United States right now should be worried about,” and discouraging mask wearing, to strongly promoting masks, shutdowns, and social distancing—a path traveled by many other public-health officials and agencies too.
The seemingly contradictory advice at top levels of government, sometimes twisted or repeated out of context, has left many Americans looking elsewhere for guidance. And as the WHO noted, the alternative sources do not always inspire confidence. In an August poll by Politico and market research company Morning Consult, 43 percent of Americans said they would take a COVID-19 vaccine if advised to do so by Fauci or the Centers for Disease Control and Prevention (CDC). But 46 percent said they would listen to family. Americans, it would seem, have more faith in armchair experts than the more conventional variety.
But if the vagaries of COVID-19 might explain this impulse—if it feels like no one knows what’s going on, and so you might as well trust your loved ones over strangers—trust in experts appears to be eroding more broadly too. Almost a third of Americans either don’t think that global warming is happening or aren’t sure, according to the Yale Program on Climate Change and Communication. Growing numbers of parents opt out of vaccination programs for their children that have proven not only safe but wildly effective at eradicating deadly diseases in the West.
This may be in part because, as with COVID-19, expert opinion changes. Prominent economists have admitted they failed to spot the 2007–08 credit crunch or underestimated the pain that free trade would impose on manufacturing communities in the US. There have been significant reversals in medical orthodoxy, from the usefulness of mammograms to the wisdom of hormone-replacement therapy or knee surgery. And the past decade has seen academics tearing down once-hallowed studies for not meeting statistical standards.
But the odds of confronting and overcoming challenges such as COVID-19 and climate change without the benefit of, and a great deal of trust in, professional expertise seem low. When an effective COVID-19 vaccine is developed and released, it will need to be used widely to stop the spread of the disease.
All of this makes confronting distrust a pressing concern. Fortunately, it’s possible that science, however mistrusted, may be able to restore some balance by helping people understand when and why to trust experts, and by highlighting the limitations of what we know.
Granted, these studies have been conducted by experts. But hear them out.
While Fauci is receiving death threats from Americans who think COVID-19 is a hoax, scientists in general enjoy a high degree of trust among the public. A 2018 report by the National Science Board found nine in 10 Americans agreed that scientists are “helping solve challenging problems,” and a January 2019 Pew Research Center survey found that 86 percent of respondents trusted scientists at least “a fair amount,” while an even greater number said the same about medical scientists in particular.
However, there are signs that trust is eroding. Research by Edelman, the public-relations firm, reported a slight decline between March and May 2020 in the number of respondents to a series of global surveys who felt scientists, doctors, and national and international health officials would “tell you the truth about the [corona]virus and its progression”—though these experts still inspired more trust than did chief executives, heads of government, or journalists.
Economists and bankers don’t fare as well. The Financial Trust Index, a quarterly survey of about 1,000 US households conducted by Chicago Booth and Northwestern, tracks the trust Americans have in institutions, including banks, stock markets, and corporations. The survey was launched in 2008, during the financial crisis, and when it debuted, about 20 percent of respondents said they trusted these institutions. After a decade of economic recovery and record-low unemployment that lasted until the crisis of 2020, the trust index has climbed to near 35 percent. That’s better than before, but not exactly impressive—unless compared with levels of trust placed in the US federal government, also tracked by the survey. As of the end of 2019, trust in the government hovered around 20 percent, registering a slight decrease among people in both political parties.
In 2013, Northwestern’s Paola Sapienza and Chicago Booth’s Luigi Zingales, the researchers behind the trust index, analyzed whether Americans trust and agree with economists. They posed policy and economics questions to both ordinary Americans and US economists on Booth’s Economic Experts Panel, run by the Initiative on Global Markets, which periodically surveys several dozen senior faculty—including Nobel laureates, John Bates Clark medalists, past members of the president’s Council of Economic Advisers, and other similarly recognized economists—at elite research institutions. The average American, whose opinion was gleaned from the quarterly Financial Trust Index surveys of representative samples of US citizens, tended to disagree with the economic experts, particularly on topics where the experts agreed strongly with one another. And nonexperts maintained their opinions even when they learned what the economists had to say.
Distrust in experts is not exclusive to the US. Chicago Booth’s Michael Weber tapped into the IGM European Economic Experts Panel with research partners in Italy and Germany: Luigi Guiso of the Einaudi Institute for Economics and Finance and the Ifo Institute’s Sebastian Link. In January, they asked nonexpert Europeans (as well as central-bank economists and German executives) about topics ranging from immigration to competition with China, artificial intelligence, and the merits of raising the retirement age. They then tested whether respondents’ opinions changed after learning what the experts on the panel thought.
Though the responses are still being analyzed, Weber notes that he was struck by two early results. First, while expert and lay opinion were more closely aligned in Europe than in the US, nonexpert Europeans were similarly unlikely to be swayed by expert opinion. Second, the researchers find a pattern of polarization. After being told of a consensus among experts, respondents with an interest in public policy or economics were more likely to change their opinions to match those of the experts, but respondents without those interests, or who had previously indicated they distrusted academic expertise, moved in the opposite direction. “This is something you imagine in the US, but it’s there in Europe too,” says Weber, who grew up outside of Heidelberg, Germany, in a small town whose residents he describes as conservative and respectful of expert opinion.
During the COVID-19 pandemic, residents of that town have observed social-distancing rules with no discernible resistance, Weber says. But elsewhere in Germany, protests against lockdowns demonstrated that “there’s a fraction of the population there that is not interested in what scientists say is safe,” he laments.
Similar polarization extends beyond Europe, according to Edelman, which for two decades has published an annual index probing public trust in nongovernmental organizations, businesses, government, and media. The 2020 index numbers show that the gap in trust levels between the informed public—college-educated high earners who report an interest in public policy and business news—and the remaining mass population has increased six percentage points since 2012. The credibility of academic experts follows a similar trend: in 2012, 65 percent of the mass population and 69 percent of informed-public respondents said academics were very or extremely credible; since then, the proportion has flatlined in the mass population and climbed to 71 percent among the informed public.
Chart: In an April 2020 survey, fewer than half of respondents expressed trust in US financial institutions, the federal government, or their state government. Source: Sapienza and Zingales, 2009; financialtrustindex.org
According to Sapienza and Zingales, the lack of trust—in economists, at least—stems from the fact that experts and nonexperts often start with different unspoken assumptions, which leads them to different conclusions about issues such as the North American Free Trade Agreement. For example, in the researchers’ 2013 study, more experts than nonexperts considered government to be trustworthy. The experts, who trusted government to, say, neutralize the painful effects of trade agreements, said that NAFTA has benefited US citizens. Nonexperts, with less faith in government, didn’t share that conclusion. Similarly, the experts saw a decade of rising gas prices as the result of market forces, while nonexperts saw it as the result of energy policy.
Why don’t nonspecialists reason their way through verdicts as dispassionately as experts seem to? And if they can’t, why don’t they acknowledge their knowledge gaps and emotional diversions, and cede this ground to the experts? Tonia Ries, executive director of the Edelman arm that runs the Edelman Trust Barometer, an annual trust and credibility survey, says the general public finds experts more credible than CEOs, celebrities, government officials, and regulators—but survey respondents rank another category of person as nearly as credible as experts: “a person like yourself.”
But research by University of California at Berkeley’s Stefano DellaVigna and Chicago Booth’s Devin G. Pope suggests that laypeople have some reason to trust themselves in addition to experts; and yet, they are not especially good at recognizing how much faith to put in their own predictions. For example, they are bad at estimating how much their own experience and knowledge can or will inform smart decision-making.
A lot of research over the past 50 years has explored when and why one person will trust another, but far fewer studies have focused on whether trust, once given, is well placed. Never mind whether people appear to be trustworthy—what makes people deserving of that trust?
Research by Chicago Booth’s Emma Levine, Hong Kong University of Science and Technology’s T. Bradford Bitterly, Carnegie Mellon’s Taya R. Cohen, and University of Pennsylvania’s Maurice E. Schweitzer finds that the best indicator of trustworthiness is a person’s propensity to feel guilt—or, more particularly, the degree to which she anticipates feeling guilty about potential transgressions.
Economists, psychologists, and sociologists have established that people often decide to trust someone on the basis of the person’s sex, nationality, or social status. It can also depend on the impression the person makes. People seen as competent or kind are more likely to be trusted, and offering apologies also helps. On the other hand, individuals who seem to be concealing information or have broken promises dent their chances of winning anyone over.
To find out what makes people genuinely trustworthy, the researchers conducted a series of lab experiments in which participants had to decide whether to share money or other rewards with partners who had trusted them to do so.
In one test, participants in the top quartile of guilt proneness were 1.6 times more likely than those in the lowest quartile to do what they had been trusted to do. In another study, in which students had to decide whether to share a pack of lottery tickets with someone who had trusted them to do so, participants who rated themselves high in the propensity for guilt were 1.4 times more likely to share the tickets than those with average guilt propensity.
A person’s tendency to feel guilty turns out to be a better predictor of trustworthy behavior than any of what psychologists call the Big Five personality traits: agreeableness, extroversion, conscientiousness, openness, and neuroticism.
The researchers argue that this is because guilt is strongly and specifically tied to a sense of interpersonal responsibility that tends to drive trustworthiness. They demonstrate this in a study in which they manipulated how vulnerable partners in a trust game made themselves to participants. In the study, partners received $20 and had to decide how much of it to pass to participants, who then decided how much of the amount (which was tripled) to share in return.
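The arithmetic of that game is simple enough to write out. The sketch below is purely illustrative: the $20 endowment and the tripling of whatever is passed follow the study as described above, but the specific amounts passed and shares returned are hypothetical choices made only for the example.

```python
# Minimal sketch of the trust game's payoffs (illustrative assumptions only).
# The $20 endowment and the tripling rule follow the description above;
# the example amounts passed and shares returned are hypothetical.

def trust_game(endowment: float, amount_passed: float, share_returned: float):
    """Return (partner payoff, participant payoff) for one round."""
    assert 0 <= amount_passed <= endowment
    assert 0 <= share_returned <= 1
    pot = 3 * amount_passed            # the amount passed is tripled
    returned = share_returned * pot    # the participant decides how much to send back
    partner_payoff = endowment - amount_passed + returned
    participant_payoff = pot - returned
    return partner_payoff, participant_payoff

# A partner who passes the full $20 is maximally vulnerable to the participant's choice:
print(trust_game(20, 20, 0.5))  # an even split of the tripled pot -> (30.0, 30.0)
print(trust_game(20, 20, 0.0))  # keeping everything -> (0.0, 60.0)
```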
When partners passed the entire $20, they made themselves very vulnerable and demonstrated a lot of trust that participants would share the wealth. This influenced the behavior of guilt-prone people, leading participants who were high in guilt proneness to feel responsible for their partner and therefore act in a trustworthy manner. On the other hand, when partners did not make themselves particularly vulnerable, guilt proneness had no influence on trustworthy behavior.
“Participants return significantly greater proportions of money to others who have taken substantial risks when trusting them than to those who have not made themselves vulnerable to the risks of trusting,” the researchers write, adding that “highly guilt-prone individuals are not simply more generous; rather, they are sensitive to social expectations.”
This sensitivity can also be manipulated with contextual information. When the researchers introduced into one of their studies a code of conduct that promoted personal responsibility, participants—guilt prone or otherwise—became more trustworthy.
There is a clear lesson for anyone attempting to suss out trustworthiness—perhaps when evaluating a potential business partner, love interest, or friend. “When deciding in whom to place trust,” write the researchers, “trust the guilt-prone.”—Rose Jacobs
Emma E. Levine, T. Bradford Bitterly, Taya R. Cohen, and Maurice E. Schweitzer, “Who Is Trustworthy? Predicting Trustworthy Intentions and Behavior,” Journal of Personality and Social Psychology, September 2018.
Chart: After participants answered questions assessing their tendency to feel guilty about private transgressions, they played a trust game in which they decided whether to split a small bounty of lottery tickets with someone who had just shared with them; more-guilt-prone participants were more likely to share. Source: Levine et al., 2018
DellaVigna and Pope asked experts and nonexperts to predict the results of an experiment related to incentivizing certain behavior.
Laypeople’s forecasts, especially when aggregated, were often as good as those of the experts in the study. The average forecast among 208 professors was highly accurate, and better than the average forecast by individual nonexperts (graduate students, undergraduates, and participants on the crowdsourcing website Amazon Mechanical Turk, who were meant to represent laypeople). Yet when the researchers pooled the lay forecasts, the aggregated predictions were as good as those of the best individual experts.
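The aggregation effect at work here is the familiar wisdom-of-crowds property of averaging: individually noisy forecasts cancel out as more of them are pooled. The simulation below illustrates that statistical point with made-up numbers; it is not drawn from the DellaVigna–Pope data, and the noise levels are assumptions chosen only for the demonstration.

```python
# Illustrative simulation: averaging many noisy lay forecasts can rival a
# single, more accurate expert forecast. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 10.0  # the quantity everyone is trying to forecast

expert = true_effect + rng.normal(0, 1.0)                # one low-noise expert
laypeople = true_effect + rng.normal(0, 5.0, size=200)   # many high-noise laypeople

print(f"expert error:           {abs(expert - true_effect):.2f}")
print(f"single layperson error: {abs(laypeople[0] - true_effect):.2f}")
print(f"crowd-average error:    {abs(laypeople.mean() - true_effect):.2f}")
```

With 200 simulated forecasts, the crowd’s average error is typically far smaller than any single layperson’s and comparable to the expert’s, echoing the pooling result described above.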
These findings chime with work by University of Pennsylvania’s Philip Tetlock, whose research over 30 years has suggested that certain amateurs are better at accurately predicting outcomes than narrowly focused experts are, even when those predictions fall within the professionals’ fields of expertise. He likened these lay forecasters to polymath “foxes,” the late philosopher Isaiah Berlin’s description of people who see the world through a range of prisms, as opposed to “hedgehogs,” who tend to interpret events through one main framework. For example, Norwegian playwright Henrik Ibsen, according to Berlin, was a hedgehog—the single organizing principle of his work being the need for individuals to break free from societal mores—whereas Shakespeare, a fox, played with a wide array of ideas, experiences, and philosophies that sometimes even contradicted one another.
Tetlock’s results have been embraced by at least one US policy-making institution: the Office of the Director of National Intelligence, which about 10 years ago began identifying (and then honing the skills of) lay “superforecasters” whose predictions of possible future global events could be weighed alongside those of trained experts.
Pope cautions that some factual questions are still best asked of experts. “Don’t go to your neighbors if you think you have cancer,” he says, “go to a doctor for that.”
There are, however, certain types of advice, says Pope, in which predictions must be made despite considerable uncertainty, and these might be enriched by a wider scope of experience. For example, when students ask him for help finding a thesis or dissertation topic, he urges them to consult other professors, too, to make use of the wisdom of crowds; the averaged predictive powers of experts in his study with DellaVigna were very good, after all. But Pope suggests his students also talk to their fellow students: “As long as you keep scaling up the lay expertise, it can be as effective as or even more so than the experts.”
There are more reasons to trust experts than just their odds of getting it right—time saving, for example. Karina de Kruiff, a Chicago native who moved to Munich in 2013, is not sure she agrees with all the guidelines the government has issued about COVID-19, but she has largely followed them, returning to work as a kindergarten teacher in late April despite some nervousness about the number of children she would be caring for, and heading to the Alps with her women’s hiking group in May despite it feeling “like we were doing something wrong the whole time.” Whereas she sees friends in the US questioning everything, “here, people do their research but they also say, ‘OK, if it’s allowed, I’ll do it.’”
Research by Samantha Kassirer at Northwestern and Chicago Booth’s Emma Levine and Celia Gaertig suggests this latter approach—trusting experts with tough choices—emerges most often in situations that have a high degree of uncertainty. In a recent study, the researchers investigated how participants reacted to different types of medical advice. Some of the advice was “paternalistic”: hypothetical doctors recommended a specific course of action on the basis of their personal opinion. Other advice focused on patient autonomy, with the doctors resisting any suggestion of a path. The participants preferred the paternalistic option, getting a doctor’s subjective recommendation, even when an objectively correct choice did not exist.
Chart: In a study of people’s judgment of medical experts, participants rated doctors who provided two different styles of advice. Source: Kassirer et al., 2020
The questions probing medical advice were hypothetical, but Kassirer, Levine, and Gaertig find similar patterns in a real-world experiment in which subjects could win money, or lose it. They offered participants the chance to take part in one of two raffles, and gave them information about the possible outcomes associated with each raffle to help them decide which best fit their risk appetite. Raffle “experts,” with access to data about past raffles, also weighed in—some offering paternalistic recommendations about what to do, and others providing further information but no explicit recommendation.
The researchers find that participants most conflicted by the choice responded most positively to experts who provided their personal recommendations. Moreover, if the advice turned out to lead to unhappy results (such as losing money), they didn’t blame the experts. “It seems surprising at first, but less so when you think about the counterfactual,” says Levine. “You might feel angry at your doctor for giving you advice that led to a bad outcome. But you might feel angrier if you felt he sat back and let you make a mistake.”
For Levine, the research resonated with her own experience with a complicated pregnancy, when medical norms made trying to get clear recommendations from a doctor “like pulling teeth.” She sees Americans having some of the same experiences in the back-and-forth over how to loosen lockdowns without endangering public health during the COVID-19 crisis. They know that the experts can’t give definite dates for when certain activities will be safe again—and they might not blame them for that. But they nonetheless suffer if no guidelines are offered at all.
If we cast epidemiologists and clinicians as hedgehogs, who make calls mainly on the basis of their evolving understanding of the coronavirus and COVID-19, who are the foxes that people can turn to for longer-term guidance? Whom can people trust to pursue a reopening of the economy while also keeping in mind virology and biology, people’s social and financial needs, and civil rights? The answer quickly turns political: in the US, some conservatives argue that Democrats have ceded too much power to narrowly focused bureaucrats; many on the left see US president Donald Trump as a fox manqué, pretending to synthesize science he does not understand.
These dueling political perspectives have consequences. A team of researchers—Columbia’s Andrey Simonov, Columbia PhD candidate Szymon Sacher, Chicago Booth’s Jean-Pierre Dubé, and University of Washington’s Shirsho Biswas (a recent graduate of Booth’s PhD Program)—looked at Nielsen television viewing numbers to test for and measure a causal effect of Fox News on compliance with state and local stay-at-home orders between March and July. They measured social distancing using location data from millions of cell phones. To identify the causal effects, they exploited the random assignment of channel positions across US cable markets. Viewers in general are more likely to watch a channel that sits low on the dial (such as 1, 2, or 3) than one positioned higher up (101, 102, or 103). Using only the incremental viewership induced by a local channel position in an individual zip code, the researchers find that watching Fox News caused people to social distance less. An additional 30 minutes of Fox News per week caused between 5 and 28 percent of the persuadable audience not to comply with social distancing.
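To see the logic of that identification strategy in miniature, consider the toy simulation below. It is not the researchers’ specification—every number is made up—but it shows why an as-if-random channel position can serve as an instrument: it shifts how much Fox News people watch while, by assumption, affecting distancing only through that viewing, so a simple two-stage least squares recovers the causal effect even when an unobserved taste for the channel confounds the naive comparison.

```python
# Toy sketch of the instrumental-variables logic described above, on simulated
# data. Channel position shifts viewership but (by assumption) affects
# distancing only through viewership. Not the authors' model; all numbers are
# hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

channel_position = rng.integers(1, 100, size=n)    # instrument (as-if random)
taste_for_fox = rng.normal(0, 1, size=n)            # unobserved confounder
viewing = 60 - 0.3 * channel_position + 10 * taste_for_fox + rng.normal(0, 5, size=n)
distancing = 50 - 0.2 * viewing - 3 * taste_for_fox + rng.normal(0, 5, size=n)

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = slope(viewing, distancing)  # biased: viewing is correlated with taste_for_fox

# Two-stage least squares: predict viewing from the instrument, then use the
# fitted values in place of actual viewing.
Z = np.column_stack([np.ones(n), channel_position])
fitted_viewing = Z @ np.linalg.lstsq(Z, viewing, rcond=None)[0]
iv = slope(fitted_viewing, distancing)

print(f"naive OLS estimate: {naive:.3f}   IV estimate: {iv:.3f}   true effect: -0.2")
```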
“Fox News has a very large persuasive effect,” says Dubé. What if all the experts got their advice wrong, and social distancing was shown to be useless? Even in that unlikely event, he says, it’s still striking that people would have ignored expert advice coming from the World Health Organization, the CDC, and medical-school faculty in favor of what Fox News anchors were saying.
“I want to know if this is good or bad for society,” he says. “The harm it creates for society is Fox News causes people to distrust experts.” (For more, read “Fox News Causes Viewers to Disregard Social Distancing.”)
Seth Masket, a political scientist at the University of Denver, meanwhile notes that COVID-19 deaths and coronavirus case numbers tend to be less predictive of when a state opens up than either the governor’s political party or the state’s voting pattern in the 2016 presidential election.
Institutions of higher education aren’t immune to party politics. Data collected this spring by the Chronicle of Higher Education reveal that whereas just 45 percent of colleges and universities in states won by Hillary Clinton in 2016 were planning in-person classes for fall 2020, the portion climbed to 80 percent in states won by Trump.
Researchers at Davidson College in North Carolina, led by Christopher Marsicano, demonstrate a similar trend in colleges’ transition to online teaching in March, as the scale of the pandemic was becoming clear. Colleges in states led by Republican governors were slower to stop in-person classes than those in Democratic-led states, independent of campus infrastructure, class size, or the share of students living on campus. Neither analysis of colleges controlled for coronavirus cases, though the Davidson team points out that decisions to go online usually preceded a confirmed case of COVID-19 at the school. “While the threat of the virus is very real, imminent threats of members of campus communities contracting the virus likely did not end in-person instruction,” they write.
Weber’s work in Europe suggests that leaning right politically has a relationship with skepticism toward experts, albeit a weak one. Other factors such as intellectual interests, risk appetite, and trust in people generally are much stronger predictors.
And while political views may be responsible for some people’s skepticism toward experts, research suggests people are capable of trusting politicians who disregard experts without necessarily disregarding those experts themselves. Booth’s Levine finds in a series of studies on prosocial lying, some conducted with University of Pennsylvania’s Maurice E. Schweitzer, that people can trust someone while at the same time believing that person is a liar. “There are different kinds of trust,” Levine says. “We can trust in someone’s benevolence and goodwill while believing they are lying to us. Many people who support Trump might not believe what he’s saying while still believing he is fighting for them.”
Personal experience, moreover, can move people toward taking expert advice. In a study set in the US, University of Texas’s Olivier Coibion, University of California at Berkeley’s Yuriy Gorodnichenko, and Weber used the staggered introduction of coronavirus lockdowns to examine their impact on consumers’ spending habits, economic expectations, and trust in institutions. They find that, even controlling for how people voted in the 2016 election, being under lockdown increased a person’s esteem for the CDC. “I’d guess people were really scared of the situation, and looking for guidance and leadership,” says Weber. “They found it in the CDC.”
Science, however, evolves—as does expert opinion, which it informs. The CDC updated its guidance about COVID-19 as health professionals gained more information and evidence. But it takes time for such evidence to accumulate, and then for early research findings to be reviewed, tested, and ultimately accepted as bankable knowledge—or rejected and replaced by other hypotheses. Thus, it’s not necessarily antiscience to be skeptical of new information, argues one economics expert—just a recognition of the scientific process.
Booth’s Kevin M. Murphy says it’s important to distinguish between short-term news and long-term, established knowledge. The former is the “flow” of knowledge, which can be provocative but may soon enough be proven incorrect. This includes new ideas that may never have solid support, even if they come from people positioned as experts. The latter is the “stock” of knowledge that is valuable and has taken years to establish.
“Our best answers are likely to come when we can apply proven ideas and principles to new problems. Things often seem ‘unprecedented’ when we look at them in terms of the details but much less so when you think harder,” he says. “To me, the key in applying economics is to see the commonality with problems you have studied before.” What can we learn from past problems, and existing ideas? What does that say about the current situation, including the limitations of new ideas? Murphy sees new theories as a last resort, and at convocations he has counseled new graduates to think “inside the box”—not just out of it, although that may be fashionable.
Of course, when time is of the essence—such as in the midst of a global pandemic, when scientific understanding changes quickly—there may be value in new data and ideas. But that is when it is particularly important to depend on established knowledge, he maintains. A theory created to address the current challenge may be enticing but also shaky when time is most precious.
“Given all of this, I think it is important to be humble up front,” he says. “We need to be clear that we are still learning about the situation and that our ideas may change. To maintain confidence, we should avoid the most speculative predictions and earn people’s confidence through honesty and making clear the limits of our expertise. We know some things but not everything. We often forget that policy decisions must consider more than our expertise. Most importantly, we should view what we provide as input not as answers. We often say that people who disagree with us ‘are not listening,’ but they may be listening and just disagree.” Experts make mistakes, and to say or imply otherwise undermines their credibility.
University of Chicago’s and Booth’s Lars Peter Hansen, a Nobel laureate, explains in a Chicago Booth Review essay how to make sense of new information as it comes in. Data, he writes, though vital to scientific understanding, are also frequently open to interpretation. “While we want to embrace evidence, the evidence seldom speaks for itself; typically, it requires a modeling or conceptual framework for interpretation. Put another way, economists—and everyone else—need two things to draw a conclusion: data, and some way of making sense of the data.” He and others seek to model how to “incorporate, meaningfully acknowledge, and capture the limits to our understanding,” and to understand the implications of these limits. (For more, read “Purely Evidence-Based Policy Doesn’t Exist.”)
It can be extremely valuable, he writes in a different essay, for decision makers to be confronted with existing scientific evidence from multiple disciplines, while respecting the limits of that knowledge, in order to make informed decisions in the moment. In many settings, there is no single, agreed-upon model but rather a collection of alternative models with differing quantitative predictions. Take epidemiological models enriched to confront both health and economic considerations, for example, especially those used to inform policies such as lockdowns. “I worry when policy makers seemingly embrace one such model without a full appreciation of its underlying limitations. It can be harmful when decision makers embrace a specific model that delivers the findings that they prefer to see,” writes Hansen, expressing hope that model uncertainties can stop being pushed into the background of policy making. Hansen sees promise in efforts to integrate economics and epidemiology when building models aimed at guiding responses to the health and economic trade-offs of future pandemics. But there are knowledge limits in both fields that should be recognized when using such “integrated assessment models,” he says. (For more, read “How Quantitative Models Can Help Policy Makers Respond to COVID-19.”)
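Hansen’s point about competing models can be made concrete with a toy example: instead of being handed one model’s forecast, a policy maker can be shown the spread of forecasts across plausible alternatives. The models and numbers below are entirely hypothetical placeholders for whatever family of epidemiological-economic models is actually under consideration.

```python
# Toy illustration: report the range of projections across alternative models
# rather than a single model's point forecast. All models and parameters here
# are hypothetical placeholders.

def model_slow_spread(days: int) -> float:
    return 1_000 * 1.01 ** days        # assumes slow epidemic growth

def model_fast_spread(days: int) -> float:
    return 1_000 * 1.05 ** days        # assumes fast epidemic growth

def model_behavioral(days: int) -> float:
    return 1_000 * 1.03 ** days * 0.8  # growth damped by voluntary distancing

models = {
    "slow spread": model_slow_spread,
    "fast spread": model_fast_spread,
    "behavioral": model_behavioral,
}

horizon = 30  # days ahead
projections = {name: m(horizon) for name, m in models.items()}

for name, value in projections.items():
    print(f"{name:>12}: {value:,.0f} projected cases")
print(f"range across models: {min(projections.values()):,.0f} to {max(projections.values()):,.0f}")
```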
The gap between epidemiologists and economists, as between nonexperts and experts, can be wide—but much is to be gained by bridging it. Experts in any field can make mistakes by failing to recognize the reality in which others live, which affects how their advice will land.
Sapienza and Zingales conclude their paper on the divergent opinions of experts and nonexperts not by wagging fingers at ordinary Americans for not getting on board with expert opinion, but by urging experts to take off their professional blinders. “The context in which these [expert opinion poll] questions are asked induce economists to answer them in a literal sense,” they write. “Hopefully, the same economists, when they do policy advice, would answer the same questions very differently.”
Similarly, policy makers and the media can try hard to ask their questions from new angles, and with context in mind, much like the New York Times did in May in asking epidemiologists not when they think it will be safe to send children to school or step on an airplane—questions that alone might yield lots of “when there is a vaccine” responses. Instead, the newspaper asked epidemiologists when they plan to do these things themselves, forcing them to consider a wider range of factors than those contained in their area of expertise.
Still, as policy makers look not just to virologists, but economists, sociologists, historians, and others for guidance on how to cope with this pandemic and its fallout, responses couched in caution and hedged for public consumption may feel inadequate. Here, Penn’s Tetlock offers some succor. His team’s efforts to identify superforecasters uncovered several factors that boost performance, and one was public scrutiny. Which is to say, today’s skeptical citizens, if truly paying attention, may be improving the very advice they are scrutinizing.