Climate change poses substantial short-term and long-term challenges for society and policy. The design and conduct of prudent government climate policy is difficult for at least two reasons. First, climate change is a global problem, ultimately requiring coordinated responses among varied political and governmental entities. Second, there are pronounced limits to our understanding of the long-term consequences of economic activity on the future climate and the subsequent economic opportunities and welfare outcomes. Both challenges are important, but much of my research in recent years has focused on the latter.
The aim of my research is to build quantitative models that incorporate uncertainty when used to assess alternative policies. What the models imply about which policies to adopt will necessarily depend on the priorities or perspectives of the policy maker, and in particular on how averse she is to uncertainty. While it is not my role as a researcher to prescribe the decision maker’s preferences, the methods I and others are developing make it possible to determine which components of uncertainty should be most concerning to policy makers and how the preferred course of action changes with their distaste for uncertainty. I think of these methods as helping to quantify uncertainty in the service of evaluating policies.
Let’s begin by disentangling two related but distinct terms: risk and uncertainty. There are many references to risk within academic and public-sector discussions of climate-change policy (and more generally in analyses of important problems in economics and finance). Specifically, when discussing public- or private-sector preferences—about everything from where to invest resources to how to manage a pandemic—mentions of “risk aversion” are commonplace. Empirical studies of financial markets often refer to “risk premia” when characterizing differences in the expected returns to financial assets or to actual investments in physical, human, or intangible capital.
Influenced in part by the late Chicago economist Frank Knight, I will use uncertainty as an encompassing term and use risk to represent something narrower. Formal models of decision-making based on risk aversion presume that probabilities are known, but outcomes are not—think of rolling dice or spinning roulette wheels, for example. Thus, when I refer to risk, I mean a particular type of uncertainty that includes a confident knowledge of probabilities.
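To make the distinction concrete, here is a minimal numerical sketch (the payoffs and the candidate probabilities are invented purely for illustration): under risk, a single trusted probability distribution pins down an expected payoff, whereas under Knightian uncertainty several candidate distributions remain plausible and the expected payoff is only known to lie within a range.

import numpy as np

# Possible payoffs from a stylized gamble or policy, in arbitrary units.
outcomes = np.array([-1.0, 0.0, 2.0])

# Risk: the probabilities are known with confidence, as with dice or roulette.
known_probs = np.array([0.2, 0.5, 0.3])
print("expected payoff under risk:", outcomes @ known_probs)

# Knightian uncertainty: several candidate probability models are plausible,
# and we are not confident about which one is right.
candidate_models = [
    np.array([0.2, 0.5, 0.3]),
    np.array([0.4, 0.4, 0.2]),
    np.array([0.1, 0.3, 0.6]),
]
expected_payoffs = [outcomes @ p for p in candidate_models]
print("expected payoff across candidate models ranges from",
      min(expected_payoffs), "to", max(expected_payoffs))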
But especially for longer-term notions of uncertainty, including potential climate change, making confident probability statements is at best challenging and arguably contrived. Opening the hood, so to speak, on where these probabilities come from often leads to some well-warranted skepticism. It is for this and other reasons that I find it valuable to push beyond risk as I’ve defined it above.
If policy makers or econometricians struggle to come up with fully credible probability models, perhaps we should rely on the private sector to fill in the holes. Information based on asset market data has appeal because such markets are forward looking. Market values today depend on investor beliefs about the future. Perhaps those in the private sector who are actively engaged in “risk management” have all of the necessary incentives to do this well.
An empirical strategy based on this perspective, used frequently in macrofinance analyses, presumes that the private sector has, to a good approximation, figured out credible probabilities. This is the so-called rational expectations hypothesis. For the first couple decades of my research career, I adopted this approach in devising and applying econometric methods.
While there are many interesting research contributions looking to financial markets for revealing evidence, this evidence can only take us so far in understanding uncertainty and climate change. The information revealed in these exercises is limited for three reasons.
First, the historical data pertinent for quantifying climate-change uncertainty are limited as we push economies into potentially new territories. Typical risk-based empirical analyses presume rich historical evidence to infer expected returns reliably.
Second, research to date has done little to isolate the impact of different sources of climate-change uncertainty, relying instead on broadly based climate factors, or even more extensive ESG (environmental, social, and governance) factors, to identify climate-based shocks and their induced market-based compensations. However, these shocks could combine policy uncertainty with what discussions of climate change often call “transition risk” (uncertain technological development and uncertain adaptation to changes in economic opportunities) and “physical risk” (uncertain climate outcomes in response to changes in economic activity).
For instance, many climate models imply that the temperature response to a “pulse” of emissions relative to a baseline will build substantially over a decade and then flatten out. There is substantial cross-model disagreement, however, about the level at which the response becomes flat. Moreover, there are additional concerns about “tipping points” at which the temperature response to emissions is nonlinear. These thresholds and their economic consequences can be highly uncertain, and we have limited historical experience to guide us. Bundling all of these sources of uncertainty together into a climate risk factor severely limits the takeaway from an asset pricing approach.
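The cross-model disagreement described above can be sketched in a few lines of code. The functional form and the saturation levels below are invented for illustration only; they are not taken from any actual climate model, but they mimic the qualitative pattern of a response that builds for roughly a decade and then flattens at a model-specific level.

import numpy as np

# Stylized temperature response to a one-time emissions pulse, in arbitrary
# units: the response builds over roughly a decade and then flattens at a
# level that differs across hypothetical models A, B, and C.
years = np.arange(0, 51)

def pulse_response(saturation_level, buildup_years=10.0):
    return saturation_level * (1.0 - np.exp(-years / buildup_years))

hypothetical_models = {"model A": 0.8, "model B": 1.2, "model C": 1.6}

for name, level in hypothetical_models.items():
    response = pulse_response(level)
    print(f"{name}: response after 10 years = {response[10]:.2f}, "
          f"after 50 years = {response[50]:.2f}")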
A third impediment to using model-based rational expectations to estimate probabilities is that such expectations are necessarily subjective in the case of climate economics. Moreover, this is an arena in which it is particularly problematic to just “let the data speak.” Like all data, climate data need to be interpreted through a framework or model to yield useful insights, and even where we may have good and plentiful data, we often lack a confident understanding of how to interpret them. The models we rely upon to digest these data and to assess policy implications are necessarily “misspecified” or wrong. We use models as coherent simplifications to help us understand a complex reality, but not to fully describe that reality. Our models are, in this sense, wrong, but nevertheless they can still provide policy makers valuable insights. Although we know they are wrong, we typically do not know their exact flaws, giving rise to an additional form of uncertainty.
Despite their limitations, climate economic models have become an essential complement to empirical evidence. So how can we use them to support prudent economic policy, given the uncertainty inherent in their predictions?
One way to help manage that uncertainty is to measure it. To engage in what many scientific disciplines call “uncertainty quantification,” my coauthors and I often find it valuable to embark on formal sensitivity analyses—exercises meant to establish how much each facet of uncertainty affects a policy decision. This includes the sensitivity to how we weight the predictions across alternative models and the sensitivity to potential model misspecification. Sensitivity analyses of this type allow us to embrace the task of uncertainty quantification. While there are potentially many dimensions of uncertainty to consider, our approach reduces this high-dimensional problem to one that can be characterized with just a few dimensions by isolating the components of uncertainty that should be most problematic to a policy maker.
Decision theory is a way to frame uncertainty quantification in practice. For instance, consider the varying predictions across alternative climate models. Assigning probabilities over the climate outcomes they imply requires that we start with subjective probabilities across the models. For instance, should we give all the models equal weight, or do we consider some to be more credible than others? What are the policy implications of changing these relative weights in a particular way? How do we use any such model sensibly when we acknowledge its potential misspecification?
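As a toy illustration of this kind of sensitivity analysis, suppose three alternative models imply three different damage assessments (the numbers below are hypothetical). Shifting the subjective weights across the models shifts the probability-weighted assessment, and reporting that sensitivity is itself informative.

import numpy as np

# Hypothetical damage assessments implied by three alternative models,
# in arbitrary units (for instance, dollars per ton of carbon emitted).
model_damages = np.array([40.0, 80.0, 200.0])

# Sensitivity analysis: how does the probability-weighted assessment move as
# the subjective weights placed on the models change?
weight_choices = {
    "equal weights": np.array([1/3, 1/3, 1/3]),
    "tilted toward benign models": np.array([0.5, 0.4, 0.1]),
    "tilted toward the adverse model": np.array([0.1, 0.4, 0.5]),
}

for label, weights in weight_choices.items():
    print(f"{label}: weighted damage assessment = {model_damages @ weights:.1f}")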
The sensitivity analyses that we perform help us to engage in more forthright assessments of policies. The policy question could be at what level to set a carbon tax to combat the climate-change externality missed by competitive markets. Alternatively, it could be about how much to subsidize research and development of new technologies that reduce the degradation of the climate system. By specifying how averse the decision maker is to model ambiguity or to potential model misspecification, the outcome of the sensitivity analysis becomes all the sharper.
This aversion can have a big impact on the quantitative assessments of alternative policies. Model predictions can be in the form of best guesses about possible outcomes as well as in the form of worst-case scenarios. Both of these forms are of interest, and the policy makers’ (or their constituents’) aversion to uncertainty dictates how much attention we should pay to each. Given the dynamic nature of climate change, uncertainty aversion captures the trade-off between acting now and waiting until we learn more. This trade-off, while seldom analyzed formally in the climate economics literature, is central in the design of prudent policy making. Waiting until we figure things out can wind up being costly for our global society if the bad climate and economic outcomes come to pass.
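One simple way to sketch how such aversion operates, in the spirit of robust or ambiguity-averse preferences, is to tilt the subjective model weights toward the more adverse models, with the degree of tilting governed by an aversion parameter. The tilting rule and the numbers below are illustrative assumptions, not the formulation used in any specific study; they show only how an assessment moves from a best guess toward a worst case as aversion to uncertainty increases.

import numpy as np

# The same hypothetical damage assessments and baseline weights as in the
# earlier sketch.
model_damages = np.array([40.0, 80.0, 200.0])
baseline_weights = np.array([1/3, 1/3, 1/3])

def ambiguity_adjusted_damages(aversion):
    # Exponentially tilt the baseline weights toward the more adverse models.
    # aversion = 0 recovers the best-guess weighted average; larger values
    # push the evaluation toward the worst-case model.
    tilted = baseline_weights * np.exp(aversion * model_damages / model_damages.max())
    tilted /= tilted.sum()
    return model_damages @ tilted

for aversion in (0.0, 1.0, 5.0, 20.0):
    print(f"aversion {aversion:>4.1f}: adjusted damage assessment = "
          f"{ambiguity_adjusted_damages(aversion):.1f}")

In this stylized calculation, greater aversion to uncertainty raises the assessed damages, which parallels the finding described below that aversion to broadly based notions of uncertainty can increase the social cost of carbon.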
Research I conducted with Arizona State’s Michael Barnett and University of Wisconsin’s William Brock characterizes both of these trade-offs—between best guesses and worst-case scenarios and between acting now and waiting to learn more—first by showing how much aversion to broadly based notions of uncertainty can increase the social cost of carbon, and second by exhibiting that prudent climate policy includes both cautious responses in the short run and later adjustments based on new information. In regard to both of these findings, the possibility of the adverse consequences of climate change can be sufficient to make it prudent to act boldly now, even if our knowledge is incomplete.
Lars Peter Hansen is the David Rockefeller Distinguished Service Professor in the University of Chicago Departments of Economics and Statistics and at Chicago Booth.