The push for transparency and regulation has been pervasive in many fields, perhaps nowhere more so than in accounting and finance. Just as securities regulation grew out of the 1929 market crash, the Dodd-Frank Wall Street Reform and Consumer Protection Act, passed by the US Congress, grew out of the 2007–10 financial crisis. The premise of this act, and of many prior acts, is that transparency, clear rules, and better oversight will provide the sunlight and structure needed for markets to work efficiently.
Regulation, and even transparency, is not a panacea, of course. Economists have warned people for decades about the risks and unintended consequences of regulation. So how do we handle this? Evidence-based policy making is often proposed as a solution. It is a rigorous attempt to base policy decisions and new regulation on empirical evidence, including impact studies, cost-benefit analyses, and academic research in general.
This is a valid impulse, the appeal of which is obvious. Who wouldn’t want science and empirical evidence to guide policy decisions? Policy making that is more rooted in sound theory and ample data, and less influenced by political pressures and lobbying, should in theory lead to better rules and regulations. If rules were smarter, they would help prevent major accidents while encouraging, rather than obstructing, forward movement—becoming the regulatory equivalent of guardrails on highways and racetracks.
But if we are serious about policy making that is supported by facts and data, we have to create the research foundation to support it. It is one thing to say evidence-based policies would be a good idea, but bringing this type of rulemaking to accounting and financial regulation (my area of expertise) would require substantial investments in an infrastructure for conducting, aggregating, and sharing research in these areas. Evidence-informed policy making is doable, but it would require time, effort, and money. Looking at medicine—a field that started going down this path many decades ago—illustrates this point.
Policy makers and regulators are under pressure to embrace a more evidence-informed approach. I saw this up close, through my work with the Public Company Accounting Oversight Board, which oversees auditors, and the Financial Accounting Standards Board, which the US Securities and Exchange Commission designated to set accounting standards for public companies. The FASB, among other standards setters, has recently started conducting postimplementation reviews of its standards. The PCAOB and financial-market regulators are conducting similar reviews. In the United States, several congressional initiatives are under way that would require formal economic analysis. In the United Kingdom, financial agencies are required to conduct cost-benefit analyses for proposed rules.
But evidence-informed policy making is easier said, or demanded, than done. At present, research in accounting and finance is still far from amassing the data and evidence needed for evidence-based policy making.
In this, we can learn from research in medicine. Evidence-based medicine—which, as its name implies, is systematic reliance on the findings of modern, well-conducted research in medical decision-making—is one of the most important breakthroughs in medical care. According to the BMJ (formerly the British Medical Journal), it’s up there with the discovery of antibiotics. Yet it took a lot of effort, and decades, to make that happen.
It involved researchers holding courses and conferences, writing journal articles on the idea, creating extensive guidelines for research and systematic reviews, and developing databases. It also required a great deal of financial support. The Cochrane Collaboration is an independent organization formed to organize research findings so as to facilitate evidence-based choices about medical interventions faced by doctors and policy makers. It is a global independent network of thousands of researchers from 130 countries who “work together to produce credible, accessible health information that is free from commercial sponsorship and other conflicts of interest.”
This effort provides an example or standard to which we in other fields can only aspire.
At a higher level, finance and accounting have some similarity to medicine. While medical researchers study the effects of drugs, we study the effects of regulations. Does a new regulation help or hurt? Were specific provisions in Dodd-Frank worthwhile? These are, in a way, questions similar to those asked in medicine.
That said, there are many differences. For one, researchers in accounting and finance face several major challenges in data collection and research design that medical researchers often do not. First, the rise of evidence-based medicine has been closely connected to randomized controlled trials, which are frequently used to test the efficacy of a drug or medical intervention. In an RCT, participants are randomly assigned either to a group that receives the treatment being studied or to one that receives a placebo, and the outcomes of the groups are then compared. For example, a study might give 100 mg of a drug to one group of patients and measure the dose’s effect on mortality. If people in the treatment group do better overall, this suggests a causal effect of the drug, which can then be used in formulating practice guidelines or perhaps even policy.
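To make the logic concrete, here is a minimal simulation sketch in Python; the sample size, baseline mortality rate, and treatment effect are hypothetical illustrations, not figures from any actual trial.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n = 1_000                              # hypothetical number of participants
treated = rng.integers(0, 2, size=n)   # random assignment: 1 = drug, 0 = placebo
true_effect = -0.05                    # assumed reduction in mortality risk

# Simulated outcomes: a baseline mortality risk of 20%, lowered for the treated group.
mortality = rng.random(n) < (0.20 + true_effect * treated)

# Because assignment was random, the simple difference in group means
# estimates the causal effect of the treatment.
effect_estimate = mortality[treated == 1].mean() - mortality[treated == 0].mean()
print(f"Estimated treatment effect on mortality: {effect_estimate:.3f}")
```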
RCTs are the gold standard in medical research, but they’re far less common in finance and accounting, where it’s tough to run such studies in many settings. We typically cannot randomly assign rules (or treatments) for the most important regulatory issues we face. As a result, we have to rely on observational data: outcomes that occur without randomization. Based on such data, it’s tough to know whether Dodd-Frank rules, say, help or hurt.
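For contrast, here is a sketch of the kind of comparison researchers typically fall back on with observational data, a simple difference-in-differences around a rule change. Every number in it is made up for illustration, and the closing comment flags the assumption such an estimate rests on.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_firms = 500
# 1 = subject to the new rule; in reality, this status is not randomly assigned.
affected = rng.integers(0, 2, size=n_firms)

# Hypothetical outcome (say, a liquidity measure) before and after the rule takes effect.
pre = 1.0 + 0.3 * affected + rng.normal(0.0, 0.2, n_firms)
post = pre + 0.10 + 0.05 * affected + rng.normal(0.0, 0.2, n_firms)  # assumed rule effect of 0.05

# Difference-in-differences: the change for affected firms minus the change for other firms.
did = (post - pre)[affected == 1].mean() - (post - pre)[affected == 0].mean()
print(f"Diff-in-diff estimate of the rule's effect: {did:.3f}")

# The estimate is causal only if affected and unaffected firms would have trended
# in parallel absent the rule -- an assumption, not a guarantee of the design.
```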
Moreover, market regulators create reforms, but they don’t arrive with the equivalent of a 100 mg prescription. We generally don’t know how much (regulatory) treatment is implied by a new law. Thus, we face issues with measuring both the treatment and the outcomes.
Take a new corporate disclosure policy, for example: we can’t say whether it increases information quality by X percent, in part because we don’t have a standardized measure for the amount and quality of financial information. Moreover, we typically can’t attribute changes in information or in capital-market outcomes solely, and thus causally, to a regulatory reform. Ideally we would like to be able to say, “If you increase public information in capital markets by 1 percent, market liquidity will change by X percent, and the cost of capital will drop by Y percent.” To do this, we’d need much better ways to measure information and much more granular data to measure outcomes. To achieve both, we’d need help from standards setters, regulators, and policy makers.
We would also need a lot more research, especially research using randomization. There are often hundreds of studies on a single medical treatment question. Once we have many studies, including ones that support causal inferences, we can use another favorite tool of medicine: meta-analysis, in which researchers pool similar studies and derive a statistical conclusion from them. There’s a lot of value in examining the same question in different settings and with diverse research methods. Although meta-analyses are quite common in medicine, we have hardly any in finance and accounting, because we have fewer studies per question and few that involve randomization. Having meta-analyses would improve the reliability of the evidence we could share with regulators, and it would help researchers communicate robust findings.
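For readers unfamiliar with the mechanics, here is a minimal fixed-effect meta-analysis sketch using inverse-variance weights; the study estimates and standard errors are hypothetical placeholders, not results from actual studies.

```python
import numpy as np

effects = np.array([0.12, 0.08, 0.15, 0.05])  # per-study effect estimates (hypothetical)
ses = np.array([0.05, 0.04, 0.07, 0.03])      # per-study standard errors (hypothetical)

weights = 1.0 / ses**2                         # more precise studies get more weight
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```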
Thus we have our work cut out for us: we need to do a better job of measuring the changes a policy induces and of identifying causal effects, and we need much more research on each policy issue. These challenges are daunting, but if the goal is more systematic use of evidence in policy making, we have to overcome them.
To me, the glass is half full. Yes, there are many challenges and we have to tread carefully, but there are also costs to poorly designed or poorly implemented regulation. If we want to head down this path, it will require time, effort, and investment. Here are four steps to get us started.
Obtain and create more and better data.
We’ve seen an explosion of data in the social sciences, but data availability is still a big challenge. In accounting and finance, much of the relevant data are often proprietary or not observable to researchers. As a result, researchers rely on relatively crude or highly aggregated proxies.
To explain: most of the data sources we work with—say, consolidated financial statements—are highly aggregated. A single corporate document encapsulates the performance of hundreds of subsidiaries and aggregates thousands of transactions. As a result, when we talk about the effect of an accounting standards change, we’re drawing our conclusions from what are essentially summaries of business activity rather than detailed evidence of the effects of the new standard on the actions of individual subsidiaries and decision makers.
Consider the case of a change in the accounting standard for when a company takes an impairment—that is, when it adjusts the value of an asset on its balance sheet to reflect a lower market price. To see how the new standard changes impairments, we would have to, at a minimum, calculate what the impairment would have been under the old standard and see how much it changed under the new standard. Yet, when we see an impairment charge, it’s typically aggregated and reflects many assets and decisions, and is potentially conflated with a lot of other things, including current business conditions. This makes it difficult to see solely the effects of the new rule.
To know whether the new accounting standard is working the way it’s supposed to, we’d also want to see what the company considered when deciding whether or not to take an impairment, including in situations that did not lead to an impairment. But to study the issue at such a granular level, we would need companies to track this information and thus create the data. We’d need data-keeping requirements built into regulations, as otherwise companies or their auditors likely would not collect the necessary data.
Toward this end, we need help from policy makers. Moreover, such data are obviously highly proprietary, so we need to find ways to access and share them confidentially. The PCAOB, for example, has created a Center for Economic Analysis, which maintains proprietary data such as which auditor has been assigned to a particular job, how many hours an auditor bills, how an auditor rated client risk, and so on. This center, for which I worked as an economic advisor, has made these data available, with certain safeguards, to academic fellows who apply with research projects. There are similar arrangements at the US Census Bureau. This shows that there are ways to share data and make them available for analysis without violating confidentiality, and that we can create a broad pool of the kind of confidential data we need to promote research for informed policy making.
Increase reliability and replication.
If research is to inform policy, the reliability of our findings is obviously critical. Unfortunately, there’s increasing evidence that replication rates in the social sciences are quite low. For instance, in psychology, a large group of scholars pooled their efforts in the Reproducibility Project and set out to replicate some of the most important studies in the field. The findings are the subject of some dispute, but fewer than half of the studies they set out to validate could be reproduced. A similar effort in experimental economics also indicates reproducibility rates well below what the reported statistical-significance levels in the underlying studies would imply.
One might argue that the heavy reliance of accounting and finance research on easily accessible databases should increase reproducibility, and it might. But this research is also often forced to lean on naturally occurring or quasi-experiments (ones in which the experimental conditions are determined by forces other than the researchers), which arguably gives researchers more discretion. Thus, the jury is still out on the reliability and reproducibility of research in accounting and finance. Moreover, there is increasing evidence in many fields, including economics and accounting, of published studies showing patterns consistent with selective reporting of statistically significant results.
What to do? We need to find ways to boost the reliability of our findings. More replications are surely part of the solution. And we would need journals or platforms to publish these results (including “null” results that don’t appear to find a hypothesized effect, which are often difficult to publish). Moreover, we need more research that tests similar questions in slightly different settings, as this would help researchers gauge the robustness of their findings.
The Critical Finance Review has created a Replication Network—a welcome initiative. Preregistering studies—publicly committing to a plan in advance, before gathering data—could help mitigate issues related to researcher discretion. Ultimately, we need to explicitly discuss the reliability of our research findings and find ways to counter the shortcomings in the research and publication process.
Improve transmission of research findings.
If uncovering and validating relevant evidence is one-half of evidence-informed policy making, the other half is conveying that evidence meaningfully and in a neutral fashion to policy makers. There are various ways that research can be influenced or misrepresented on its way to policy makers.
And even if research findings arrive intact, policy makers might misunderstand or even misconstrue them. For one thing, they might have an incentive to cherry-pick evidence to legitimize their chosen policies. As Princeton’s Alan S. Blinder put it, “Economists have the least influence on policy where they know [and agree] the most; they have the most influence . . . where they know the least and disagree most vehemently.” Setting political influence aside, there is the question of training—do policy makers have the time and skills necessary to interpret social-science-research findings and especially their limitations? If not, do they have the support needed to make sense of the evidence? For these reasons, we cannot leave the transmission process of research findings to its own devices.
It could help to aggregate findings by policy issue or question, creating clearinghouses for research studies and systematic reviews along the lines of the Cochrane Collaboration in medicine. Cochrane’s systematic reviews gather all available primary research and summarize the best evidence. They evaluate the strength of the evidence, look for biases, conduct meta-analyses, and provide conclusions and implications for practitioners.
We could create something similar for finance and accounting research. Such clearinghouses could conduct systematic literature reviews on certain policy issues, using explicit criteria for evaluating the strength of evidence as well as structured summaries that would help policy makers.
We’ve seen clearinghouses emerge in other areas of economic study. The US Department of Labor’s Clearinghouse for Labor Evaluation and Research is an example of this: it not only summarizes the findings of research in 14 areas of labor economics, but also assesses the quality of the research design and methodology. In my view, the idea of clearinghouses is appealing and a step in the right direction, especially if they are operated independently and their reviews follow scientific guidelines. And related to my earlier suggestion, we could even stipulate that regulators and companies provide certain data around regulatory changes to these clearinghouses, so that the data could be analyzed, which would further promote research on regulatory questions.
Encourage cross-fertilization.
Accounting, finance, and many other fields relevant to financial policy are often organized by research method, without much cooperation between these methods. There’s even less cooperation and cross-fertilization across fields—finance, accounting, economics, and sociology often operate in silos. However, policy questions do not live in these silos; they cut across them.
Part of the infrastructure for evidence-based policy should include research circles organized around topics and questions rather than fields and methods. We could convene conferences on regulatory issues and encourage broad participation by people from different areas. We could create journals that focus not on discrete academic disciplines but on all the research relevant to particular policy questions. The UChicago Crime Lab takes a cross-disciplinary approach to understanding and preventing crime; why not have a financial-regulation lab?
Plenty of people think the regulatory burden is already high enough, perhaps too high. To them, some of these suggestions, in particular the call for more data-keeping requirements for companies, might sound like yet another onerous obligation. But if you think that policies are costly to business and that policy makers sometimes overshoot, that is all the more reason to study regulation and its effects. The fact that many regulations are perceived to have unintended consequences and come with high costs is exactly why businesses and societies need to invest in smarter rulemaking informed by evidence and research. To fix something, start at the foundation.
With concerted effort and investment, we could make significant progress toward more systematic use of evidence in policy making. Without it, we will only pay lip service to the idea.
Christian Leuz is Joseph Sondheimer Professor of International Economics, Finance, and Accounting at Chicago Booth. This essay is adapted from the 2017 PD Leake lecture, sponsored by the Institute of Chartered Accountants in England and Wales, and a recent article written by Leuz.