As I have been reading about and discussing large language models, I find I’ve learned as much about us humans as I have about the artificial intelligence that replicates some of what we do. Introspecting, am I really that much more than an LLM?
I recognize that I have about a thousand stories. Most of my conversations and writing, especially for my blog posts, op-eds, interviews, and discussions, are built on prompts that lead to those prepackaged stories. A given prompt could easily lead to a dozen different stories, so for a while I give the illusion of freshness to someone (not my wife and kids!) who hasn't been around me that long. House prices are high in Palo Alto; should the government subsidize people to live here? Let me tell you about the vertical supply curve.
Almost all of my stories are not original. I do a lot of reading and talking about public policy and economics, so I pick up more stories about those things than most people who have real jobs and pick up stories about something else. Learning and education are largely formal training for the acquisition of more stories to produce in response to prompts. That process is a lot like training a large language model.
This has got me thinking about programming a Grumpy Economist bot. Training an AI on the corpus of my blog, op-eds, teaching, and academic writing would probably give a darn good approximation to how I answer questions, because it’s a darn good approximation to how I work.
I wouldn’t be the first economist to be automated. George Mason University’s David Beckworth, who hosts the Macro Musings podcast, has trained a Macro Musebot on more than 400 episodes of his show. Even Milton Friedman has been conjured algorithmically, courtesy of the Friedman chatbot at the University of Texas’s Salem Center for Policy.
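Neither bot publishes its plumbing, but the basic recipe is simple enough to sketch: index a pile of old writing, pull the few pieces most similar to an incoming question, and paste them into a prompt for whatever chatbot you prefer. Below is a minimal, hypothetical Python version, not a description of how those bots actually work; the blog_posts folder, the file names, and the sample question are all invented for illustration, and a real bot would hand the resulting prompt to a language model.

```python
# A toy "economist bot": retrieve the old posts most similar to a question
# and assemble them into a prompt for a language model.
# The folder name, question, and word budget are hypothetical.
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assume a folder of plain-text blog posts, one file per post.
posts = {p.name: p.read_text() for p in Path("blog_posts").glob("*.txt")}
names, texts = list(posts), list(posts.values())

# Score every old post against the question with plain TF-IDF similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(texts)

def build_prompt(question: str, k: int = 3) -> str:
    """Return a prompt containing the k posts most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    excerpts = "\n\n".join(f"[{names[i]}]\n{texts[i][:2000]}" for i in top)
    return (
        "Answer in the style of these blog posts, reusing their stories:\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )

print(build_prompt("Should the government subsidize housing in Palo Alto?"))
```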
Now, not everything I do is complete recycling, predictable from my large body of ramblings or from what I’ve been “trained on.” Every now and then, someone asks me a question I don’t have a canned answer to. I have to think. I create a new story.
A great economist asked me for my intuition about how interest rates could raise inflation. It took a week to mull it over. I now have a good story, which helped me in writing a recent paper. Walking back with me to my office at the Hoover Institution after a seminar, Stanford’s Robert Hall asked me how government bonds could have such low returns if they are a claim to surpluses, since surpluses, like dividends, are procyclical. The notion of an “s-shaped surplus process” and a whole chapter of my recent book, The Fiscal Theory of the Price Level, emerged after a few weeks of rumination. It’s now a new story that I tell often. Perhaps too often for some of my colleagues.
This creativity seems like the human ability that AI will have a hard time replicating, though perhaps I’m deluding myself on just how original my new stories are. When I get that AI programmed up, I’ll ask it the next puzzle that comes along.
This line of thinking leads me to recognize a part of my work that will certainly be greatly influenced by LLMs: the writing of blog posts and op-eds, the giving of interviews, and so forth. If 90 percent of what I do in that respect can be replicated, what does that mean for people in the commentary business?
Your natural instinct might be, “That business is toast and will be totally displaced by automation.” Not so fast. Here is an old story, applied to this case. Look at supply and demand in the chart below:
By lowering the cost of writing a blog post or op-ed, large language models could push the supply curve down and to the right, expanding the quantity of commentary produced and consumed.
The upper supply curve (rising to the right in light blue) shows today's supply of commentary, along with where it intersects today's demand for commentary. LLMs push the supply curve down and to the right, as shown by the dark blue arrow and the new supply curve. I could certainly write more blog posts faster if I at least started with the bot and then edited. A colleague who is further ahead in this process reports that he routinely asks Claude.ai to summarize each academic paper as a 600-word op-ed, and lately he has found that he doesn't need to do any editing at all.
The curve shifts both down and to the right, which can be read two ways: we can produce more for the same total cost in time, or we can write the same amount faster.
Does that mean that the commentary business will end because the price will crash? Just asking the question in the context of supply and demand curves already tells you the answer is no. At a lower price, there is more demand, so the quantity expands. This could be the golden age of commentary. Indeed, quantity could expand so much that total revenue (price times quantity, or, in the chart, the size of a box with the origin in one corner and the supply-demand intersection in the other) could actually increase!
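Whether total revenue actually rises depends on how flat, or elastic, that demand curve is. A toy calculation with made-up linear supply and demand curves (none of these numbers describe the real commentary market) shows both possibilities: the same cut in the cost of producing commentary raises total revenue when demand is flat and lowers it when demand is steep.

```python
# Toy supply-and-demand arithmetic: does a cheaper way to produce commentary
# raise or lower total revenue? All parameter values are made up.

def equilibrium(A, B, mc0, s):
    """Linear demand P = A - B*Q and supply P = mc0 + s*Q.
    Returns equilibrium price, quantity, and revenue (the 'box')."""
    Q = (A - mc0) / (B + s)
    P = A - B * Q
    return P, Q, P * Q

for label, A, B in [("flat (elastic) demand", 20, 0.2),
                    ("steep (inelastic) demand", 40, 1.2)]:
    before = equilibrium(A, B, mc0=10, s=0.3)   # old cost of writing
    after = equilibrium(A, B, mc0=5, s=0.3)     # AI cuts the marginal cost by 5
    print(f"{label}: revenue {before[2]:.0f} -> {after[2]:.0f}")

# flat (elastic) demand:    revenue 320 -> 420  (quantity expands a lot)
# steep (inelastic) demand: revenue 320 -> 280  (price falls, quantity moves little)
```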
This has happened many times before. Movable type lowered the price of books. Did bookselling crash, and the monks starve? No. Demand at a lower price was so strong that bookselling took off, and more people made more money doing it. Though, as always, it was different people. The monks went on to other pursuits. Radio, TV, movies, and the internet each had the same effect on the communication industry. Technology that apparently substitutes for humans lowers costs, supply expands, and the market expands.
It’s not so obvious, though, that the demand for commentary is that flat. My inbox is already overwhelmed with papers colleagues have sent me to read and interesting-looking blog posts, and there are about 50 tabs open on my browser with more fascinating articles that I have not read. Related, the “price” in my graph, at least for this column, is the price of my time to produce it and the price of your time to read it. AI lowers the price for me, but not for you.
Now, what you need is an assistant who knows you and can read through all the mass of stuff that comes in and select and summarize the good stuff. That, too, is a task AI seems like it might be pretty good at. There’s a joke (here comes another story I picked up somewhere) in which Joe says, “Look how great the AI is. I can input four bullet points and a whole PowerPoint presentation comes out!” Jane, getting the PowerPoint presentation, says, “Look how great the AI is. It boiled down this whole long PowerPoint into four bullet points!”
Of course, it has to somehow know which stories are going to resonate with you. Current algorithms are said to be pretty good, often too good, at feeding you what you like, but I want new things that expand my set of stories, and best of all, the rare things that successfully challenge and change my beliefs.
Indeed, perhaps AI will be more useful as digestion for information overflow than for producing even more to consume. I long wondered, what's the point of a lecture when you can just read the book? What's the point of a seminar when you can just read the paper? I think the answer is digestion. An hour-long lecture forces the professor to say what she can in that allotment. That's a short time, at best amounting to 10,000 words. Professors, at least in economics, notoriously assign endless reading lists that nobody could get through in a decade. In a lecture, they can't break the short time limit; they can either lose everyone or keep it digestible. Similarly, a good seminar with an engaged audience forces digestion.
In sum, perhaps AI will also help on the demand side, shifting demand to the right as well.
Commentary is also a question of quality and not just quantity. Most commentary is pretty awful. Humans are not that good at reading critically, sticking to the point, maintaining logical continuity, avoiding pointless arguments, remembering basic facts, actually answering questions, and so on. At least the humans on my X stream aren’t. AI editing might dramatically improve the quality of commentary. Just getting it from a C- to a B+ would be a great improvement.
As happens with all technology, AI will need considerable oversight and hand-holding. For the foreseeable future, there will be a need for humans to edit the output of the AI, figure out what prompts to give it to produce writing that will most interest readers, recommend and certify AI-produced material, and so on. The introduction of ATMs increased bank employment by making it easier to open bank branches and offer (overpriced) financial services. (You’ve probably heard that story. I’ve told it quite a few times.) Humans move to the high-value areas.
When I write a column like this one, I have to think things through, and often either the underlying story gets clearer or I realize it’s wrong. If the AI writes all by itself, neither of us is going to get any better. But perhaps the editing part will be just as useful as my slow writing.
A good deal of what I learn from my work comes from conversations that my writing sparks with readers—online, by email, and in person—in which I often find my ideas were wrong or need revising. Once the comments are taken over by bots, I’m not sure that will continue to work. At least until I get a comment-reading bot going.
John H. Cochrane is a senior fellow of the Hoover Institution at Stanford University and was previously a professor of finance at Chicago Booth. This essay is adapted from a post on his blog, The Grumpy Economist.