The firing and subsequent rehiring of OpenAI CEO Sam Altman raises fundamental questions about whose interests are relevant to the development of artificial intelligence and how these interests should be weighed if they hinder innovation. How should we govern innovation, or should we just not govern it at all? Did capitalism “win” in the OpenAI saga?
In this episode of the Capitalisn’t podcast, hosts Bethany McLean and Luigi Zingales sit down with Chicago Booth’s Sendhil Mullainathan to discuss whether AI is really “intelligent” and whether a profit motive is always bad. In the process, they shed light on what it means to regulate in the collective interest and whether we can escape the demands of capitalism when capital is the very thing that’s required for progress.
Sendhil Mullainathan: Even if OpenAI behaved perfectly, that’s not going to stop anybody else from developing. I think that the distortion OpenAI has had in this conversation is it’s made everyone think this is a monopolistic or oligopolistic market. It is not at all. It’s a free-for-all.
Bethany: I’m Bethany McLean.
Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?
Luigi: And I’m Luigi Zingales.
Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.
Bethany: And this is Capitalisn’t, a podcast about what is working in capitalism.
Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?
Luigi: And, most importantly, what isn’t.
Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.
Bethany: Obviously, what happened at OpenAI is part of the inspiration for this episode.
Speaker 8: The tech world has been thrown into chaos over the weekend when the company that gave us ChatGPT fired its CEO. Sam Altman, who has drawn comparisons to tech giants like Steve Jobs, was dismissed by the OpenAI board Friday. The move came as a complete surprise to everyone. Employees are threatening to quit or follow their former bosses to Microsoft. The tech company scooped up OpenAI’s cofounders, Sam Altman and Greg Brockman.
Speaker 9: Let’s talk about the big news that broke overnight, because a lot of folks are focused on it already this morning. OpenAI posting on X that Sam Altman will now officially return as CEO. Former Salesforce co-CEO Bret Taylor and former Treasury Secretary Larry Summers will join OpenAI’s board.
Bethany: But we’re mainly interested in that because it raises much, much bigger questions. Namely, what are the best mechanisms to provide governance of AI? More broadly, how should we govern innovation, or should we just not govern it at all?
Luigi: This is intimately connected to the idea of capitalism itself. After all, the decentralization of innovation decisions in the hands of people who have money or can raise money is the essence of capitalism.
Marx was actually a techno-optimist who loved the innovation occurring in his time, but he wanted to change the control of this innovation process. He wanted to put—at least in its ideal form—the control of innovation in the hands of workers. This is what the soviets actually were about. Ironically, you can say that OpenAI is the ultimate soviet firm. There is a group of workers who control the firm and its decisions.
Bethany: That is such an interesting way to think about it. In some ways, it’s the flip side of the argument that the New York Times and others put forward in the wake of the coup and reverse coup at OpenAI, which is that capitalism won.
But, Luigi, actually, come to think about it, I’m not so sure I agree with you. Because in the end, this was all about Sam Altman, and he’s not a worker, at least not in Marx’s view of workers, I don’t think. Far from it. And I don’t think OpenAI’s workers or any other Silicon Valley workers are what Marx was envisioning when he thought about workers as part of the public.
I guess what I’m getting at is, in a world where Silicon Valley “workers” can make enough money to pretty much escape many of society’s ills, it’s difficult for me to be happy about the idea that those who stand to profit the most from AI are those who control the decisions, whether they be CEOs or workers.
And as the press pointed out, OpenAI’s employees rallied around Altman, yes, putting them all together as workers, but they had a reason to do so. There was a major cash payout on the horizon, and I don’t know if Marx was thinking about major cash payments. What do you think? Do you still think Marx’s framing is right, Luigi?
Luigi: First of all, what is Sam Altman? He’s not a capitalist. He invested no capital. His power comes from his intellectual contribution and his work. He is a proletarian in the Marxist sense, and he basically led a revolt against the previous board, surrounded by all the other workers. He surrounded himself with the soviet of workers, soviet in the sense of the council of workers, who decided that the people in power were not the ones who deserved it and appointed their own masters. In a sense, Larry Summers is really the person that has been put there by Sam Altman to represent the workers.
Bethany: I just don’t think that the workers at OpenAI or the workers in Silicon Valley or at any large, successful technology firms are workers in the sense that Marx thought about workers as part of the public. But OK, let’s move on. I see your point, and I still like it, even if I think I’m quibbling about what the definition of a worker is.
But another question raised for me by the OpenAI story is that in its original charter, OpenAI said, “Our primary fiduciary duty is to humanity.” But how can that be OpenAI’s primary duty when the capital costs of AI are so very, very high?
In this reconfigured OpenAI, post-coup or reverse coup, the board won’t have veto power, or at least it won’t have the ability to shut down the company in an instant the way the old board did. Their preferences will be balanced alongside others, like those of the company’s executives and investors. Microsoft now has a nonvoting observer board seat.
Luigi, can you escape the demands of capitalism when capital is the very thing that’s required? Is any governance that doesn’t contribute to the bottom line doomed to failure in the face of huge capital requirements? And what do you think that says more broadly about how we govern innovation?
Luigi: In certain sectors, we do observe different forms of governance that are very successful. When you see cooperatives, for example, there are cooperatives where either the consumers or the workers are in control. They raise capital mostly in the form of debt, but they tend to be sectors that are not very capital intensive, precisely for the reason you’re describing.
The challenge is when you need a lot of capital, and you need a lot of capital without having the benefit of a lot of collateral. If you need a lot of capital for something like an airplane, that’s relatively easy to raise because you can lease an airplane, or you can borrow against the asset, and so, you can get the service of the airplane without controlling the airplane. And so, it’s much easier to secure capital in that way.
When you basically have to spend all this money in operating expenses because you’re buying computer power to run models, at the end of the day, maybe the only property you have is the property of the model. I don’t think that this naturally leads to a lot of debt financing. You need to have equity financing, and equity holders want a return.
However, OpenAI was structured in a novel way, in which they were giving a return to investors, but with a cap. I think that they are experimenting.
By the way, some of the people from OpenAI left a couple of years ago, and they founded Anthropic. Anthropic also has an interesting structure that is even more complicated than OpenAI. Anthropic has a trust on top of the company. Now, this trust is receiving some shares that are Class C or Class D, and these shares eventually are going to get the majority of directors but not immediately. So, you have a temporary phase in which the venture capitalists are in control to shape the beginning of the innovation, but then, later on, with the massive use of the innovation, you are going to have a trust with an equal humanitarian fiduciary duty at the top.
Bethany: We wanted to bring on as a guest a colleague of Luigi’s to talk about this. Luigi, I’m going to let you introduce Sendhil, since he is a friend and a colleague of yours.
Luigi: It’s with great pleasure that I introduce my colleague, Sendhil Mullainathan, who is the University Professor of Computation and Behavioral Science at the University of Chicago Booth School of Business.
Bethany: Sendhil, one of the things I’ve been struck by is that we now toss around this term, artificial intelligence, and we all presume that we know what we’re talking about. But do we? Would you define AI, artificial intelligence, differently?
Sendhil Mullainathan: Yeah, it’s a great question. I will admit that I try to have it both ways. I want to, in my mind, sneer at people who call algorithms intelligent. And then, when given a chance, I’ll also use the word “intelligent.” So, I am definitely trying to have my cake and eat . . . I mean, who among us isn’t guilty of trying to have their cake and eat it? Cake is delicious.
The way I think about it is it’s pretty clear that we now have many algorithms, not just the ones like LLMs, like ChatGPT, that do things that are what we would think of, in some vague use of the word, as intelligent.
And we’ve forgotten . . . You would have thought the most basic form of intelligence is being able to do math. In 2000 B.C., scribes who could do a little mathematical calculation were among the most valued forms of intelligence. In some sense, Excel is the embodiment of a form of intelligence. It’s actually quite remarkable what Excel can do. It’s just we’ve been with it so much, we’ve forgotten it.
I think what I would say is that I wouldn’t fixate on the exact meaning of the word “intelligence” except to realize these tools can do a lot of things that we tend to think of as mental work. But the flip side of it, the danger of it, is that because they can do mental work, especially in the last five years, we look at them and think they’re doing mental work the way we are. And so, we immediately make inferences about what else they can do right, how they’re going to get better and better.
We’ll do things like interact with ChatGPT and say, “Oh, my God, if it can do this, imagine everything it can do.” And that’s because we say, “If a human could do this, that human would also be able to do this.” But the thing to remember is these algorithms do amazing things that look like mental work, but they’re not functioning with the same architectures that we are. Generalizing from our intuitions of how we would do it, what else we could do, all those generalizations are quite dangerous.
Luigi: Now, I’m no expert, but what I read is that one definition of artificial general intelligence is a computer advanced enough to outperform any person at most economically valuable work. There are, of course, two important implications of that. One is more relevant, if you will, to labor economics: what people do. I think that a lot of people are focusing on this.
I would like to focus on the other dimension. When you enable somebody to do something super powerful, you enable greatness, but you also enable great evil. Hence this concept of alignment, which, as I understand it, asks: how do we make sure that we build something that is super powerful but not super evil?
Sendhil Mullainathan: Yeah, yeah. I’ll go with your definition, but I just want to—
Luigi: No, feel free to do whatever you want.
Sendhil Mullainathan: No, it’s helpful. I want to point out a weakness in that definition that people don’t usually point out. It’s horribly unambitious. Forget modern algorithms. Just take Excel. Think of calculating the accounts of GE, just one company. Without Excel, we could not have GE and keep its accounts. Do you know how many scribes would be needed just to maintain the accounts of GE? Just for one company, no one human could do in a lifetime what Excel applied to GE spreadsheets does in five minutes. That simple piece of code is doing things that are unimaginable to humans, and that unlocks things that were unimaginable before: now that we can do accounting at scale, we can have corporations at a scale we never imagined.
So, I think that those kind of definitions of AGI are so human-centric just because they’re not appreciating what these algorithms can do—as you put it, Luigi, both for good and bad. They can do stuff we as humans could never have imagined, and then, I think your governance question becomes first order.
The big mistake everybody is making in governance is that they are acting like this is every other form of regulation we’ve encountered. Like, oh, OK, we’re going to regulate—I know it’s not a good example—a certain kind of derivative. But this is a technology whose shape we don’t really understand and whose evolution we’re unsure of. To me, this is an extremely unusual moment, and if you guys know some historical analog, I’d love to hear it. But we’re not regulating the known; we’re regulating both what is unknown today and the evolution of it.
Luigi: Since you challenged me, or us, let me try. Let me try—
Sendhil Mullainathan: Yeah, I’d love it.
Luigi: —since we both sit on the campus of the University of Chicago, pretty close to the place where the first controlled, self-sustaining nuclear chain reaction took place 81 years ago. First of all, it’s interesting, the level of risk that Enrico Fermi took to do that on campus, in the second-largest U.S. city at the time. The historical account I got—I’m not an expert on this—is that neither the city of Chicago nor the U.S. government, which, by the way, was financing the initiative, had any idea what he was doing.
Sendhil Mullainathan: Amazing.
Luigi: And any idea of the risk that he was taking.
Sendhil Mullainathan: Wow.
Luigi: My view is that the major differences are, number one, that this stuff was mainly financed by the government. The government had huge power, was behind it, and financed it. And two, paradoxically, because we were in a period of war, there was a sense of more responsibility to the nation. There was—I’m sorry, I forgot the name—a British scientist who would not share intelligence with the Americans because he wasn’t sure he could, a scientist who felt so much loyalty to Britain that he wouldn’t want to share it with . . . I’m not so sure we want to live in this world. But, anyway, I want to know your reaction to this comparison.
Sendhil Mullainathan: Oh, I love this. I just looked this up. It’s amazing. It says here that the experiment took place in a squash court beneath the university football stadium. I mean, that is amazing. The first controlled nuclear reaction, some guy above is playing squash. That doesn’t really give you a wonderful, calm feeling, does it?
Luigi: No.
Sendhil Mullainathan: I mean, now we’re worried about free speech on campus. Those people were having nuclear explosions. What the hell? This is just, like, crazy s—t. Anyway.
Luigi: But, actually, how do we know that they’re not doing something worse in OpenAI?
Sendhil Mullainathan: So, yeah, I think the nuclear reaction is a perfect example. It’s an unbelievably powerful technology, and I think it’s a good starting point.
Let me articulate two things that I think are very different, which actually make it an even harder problem. One is, enriching uranium or plutonium to be able to get a sustained reaction costs a lot of money. On the other hand, building a large language model is not like that, and it’s getting easier and easier. And while it’s in companies’ interest to tout that they’re light years ahead of everybody else, the open-source models are getting really good. Anyone can put them up. It’s not obvious that in three years we’ll be like, oh, it’s true, the first generation looked like that, but we learned a lot. They paid the fixed cost. Now, anybody in a lab with a modest amount of money can start to build things that are their own, and maybe it’s not even large language models—whatever the next generation becomes.
In that sense, if we use the nuclear-reaction analogy, imagine what nuclear, whatever we call that, the nuclear innovations would look like if it only cost $10,000 to acquire uranium and enrich it. That would be a crazy world. That would be a crazy world. But that’s not that far from where we are. It’s like $100,000, $200,000, a million to train. Two million, five million, ten million. Ten million’s not a lot of money. Fifty million’s not a lot of money. It’s not such a large barrier, and these numbers are going down. I think that’s one big difference, which actually, like you say, really changes the nature of it. We’re not trying to regulate . . . If we think we’re trying to regulate OpenAI, that’s a mistake. Because the problem runs much wider than that.
The second thing, I think, that is noticeably different is that as consequential as a nuclear bomb or nuclear power would be, it’s contained where you can expect to see it. A nuclear bomb is an object you can drop on some location and have tragic consequences.
These technologies have incredibly wide application. Think of what censorship now can look like in the modern era. Before, you used to have censors have to read stuff. Now, can you build algorithms that read everything anyone says and automatically censor? What about, if you’re a dictatorial regime, automatically finding people that you should send your police after?
I think the width of applications, the breadth of applications, and the ubiquity of access make this just such a complicated problem. And I don’t want to understate the complexity of the problem because I think what’s happening right now on all the alignment stuff, all this stuff, is people are understating how hard the problem is and thereby settling for what appear to be just . . . I don’t know, Band-Aids, like we’ve done something for the sake of saying we’ve done something. These are hard problems. And so, I think, I . . . Yeah.
Bethany: I would have gone in a different and perhaps more prosaic direction than Luigi. I would have pointed to our financial system, as exemplified by the global financial crisis in 2008, as an example that we don’t understand and can’t chart and can’t figure out where things are going. No one, still to this day, could tell you why a blow-up in subprime mortgages affected every corner of the financial . . . I mean, people have answers for what happened, but no one can chart out a linear math proof as to this did this and this did this and this did this. It was, once again, an explosion that then had all these unexpected ramifications.
I think we have all these failures of regulation and of our predictive powers in the past, in ways that make what’s happening now perhaps a little more frightening. But when you say that we can’t regulate and that we’re understating the scope of the problem and just basically doing things to say we’re doing something, then what do we do instead? Do we do nothing?
Sendhil Mullainathan: Yeah, I love it. No, no, no. I think we should definitely do stuff. Sorry, I don’t want to imply that. For me, I always just like to first and foremost pay respect to the complexity of the thing I’m solving.
I love your finance example, though. If you look at the history of financial regulation . . . It’s not, and you two would know much more about this, so I’d actually like to hear from you. My impression is it was a sequence of unexpected shocks. Somebody invented the idea of a share in a company, and that led to a bunch of weird bubbles. And that led to a bunch of things where investors were getting fleeced. Then, we had to regulate the sale of those shares and what a share could look like. Then, that led to governance problems, and we had to regulate governance . . . They all led to something weird.
But you asked the question about . . . Let me just throw out some proposals, and I’m curious what you all think. One proposal that I’ve been fond of is, don’t regulate the algorithm at all, or you can if you want, but regulate the user of the algorithm, so that the liability sits entirely with them. I don’t care how you did it. You use an algorithm, you posted . . . It doesn’t matter to me how you did it, you are responsible for it.
That would change a lot of things. For example, a lot of people are adopting medical chatbots or thinking about adopting medical chatbots. Now, we know ChatGPT has a lot of hallucinations and things like that. What’s the incentive of these people adopting medical chatbots to get rid of these hallucinations?
Right now, it’s in this weird gray area. Is anyone responsible if this thing gives bad medical advice? If we say, yeah, the person responsible for it is the person under whose banner this advice is being given, boy, would we see the health system become a lot more skittish about just cavalierly . . . As they should. They shouldn’t hide just because it’s an algorithm giving the advice. If one of your nurses did this, you’d have a medical-malpractice lawsuit. Nothing has changed. We don’t care that it’s an algorithm.
That’s not a bad default, I think, to start off at, which is there’s a person who is deploying this thing, and that person should be liable as if the algorithm was acting in their stead. If we start from that default, we would then ask the question, why should I give a safe-harbor clause to anybody to be able to say, “Hey, I didn’t do it, my algorithm did it”? That’s a principled question I could then start answering. What are circumstances where I want to give that safe-harbor clause?
But the default would be, no one has it. We’d have to actively give a safe-harbor clause. That would strike me as at least one proposal, that however things evolve, we now have some control over the situation. We would say, everybody’s incentivized, and we would give safe-harbor clauses to promote innovation.
For example, we’d say, look, we’ve decided there is some value in people who don’t have access to medical care being able to get access to algorithms that read their X-rays. So, we’re going to give a safe-harbor clause in those situations to expand care, but under these circumstances, so we can see whether this is actually causing more harm than good. Fine, now, it’s not. Now, you’d say, I’m getting rid of the consequences of medical liability in order to expand something.
Luigi: I agree with you that, to some extent, regulation has arrived too late there. So, you need to intervene from a governance point of view. But I think I’m very humble after the experience we just went through with the turmoil at the top of OpenAI. Ironically, OpenAI initially was chartered with the best idea: our primary fiduciary duty is to humanity. That’s what the OpenAI charter says. And they were governed—they’re still governed, as far as I know—at the top by a board whose fiduciary responsibility is to humanity. But then, they seemed to behave in a very different way. So, how do we get out of it?
Sendhil Mullainathan: But I’ll go back to the wideness of it. Even if OpenAI behaved perfectly, that’s not going to stop anybody else from developing. I think that the distortion OpenAI has had in this conversation is it’s made everyone think this is a monopolistic or oligopolistic market. It is not at all. It’s a free-for-all.
I mean, it’s in the interest of the people at the top to convey the idea that they’re the ones that control everything, but it’s very unlikely that that level of innovativeness is not going to be much more widespread.
I would even double down on your governance thing. Even if we could govern how OpenAI does this, Google does this, and whatever, take the top . . . Meta and how they all do this, there are going to be places in China that can have it. There are going to be places in Iran that can download and start running their stuff. They have great technical people. There is a Pandora’s-box problem here.
Luigi: I completely buy the fact that OpenAI is not a monopolist, but I don’t believe this is a perfectly competitive market. If it was a perfectly competitive market . . . First of all, OpenAI would not have needed the $13 billion from Microsoft to develop. It could have happily remained a not-for-profit without raising that amount of money.
Again, if this was a perfectly competitive market, when Sam Altman walked out or was forced out, and some of the employees or most of the employees were walking out, OpenAI would say: “No problem. We’ll hire some other people.”
I think we live in a world that certainly is not a competitive world or a monopoly world. It is an oligopoly of a few people, and these people end up having a disproportionate amount of influence on the future of humankind. I think that the option of regulation works very well for this liability issue, but it doesn’t work very well to direct the future of this.
This is where I think we need governance, but I don’t know exactly what governance we need because as you said, it’s not just the governance of one individual. It is a broader governance.
Paradoxically, I thought about this, and I was talking with my colleague, Oliver Hart. He told me that if this situation with OpenAI had happened not in California but in the state of Washington, how different it would have been. We know that the state of California does not enforce noncompete agreements. And so, I don’t think that Sam Altman could have walked away and gone to work for somebody else like Microsoft if he had been working in the state of Washington, but in California, he could do that, and so could the 600 people who threatened to leave with him. So, paradoxically, this freedom creates your problem on steroids.
Sendhil Mullainathan: What’s great about your thing is, OK, let’s create a fork. We’ll do A and then B. A is, let’s assume this is an oligopoly. I like the way you’ve put it. If it’s an oligopoly, we have at least a good corporate-governance framework to think about. We have had many proposals in the past that we’ve toyed around with, and maybe it’s time to figure out which of those might actually work, where we say, look, getting a company to serve even shareholders is hard, but here it shouldn’t serve just shareholder interests; it should also serve broader public interests.
Do we have some board members who are tasked and appointed for the public interest? We’ve had proposals like this, and I love the way you’re putting it because now this feels like a manageable problem. What type of in-the-weeds regulations could we have involving, can an employee leave from here and actually have a noncompete? What kinds of monopsony are we allowed to have? What would it mean to actually have a board of directors, some of whom were solely focused on the public interest? That’s not an unreasonable question. Could we implement that?
I think the other fork in the road is, let me try my best just to say, you guys should walk away from this at least keeping the possible hypothesis that we don’t have an oligopoly. Here’s one way to think about it. The best argument for the oligopoly, I think, is that all the training data that OpenAI has, other people won’t have. That’s the best argument, in my mind.
But outside of that, the billions that are being spent on compute, the irony of it is that our ability to do what OpenAI was doing just a year ago, the compute cost of that has gone way down. That’s the innovation that’s happening. The innovation that’s happening is we’re learning how to do this stuff at lower and lower and lower cost.
It’s why the open-source models are actually extremely good. I’d encourage you to just try them. Are they at ChatGPT’s level? No, but for a funny reason. It’s not obvious to me that they’re not much closer than they seem. OpenAI has done a bunch of stuff inside, on top of the language model itself, to make it look good to the average user.
The open-source community is just building the language model, the workhorse, and not doing this other stuff. I don’t know what the real gap looks like if you get rid of the fringe stuff. And the open-source models are the best way to see this. That is not an extremely well-funded community, but those models are extremely good. And for a lot of applications, you would be perfectly happy just to pull those down and use them.
I like the idea of regulating or putting governance in these places, but I do think we should also just keep a strong hypothesis that this stuff’s going to get out of the bag. I guess the other thing is, I’m a little cynical. It is in the interest of these few companies to portray the image that they are the only ones that matter. It’s in their strong financial interest, extremely strong. And so, I am skeptical of that, partly because of that reason, but also just, yeah, play with the open-source models. You’ll see how good they are, which then should make you say, wait. It’s just the open-source community. If they’re getting this good, what the hell is happening?
Bethany: Speaking of money and financial incentives, I have a really basic question, which is, I think it’s always really tempting to think, for-profit, bad, not-for-profit, good. If we just make these companies not-for-profit or get not-for-profit representatives on the boards, we fix things, and they’re a countervailing force. But is it really so clear, when it comes to AI, given the myriad of motivations that people can have, that the profit motive is always bad? And is it so clear that a not-for-profit agenda fixes anything?
Sendhil Mullainathan: I think when we frame it as profit versus not-for-profit, we hide the true differences of opinion that exist, independent of the profit motive, as to what socially good means. In the alignment literature, for a lot of the alignment work people are doing, it’s not obvious there’s broad agreement on it. Or at least, there’s certainly not universal agreement. We’re actually giving power in the nonprofit structure to the people who are deciding . . . For example, here’s one I agree with. It feels like having these things use racist epithets is not a good thing. But there are many people out there who would say, if it’s in a joke, why is that a problem?
But as you go down the list of things we align on, it gets more and more divergent. An example could be, if you look at the latest versions of ChatGPT, like GPT-4, recently, because they’ve started doing more and more of the alignment, it’ll just refuse to do certain things in the interest of alignment, and you’re like: “Really? We’re not supposed to do that? That’s just weird.”
I think this just speaks to your point that calling it profit versus not-for-profit hides the fact that we don’t have collective consensus on what alignment would look like, and that is a deep problem. At least we have collective consensus on what profit looks like, I mean, for better or worse.
Luigi: But I think you exaggerate a bit. If you go into social issues, I think, of course, we may quickly disagree, but when you think about hurting humankind in a major way, I hope that the disagreement is much less.
I think we should learn from the mistakes we made in the past. If you look at, for example, Facebook, Facebook experimented with a lot of behavioral-economic stuff to maximize engagement. At least from what I read, they basically paid zero attention, zero, to the consequences, the harm that it would create, for example, to young women or to minorities that are persecuted in various countries. I don’t think that protecting the mental health of young girls or protecting the minorities in Myanmar are things that we would vastly disagree about. And if you’re only focused on profit, you don’t give a damn. At the end of the day, they made a lot of money, and they don’t care.
We should not go into the details of being super politically correct, because that’s where we get bogged down. But on some fundamental principles, we should intervene.
Sendhil Mullainathan: I think it gets more complicated far more quickly than you’re making out, Luigi. For the mental health of adolescents, should Facebook prevent the posting of photos where people are in bikinis and look very thin? What if I showed you there’s lots of good evidence that that starts to create eating disorders? Now, that seems all philosophical, but first, it’s actually real. It’s a real concern. The genuine mental health of teenagers is at play. But second, this is why content moderation has proven impossible. There is no platform that’s doing content moderation that many people are happy with. Everybody’s bothered.
Luigi: No, but . . . Sorry, let me interrupt you there because you are absolutely right that the implementation is very complicated, but I don’t want this to be an excuse that we do nothing. In particular, in the case of Facebook, it’s not that they made a mistake on the right trade-offs. They didn’t even consider certain things. From the whistleblower evidence, they were brought evidence that this was problematic, and they paid zero attention. Now, you are saying we are not on the first-order condition of the right . . . But if you put zero weight, it is zero weight.
Sendhil Mullainathan: No, no, no. I’m definitely not saying we shouldn’t do anything. I think I’m asking the question, how should the nature of what we do reflect, for example, the heterogeneity of opinions that are there? Here’s an example. I’m not even saying this is a good idea, but if we just took the idea that we’re going to have board members that represent public interests, the fact that public interest is varied actually raises an interesting question of, how should those people be chosen? It’s not unreasonable to say, we’re going to have some form of public participation in the choosing of those board members. Maybe the right thing is to actually have an equilibrium where some companies have some opinions reflected, and other companies have different opinions reflected. This is not an argument for doing nothing. It’s more just an argument for saying, given that we live in a pluralistic society, what does it mean to regulate in the collective interest?
I don’t think we want to end up in a situation where we do nothing, which I agree with. I also don’t think we want to end up in a situation where we only do that thing we all can agree on, because that seems too little. All of us will be unhappy. I think we need to understand, how are we going to allow what America has done well historically, which is some level of pluralism? How are we going to have pluralism of governance, where there’s always some minimum level of governance, but there’s different opinions as to how this ought to be governed being reflected, if that makes any sense?
I’m not proposing nothing. I’m more asking, how do we get . . . Maybe this is a different way to punch the question back to you. We clearly need some innovation in governance. How are we going to get experimentation in governance going? Because that’s what we need at this point. We need some experiments in governance.
Luigi: We had on our podcast earlier this year Hélène Landemore, who is a political scientist at Yale, and she’s a big supporter of the idea of citizen assemblies. Basically, they are randomly drawn groups of the population that deliberate on these issues. What do you think about this idea?
Sendhil Mullainathan: I love this. These technologies also enable certain kinds of governance we never had before. Why do we need to have a board member that we pick from the elite who goes and sits . . . We have a lot of things. Citizen governance is awesome. We could have people making votes, certain subsets of people, on specific design choices, on specific alignment questions. There’s just so much that you could imagine doing, and I think that it’s going to require some amount of just trying and seeing what happens.
If there was a way we could encourage, either in the AI space, or not even in the AI space, let’s just pick a traditional sector . . . What would it mean to have citizen governance in, I don’t know, utilities? That’s been a persistent question. Utilities are supposed to be in the public interest. We have some regulator, we have some . . . What would it look like to have some governance innovation in the utility sector? That’d be kind of interesting. No one’s going to bother us because it’s still just electricity, gas, and water. But at least we’d have a place where we tried stuff, and we’ve gotten some citizen governance or other stuff. I’m sure if we scoured, we’d find some really direct voting on specific issues that would be interesting, like shareholder proposals that never get people to vote for them. Could we actually, really, radically expand the voting in those areas? There’s cool stuff that could happen, and it feels like . . . This one’s awesome. This is great. Oh, this is super interesting.
Bethany: I love that idea because it also brings a bit of optimism into the discussion. We think about the negatives of AI, but this is a way in which conversations about the governance of AI could actually become conversations more broadly about governance, and perhaps it could reflect back . . . Backwards isn’t quite the right word, but it could reflect back in a way that’s helpful.
Anyway, since we’re running out of time, I actually had a last question for you. Remember Marc Andreessen, a few months back, penned this manifesto? Penned, that’s an odd choice of words. I wonder why I used it. Anyway, he wrote this manifesto that was essentially the manifesto of the techno-optimist. And there are also, obviously, a lot of techno-pessimist people out there who say that AI is going to be the death of humanity. Where would you put yourself on that spectrum? Or do you not think that that’s the right—
Sendhil Mullainathan: Before I answer that, I have two things. One is, what do you have to do in life to be able to write a manifesto? Because I would love one. I don’t know that I have anything to manifest, but I would love to reach the point where I can write a manifesto. That’d just be neat.
Bethany: That’s a good question.
Sendhil Mullainathan: The second question is . . . This is just a piece of psychology. What is your best guess, based on everything I’ve said, of where I land in the optimism-pessimism camp? Just out of curiosity.
Bethany: I think, weirdly enough, you’re more optimistic than your language would suggest. If I were just to look at the literal interpretation of your words, I would think you were fairly pessimistic. But perhaps it’s your tone or perhaps it’s something else—I get a sense of optimism.
Sendhil Mullainathan: You are totally right. I’m just so basically optimistic. My view is these technologies are going to make us all better off, for sure. The question is, how do we make sure that happens? Because there is risk associated with them. And for me, governance, regulation, it’s all just the way to get us to what I think is a really good state that we couldn’t imagine before.
That’s not to minimize the risk, but I think I’m fundamentally optimistic that there’s a much better world out there because of these technologies. For me, that’s what makes me excited about governance and regulation. I feel like it’s stuff in the service of getting us to good places we couldn’t otherwise get to. If I genuinely thought these technologies, on net, were just bad, that would just be a depressing activity to engage in, because all you’re trying to do is hammer out and prevent bad stuff from happening.
That’s fine. That’s policing, I guess. But policing seems like a tough, tough life. Whereas here, I think we’re enabling truly . . . I mean, I think a lot of the problems we face—inequality, even psychological problems, depression—I just think these technologies enable so many good things, if we can just avoid the bad. I’ll read this manifesto now to see what this guy has to say.
Luigi: If I have one extra minute to ask my last question: since you were saying that you’re in favor of citizen governance, you have been a colleague of Larry Summers for many years, and he has been appointed to the board of OpenAI. To what extent do you think that he can represent citizen governance . . . What is the signal that this choice sends? Reagan used to say personnel is policy. What policy is embedded in that choice?
Sendhil Mullainathan: Obviously, the good thing for us about having someone like Larry Summers is that whatever one may think of his policies, or like it or dislike it, he’s not someone you imagine is going to be easily corruptible. He’s got his own thing.
But if I put on my skeptical hat, I would think that the reason they picked Larry Summers . . . If I were head of OpenAI, here’s what I would have done. I would have said, there are a lot of people talking about government regulating us. What we really need is someone who knows how to talk to these people in government, because we don’t know how to talk to them, and has a lot of connections. And we need someone like that on the board, so they can go and do that for us. That’s how I would have picked. When you look at it that way, he’s a great choice.
Now, Larry Summers has enough of a personality that he’s going to try to influence you. That’s fine. That’s the tit-for-tat that you’re taking on. But I’m pretty sure that you are not bringing him on so that he can influence you. You would be bringing him on because you’re like, how do we keep these people in Washington off our backs so we can continue to do whatever it is we want to do? Maybe that’s too cynical.
Luigi: No, no. You’re saying it’s a great choice for OpenAI, but is it a great choice for humanity?
Sendhil Mullainathan: I think, of the set of people that they could have picked for this purpose, putting aside what anyone thinks of his policy preferences, I just think it’s really important . . . You remember this literature, Luigi, on board members who quickly get co-opted as insiders because they become buddies.
The thing people dislike most about Larry Summers is that he has his own viewpoint and will say it and it’ll annoy people. That, you have to, in this context, be happy with. If you’re going to fill that role, filling it with someone that I feel confident is not going to be subverted has huge value, whatever you think of his policies. So, I’m happy with that.
Luigi: Thank you very much. This was lovely.
Sendhil Mullainathan: This was so fun. Thank you.
Bethany: Yeah, this was so much fun. Thank you for the time.
Sendhil Mullainathan: Yeah, I really enjoyed this.
Bethany: I did, too.
Luigi, what did you find the most interesting? Since you know Sendhil, I want to start by asking you if you found anything surprising or unexpected in what he said.
Luigi: Yeah, I actually found very surprising how optimistic he is. And I took in a negative direction his claim that AI is so competitive everywhere that you basically can’t regulate it. First of all, I think that that’s factually wrong. I spent a few hours after the conversation searching, and I asked a couple of colleagues, and the answer is no. In fact, what used to be OpenAI, as Elon Musk says, is now closed AI. It’s not that I can take the models of OpenAI and apply them myself.
And certainly, if I am in Saudi Arabia or in Iran, I cannot easily catch up with what’s happening in OpenAI. But even in the United States, there seems to be a limited supply of chips and power. So, even money might not be enough to get you what you want, let alone if you don’t have money.
The fact that OpenAI required, what, $12 billion to get started? Even Anthropic, which is a competitor, went and raised a lot of money. The biggest issue seems to be the money that you need to invest. My understanding is that sectors with a lot of capital investment tend to be oligopolistic. And so, the idea that competition from abroad will undermine any kind of ability to regulate, I think, is wrong.
Bethany: That’s interesting. I don’t know. I could make an argument that the field is still fairly open today, although I don’t know, that actually would be a really interesting question as a journalist to go and investigate. But I can certainly see that what you’re arguing will become true, that it will become a winner-take-all business. The company that is doing the best AI will get the most investment, which will enable it to continue to do . . . It’ll be like Google all over again. And so, I can certainly see how I would come around to your view, eventually.
Luigi: I think the idea that anybody can do this is wrong. And also, sometimes there is a very important strategic choice involved, where you might argue that it is not bad in an absolute sense, but it certainly concentrates the rents a lot.
We have seen with Web 2.0 an enormous concentration of power and rents in a few platforms. The strategic choices are very important. The internet was born much more decentralized because the design of innovation was in the hands of the government at the time. The government made a level playing field. The moment the internet was privatized, it led pretty quickly to an enormous amount of concentration. Here we are, in a world in which some actors benefit tremendously from an open internet. It would be difficult to train OpenAI’s models without Wikipedia. OpenAI basically free-rides on Wikipedia but then privatizes all the profits coming from that. I think that that’s part of the problem.
Bethany: Well, I actually have . . . This is a tangent. It’s not just a little tangent; it’s a big tangent. But I actually have this argument, and I’m going to put it out there because maybe our listeners or you can either agree with me or disagree with me.
OpenAI could be the savior of journalism. That does risk making it more of an oligopoly because it ups the amount of capital that would be required.
What I mean by that is that AI, in order to be successful going forward, ChatGPT, whichever agent you think of, is going to need access to the New York Times, to the Wall Street Journal, to local newspapers. It has to. If it doesn’t get access to them, then it can’t be up-to-date, and it can’t deliver the information and the research that people are going to be looking for. In a sense, the media has an opportunity to do it again and do it right this time by requiring huge subscriptions from any AI agent that wants to access their publications. And so, AI, in a very strange way, could end up being the savior of journalism. What do you think about that?
Luigi: This is, again, where the market structure is really, really important. Imagine for a second there is only OpenAI. The bargaining power is all on the side of OpenAI. If I am OpenAI, can I live without the New York Times? Yes, if I have the Wall Street Journal and the Guardian and everybody else, I can live without the New York Times. And so, if the New York Times charges me an arm and a leg, I could say: “You know what? Not only will I not use your reporting, I will make sure you become irrelevant.” I could have an algorithm to try to bypass any information from the New York Times.
At the end of the day, the New York Times would be forced to sell for not very much. We know this discussion because some of that has taken place in Australia, and now, it’s taking place in Canada on the issue of news, between Facebook and Google on the one hand, and the government on the other hand. They’re trying to force Facebook and Google to pay something for all the news they get. Even with the intervention of the government, it’s not clear that things are going the right way, particularly in Canada. So, when you have a very concentrated market position, I’m not so sure that would be the solution.
Bethany: Yeah, I guess you’re right that my optimism does depend on the market structure, which brings us back to where we were.
One thing we touched upon in our conversation with Sendhil was our previous guest, Hélène Landemore, and her ideas about how to govern AI. She’s a professor of political science at Yale whom we had on to talk about citizen assemblies as a way of reforming democracy. I thought her idea, which we touched upon briefly, that citizen assemblies are the right way to govern AI . . . I don’t know, maybe it’s unworkable, maybe it’s naive in a world of the huge capital requirements that AI has, but I really liked her ideas.
Luigi: Yeah, particularly the potential application to firms. Think about OpenAI. You were saying you have a little bit of skepticism about this issue of fiduciary duty to humankind. Your skepticism is because you say, how do we actually implement it? First of all, what is humanity, and how does humanity speak with a common voice? The United Nations is not doing particularly well. It’s not even really representative of humanity—at least not one person, one vote.
I think your idea applied to OpenAI and in general to firms is to actually have some random sample of the population of interest—in this particular case, it would be humanity—and have humanity deliberate with the appropriate provision of information. So, create a citizen assembly of humankind that deliberates what they want from AI.
Bethany: Yeah, I think that makes a ton of sense. I don’t know if it’s doable, but I really like that application of her thinking. And, of course, I really like it when previous guests we’ve had on this podcast turn out to have relevant thoughts for other things. I like that cross-fertilization, I guess, for lack of a better way of putting it.
Luigi: I think the issue of Larry Summers . . . Sendhil was incredibly clever, and what he said is absolutely correct. What I think he left out is that Larry Summers is famous for being a super-aggressive, pro-technology guy, very much in favor of let it rip as fast as possible because it will bring so many innovations and so many benefits. Everything else is collateral damage. And I think it’s a legitimate position. I disagree, but I think it’s a legitimate position. But going from a not-for-profit with a fiduciary duty to humankind to let it rip as fast as possible is a big change.
Bethany: Well, it’s for sure going to be really interesting to see what role he plays. I actually look forward to talking about it in a year, assuming humanity is still here, and AI doesn’t destroy us that quickly. But I really look forward to circling back.
On the issue of his beliefs, I hear you on that, and that worries me, too. But this is, in some ways, one of Summers’s last acts and one that could be critical or decisive to the future of humanity.
You can believe that even ego-wise, he is going to want to get it right, and that his mindset, his pre-existing beliefs, may be that getting it right is letting it rip. But it is such a big question, and it will be carried out in such public fashion with every move scrutinized that it’s not as if he’s in the shadows pulling strings over something that nobody really thinks matters. Does that make sense?
I’m hoping that that provides a level of discipline for everybody involved in this, that their actions will be studied by subsequent generations—assuming there are subsequent generations—and people want to go down in history well.
Luigi: Maybe it’s Orwell, somebody said: “Who controls the past, controls the future. Who controls the present, controls the past.”
Bethany: Oh, interesting. I don’t think I know that quote.
Luigi: You never heard that sentence?
Bethany: No.
Luigi: It’s about the study of history. I was wondering about this because you say maybe humankind will not end with OpenAI. However, if you control OpenAI, you control the narrative, and so, you control the future. And so, future generations will study how great Larry Summers was because everything has been written by Larry Summers.
Bethany: Oh, my God. Way to conclude our podcast on a completely dismal note, Luigi, just as I was reaching for optimism. I think that’s a fantastic ending.
Luigi: As the last episode of the year, it’s a pretty good one.