Artificial-intelligence startups are attracting big funding, but are the valuations justified? On this episode of The Big Question, Chicago Booth’s Stefan Hepp, SID AI’s Max Rumpf, and Chicago Biomedical Consortium’s Michelle Hoffmann discuss how A.I. is reshaping entrepreneurship, and how founders and investors are responding.
Hal Weitzman: Startups that focus on A.I. have never held more promise, and in contrast to the rest of the startup world, funding is flooding in. To veteran investors, this can all feel a bit like the dot-com bubble of the late 1990s, which burst in 2000. So are the valuations justified? How does the rapid growth in A.I. startups affect the rest of the startup world, and what does it all mean for the incumbent tech giants? Welcome to the Big Question, the video series from Chicago Booth Review. This is a special episode primarily made as a podcast to mark the recent launch of the Chicago Booth Review Podcast. But as always for the Big Question, we’ve assembled an expert panel. Stefan Hepp is adjunct assistant professor of entrepreneurship at Chicago Booth. He’s CEO and chairman of the board at INIZIA Capital, and was previously global head of private markets at Mercer. He’s an investor in multiple venture capital funds. And he is the author of the forthcoming book “The Rise of Private Capital.” Max Rumpf is the founder and CEO of Sid, an A.I. startup that develops data infrastructure for A.I. systems. Before that, he was an A.I. researcher at ETH Zurich, where he worked with the large language models that are behind ChatGPT. And Michelle Hoffmann is executive director of the Chicago Biomedical Consortium, which promotes collaboration and commercialization among scientists at the University of Chicago, Northwestern, and the University of Illinois. Before that, she was senior vice president of deep tech at P33, a nonprofit focused on innovation and inclusive growth. Panel, welcome all of you to the Big Question. Stefan Hepp, I’m going to start with you. Can you give us sort of a high-level overview of what’s going on generally with startup funding and how A.I. startups are being treated differently?
Stefan Hepp: Well, of course. Since the end of 2021, when we had the end of the big tech boom and the big venture and private-market boom, VC funding has collapsed. It’s down between 60 and 70 percent, whether you look at the number of deals or the volumes invested in various funding rounds. So startups at the moment are experiencing a cash crunch, which is mitigated by the large amounts they could raise prior to the turn of the boom. So they are still OK, but money is down and money is harder to get. If you contrast that with A.I., you are in a different world. A.I. is attracting all the funding at the moment—and increasing amounts of funding. For example, when you talk about valuations, valuations in early A.I. rounds used to be in the low tens of millions of dollars, and then they increased to an average of about $40 million in 2020–21. Now, unlike other areas of VC investing, where those valuations have come down significantly, in A.I. they have continued to expand, and they are now running at about $90 million on average for a Series A or Series B round. So that’s a big increase. It shows you that there is something going on that is, we could say, against the trend.
Hal Weitzman: Right. Max Rumpf, you’re there in the heart of it, in Silicon Valley. Is that kind of what you’ve been seeing as well?
Max Rumpf: Absolutely. I think there’s a feeding frenzy on A.I. startups, especially those able to show some sort of traction and distribution in the community. And of course that seems quite bubbly from the outside, but we’ve also seen breakout successes of companies that have gone from basically nothing to quite significant revenues in a very short period of time. For example, ChatGPT went from zero to 100 million users in a matter of weeks. And this is perhaps also the contrast to the dot-com bubble, because I think one component that went into the dot-com bust was that the demand side really couldn’t scale quickly enough to match the valuations. But with A.I. companies, the distribution is very, very simple. Everyone already has internet access, and the technology is so simple to use through a simple chat interface that even a grandmother could conceivably learn it quite quickly. So really you have almost unlimited upside on the other side, which is somewhat supporting these valuations, or supporting the expectations behind these valuations.
Hal Weitzman: Michelle Hoffmann, let me bring you in for a second because you are obviously based in Chicago and you’re in the biomed world. So are you seeing this distinction as well between those that tout their A.I. credentials and those that don’t?
Michelle Hoffmann: I would say it’s a little bit nuanced. So I think what people are very excited about are these large language models, and A.I. is obviously larger than that. So in terms of life sciences, really, or at least for biotech, people are very excited about A.I. and its ability to develop new medicines and pick winners and things like that. And again, I’m not an A.I. expert. I am a biotech expert. I dabble in A.I., as does now, apparently, the entire world. But from what I do know, the quality of your A.I. is limited by the quality of the data that you’re ingesting, and the large language models are ingesting quite a lot of data. Unfortunately, in terms of drug development, we still don’t have large data sets about how drugs work in humans in a way that allows us to predict which molecule is going to work and bypass all of the really extensive clinical-trial and regulatory testing that you need. I think what is really fascinating about what A.I. is doing and is going to do in biotech is that it’s really going to help with the acquisition of large data sets. This is something that the University of Chicago is certainly at the forefront of: the idea of A.I. in science. How do you use A.I. to acquire more uniform data sets and then actually use the A.I. to analyze them to give you more insight into the biology? That, I believe, is what is going to be driving A.I. in biotech. And I’ll just give you a quick example. People are always touting that A.I. can be used to find new molecules, and that’s the holy grail: we’ll bypass clinical trials. But look at where the valuations are with, say, a company like Recursion, which I think is probably one of the better A.I. biotech companies out there. One of the reasons they have the valuation they do is that they were able to get really complex microscopy data and use A.I. to analyze it.
It meant that they had a data set that nobody else did, and that allowed them to get to the biology faster than others. So I know that’s a long detour around where we started, but the TL;DR is that if you can get new biological data sets and you can use A.I. to analyze them, that’s really what’s going to be driving the new set of biotech A.I. companies.
Hal Weitzman: Stefan Hepp, in the late 1990s—you and I are old enough to remember that period, the dot-com bubble—every company claimed to be a tech company. There was no company that didn’t make that claim, and some of them were of course garbage and ended up being worthless. Is that happening now? Is every company claiming to be an A.I. company?
Stefan Hepp: Well, it’s happening all the time. Yeah. After the dot-com boom and the tech boom, every company was a tech company. So expect every company to call itself an A.I. company now. But as you know, I was writing a book about how private markets evolved, and in doing so, I’m quite annoyed by A.I. And the reason I’m annoyed by A.I. is that you would expect after 2021, when the previous boom came to an end, to have a peaceful and quiet period in which to write this book: this is an epoch that is finished, and it will take some time for whatever the next boom is going to be to start. And what happens? We have ChatGPT, which is taking the world by storm, and now it seems that, at least in that segment, we are having the next boom in venture capital. Now, this is surprising because A.I. investing is not new. We have been doing it since the ’90s. There have been substantial investments after the great financial crisis, from 2015 onward. And those investments were not that successful for investors. If you look at public A.I. companies that were brought to the market before 2021, they are down by 80 percent. So even public investors have nothing to write home about. And the performance of those unicorns that got listed is not better than that of other tech companies that got [inaudible] on NASDAQ or got an IPO. So the question one really has to ask is: What is different? How is this new A.I. different from the old A.I., which, at least from an investor’s perspective, hasn’t done much in terms of performance? And the second question to ask is: What does this pivot toward the new A.I. mean for investors who are still sitting on a lot of portfolio companies that have not been realized? They’re already worried about valuations. They know those valuations are going to go down.
I mentioned earlier that it is also more difficult to get funding, and now are they all going to be starved of new financing because all the dollars are going to go into A.I.? That’s a big question for investors.
Hal Weitzman: You’re talking there about non-A.I. companies that got that early funding, they grew to a certain stage, now are looking for a lot more funding, and suddenly they’re no longer as sexy as they were.
Stefan Hepp: Yeah. That’s one aspect. The other aspect, and maybe Max has a perspective on this, is that there are A.I. companies that are not cool anymore. And the question is, if I am an uncool A.I. company sitting in a venture-capital fund, can I reform my business model or my approach to become cool again? Or am I a walking-dead portfolio company that is soon going to be written off? I’m interested in his perspective because that’s a question that money is riding on.
Hal Weitzman: Well, let’s turn to Max Rumpf because Max, you’re the man best placed to talk about what’s cool in terms of A.I. startups. Michelle talks about what’s cool and exciting in biotech. What about in Silicon Valley? What’s attracting investment? What are investors most excited about in terms of A.I.?
Max Rumpf: I think there’s a really important distinction to make from a technological perspective. Before 2020, which is the dividing line I’m going to use here, most A.I. was highly specialized. You could create an A.I. that would solve a very, very specific task, and it could get quite good at solving that specific task. These are the companies that Stefan has talked about as the old A.I. companies. But that’s usually very expensive, and if the task changes slightly, that technology can become obsolete. What we now have with these new large language models is a technology that is incredibly good at a very, very wide range of things. It can write an email. It can do pretty much anything. So you now have somewhat of a general-purpose A.I., and what this means for companies that have traditionally done task-specific A.I. is that they’re now quite badly placed. I think one good example here is DeepL, a translation website out of Germany, actually very successful, that raised at above a $1 billion valuation a few years back. What they effectively do is translation. And now you have, for example, ChatGPT, and translation is just one of a dozen features that it can do. So a lot of companies are seeing that the entire product they built before, which of course cost hundreds of millions of dollars to build, is now a feature of a much, much larger suite and has become commodified in some sense. I think it’s going to be very, very difficult for these companies to spin themselves as something new and exciting, and the best bet for them is to leverage their existing distribution or any unique data sets they have to get an advantage over the others.
Hal Weitzman: Just to finish that thought, that was the pre-2020 ones. What about the post-2020 ones? I mean, what’s exciting, what is cool? To go back to the question.
Max Rumpf: So, what’s cool: I think we are very much at the beginning of the curve here when it comes to capabilities. It really started off with completion rather than this chatty interface. And the newest generation that we’ve seen released over the last two months is really about seeing how much you can take the human out of the loop. So looking at what happens if you let these models self-prompt and self-iterate, and then moving along that curve. And because there’s such an explosion on the capability side, we see that a lot of stuff gets obsoleted very, very quickly. If you’re a company developing a product for an end customer, these obsolescence cycles are incredibly short now. If you started before last December, you most likely did not have a chat component yet, and your product was obsolete because it was missing that. And if you started before February, you wouldn’t have these agentic systems that can complete entire tasks without human intervention.
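Max’s description of models that self-prompt and self-iterate can be sketched as a simple control loop. This is a hedged illustration, not any specific product: `agent_step` is a hypothetical stand-in for a call to a language model that returns either the next action or a signal that the task is complete.

```python
def agent_step(task: str, history: list[str]) -> str:
    # Stand-in for a model call: a real system would send the task plus
    # the history of prior steps to an LLM and get back either the next
    # action or a completion signal.
    if len(history) >= 3:
        return "DONE"
    return f"step {len(history) + 1} toward: {task}"


def run_agent(task: str, max_iters: int = 10) -> list[str]:
    """Let the 'model' iterate on its own output until it declares completion."""
    history: list[str] = []
    for _ in range(max_iters):
        action = agent_step(task, history)
        if action == "DONE":
            break
        history.append(action)  # feed the step back in as context
    return history


trace = run_agent("summarize a document")
```

The `max_iters` cap is the one human-imposed guardrail: without it, a model that never declares completion would loop forever.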
Hal Weitzman: So when you say that obsolescence is very short, how short?
Max Rumpf: It’s on the order of a single month, sometimes weeks in the space.
Hal Weitzman: So you can develop a company, get funding, and find out a month later that your entire business model has been made obsolete. That’s what you’re saying?
Max Rumpf: Yeah. Or at least that the technology that you’re building on is obsolete. And that someone who would take the same idea and the same business could come and run you over.
Hal Weitzman: OK. Michelle Hoffmann, you had a thought about your world and how it’s been affected.
Michelle Hoffmann: I think it’s really important, again, speaking as somebody who sits on this side of it, and we’ve said this for years, even before A.I. captured the imagination: at least in healthcare and life sciences, the people that have the data are the people that have the value. I can’t tell you how many times I’ve heard, “Is ChatGPT going to replace your doctor?” That type of thing. When you are dealing in healthcare and life sciences, in an area that is regulated and where judgment matters, I think there tends to be a lot more rush to say these models are going to be able to do more than they really can. I know that when ChatGPT came out, or maybe it was GPT-4, I don’t remember, with the long white paper that OpenAI put out, people were saying, oh, well, look, they can do drug discovery. And really what they were doing was taking molecules that were already known to bind certain proteins and saying they could use ChatGPT to find that, very similar to what you would find if you searched enough webpages. And that’s great, but that is not something that you can do for a regulated industry. So just going back to Max’s point, because I think this is important for healthcare and life sciences: you are dealing in most cases with data that needs to be more proprietary, more protected, and that is where I think the value really comes from. As the models progress and become more sophisticated, I think you’re going to find that for the people who actually have data that is protected and proprietary, as well as a set of systems that preserves judgment, those business models are going to be preserved from obsolescence a little bit more. That’s just my hypothesis.
Stefan Hepp: Michelle, if I may ask you a question about that, because I find this is a very important point. The initial business model of generative A.I. was based on that premise: I have access to huge data and huge compute, and with huge dollars invested I can train an algorithm, which, the hope was, gives me a moat against competitors once I have done that. Now in technology, as Max will surely elaborate, that thesis is crumbling. Are you saying that this thesis holds water in biotech? And referring to the points you made about regulation and data protection, is this a thesis that can actually be implemented? Is it investable? Or are you saying that no, it’s not yet investable because we haven’t reached the level of data security and protection we need to develop those models on proprietary data and have real impact in biotech?
Michelle Hoffmann: Let me make two distinctions. There’s healthcare, which is looking at people’s healthcare data, and then there’s biotech, which is how we develop drugs. And there is overlap, but for healthcare A.I., the question is: if you come in and give your doctor your chart data, can they predict that you’re going to have a problem with your kidney or your liver? That’s healthcare. And because of the way healthcare data is regulated, it will almost undoubtedly be proprietary. It doesn’t mean that companies don’t have access to it, but it’s actually very difficult. You’re not going to be able to get some big—and I know I’m using the word wrong—some big data lake the way they do with the internet and these large language models. So I do believe that those business models are viable and going to continue to be viable for the people who can figure out how to navigate, in an ethical and sound way, all of the regulations we have about healthcare data. For biotech, think about the types of data that people need to train models on. Can I look at chemical space and find something that I know is going to go through three phases of clinical trials and be effective? You are not going to be generating that data any time soon. I could be wrong, but not in the next five years. What you can do is get access to a lot of biological data and make hypotheses from there. So hopefully that answers your question, but I’m happy to dig in more.
Hal Weitzman: I have a question for you, Stefan, because you are an investor. So Max described this world where a business model is obsolete in a month. How do you as an investor think about that? I mean, what are you doing?
Stefan Hepp: Well, as I said before, for an investor and not only for a book author, A.I. is in a way bad news because it drains resources from existing portfolio companies, and they may not be—
Hal Weitzman: Right.
Stefan Hepp: Like the cool A.I. and therefore—
Hal Weitzman: In other words, you’ve already invested in all the stuff. It’s no longer exciting.
Stefan Hepp: I expect write-downs to be higher than the consensus at the moment probably expects. So we will have more of them.
Hal Weitzman: Just to be clear, that means you mean there’s a whole load of startups out there that—
Stefan Hepp: That won’t survive, yeah.
Hal Weitzman: Because all the money’s been drained to A.I.?
Stefan Hepp: Yeah. Now, how are investors dealing with the new world of A.I., where, as Max says, you don’t know what’s going to be cool in two months? They do what they always do when they don’t know: they spray and pray. You do many deals. In the past, you didn’t do that, because of the big dollars involved in training the algorithm and acquiring the data. So there you made big bets. Now you make many bets, and that is a normal venture-capital business model. So there is nothing wrong with that. You have 20, 40 bets in your portfolio, and you expect at least half of them not to go anywhere in the long run. But what you also hope for is that there are some that may make it really big, and we are not talking five times your money; we are talking 50 times your money. So one question I have for Max is: What areas are those going to be? And I want to add something. The initial thesis was that generative A.I. is going to hurt Google and Facebook. I’m sure you have all seen the code-red memo from Google in the news. The traditional search-engine business is going to be in decline. But if you look at the stock market, those FANG stocks, the traditional tech stocks, have done very well in recent months. They are also big investors in that area. So as a startup investor, you ask yourself two questions. One, is there room for startups to do really well and become really big? Or is this going to be a technology area where the old behemoths continue to rule the turf? And secondly, if there is room for startups, where are those startups going to be found? In the application space or in the actual model space?
Hal Weitzman: OK, well, that’s two questions. Let’s deal with the first one about the tech giants. Max, what’s your view on that? Are the tech giants going to be eaten up? Are they going to be disrupted or are they going to be able to absorb and kind of invest and grow themselves?
Max Rumpf: The most important thing about the big tech giants is that they have an absolute stranglehold on distribution. You can be very, very confident and bullish on OpenAI’s ability to push, for example, ChatGPT. But if Google places something similar into its regular search engine, it has an incredible and almost insurmountable distribution advantage there. Where people expected the giants to have a huge advantage, but where that advantage has not really appeared, is on the model front. Everyone expected Google and Microsoft and the others to be able to outperform everyone else because they could outspend them, had the larger data sets for their tasks, and could provide the largest models. And this has really not rung true. The open-source community, those people who take the models that, for example, Facebook made public and then tweak them to certain tasks, is doing with a few hundred dollars of compute what Google and Microsoft are struggling to do internally with tens of millions if not hundreds of millions [of dollars]. That has really shown there’s a lot of creativity that goes into building that model capability. And what it means in the medium or longer run is that if the capability is public and there’s no real ability to accrue that capability inside a single company, then everyone will try to come for the margins, right? Everyone’s fighting with the same weapons; any startup is fighting with the same weapons that Google or Microsoft has internally. So of course, over the medium or longer term, there’s competition for the margin. Your margin is my opportunity, so to speak.
Hal Weitzman: OK. And so let’s turn to the second question you posed, Stefan, which was basically back to this idea of what’s cool, what are the areas that have the most promise, do you think?
Max Rumpf: We touched on the model layer itself being quite finicky. It doesn’t look like anyone there has really made money or has any sustainable differentiation, and I believe this trend will continue. Then of course you have the layer above that, which is the application layer, and I think we’re going to see a lot of creativity and a lot of money made there. If you own the customer and you own that relationship, then of course there will always be a way to somehow monetize it. And then, going one step below, you have the infrastructure and the data layer. That’s where a lot of the investment, and especially the incredibly highly priced rounds, is happening: in the infrastructure and the tooling that everyone else needs to actually build these apps, and that will eventually also support the ecosystem and the large companies in delivering this. So really, most of the value accrues to either the application layer or the data layer in funding, and probably also later on in business.
Hal Weitzman: I’m just going to remind people listening or watching this that Max, you do have an interest in this because you’re an A.I. infrastructure guy, so we’ll remember that when we hear your comments. Michelle, do you have a view on this? You made your point strongly about data, and the data is where the prize is, but are there some areas that you think show a great deal of promise in terms of biotech A.I.?
Michelle Hoffmann: I mean, there’s many areas of promise.
Hal Weitzman: So talk us through a couple. The ones that perhaps have missed our attention.
Michelle Hoffmann: I think it goes back to the same theme: A.I. is good at taking very large data sets and gathering insights from them. And maybe I wasn’t very clear earlier, but there’s medicine and there’s biology. Biology is what we study in the lab; medicine and health are what happens in the hospital, in real life. A lot of times, we develop drugs starting with the biology, and we hope it translates to the medicine and the health. What A.I. allows us to do is get more data on that biology phase, and not just more of it but data gathered in new ways. For example, there’s Verge Genomics, another A.I. company. Really, really interesting. And again, it’s not that they were finding new molecules. Maybe I’m beating this to death, but this is what I hear a lot about, or at least this is where the valuations were in biotech in the last couple of years: we’re going to use A.I. to find new chemical molecules and deliver new drugs to your doorstep. What Verge was able to do is use different kinds of biological systems, these human stem cells or induced pluripotent stem cells, to do a different kind of biology and get a lot of data that you could not get in the current standard animal models. And because of A.I., they were able to analyze it in ways that you just could not otherwise. From that, they were able to find a clinical candidate for a neurodegenerative disease. It’s now in the clinic, and they got there really, really fast. So again, I go back to this: they had a novel way of generating and collecting data, and then they used the A.I. to really give it that lift.
And it really goes back to my original thesis, which is, and correct me if I’m wrong, Max, that to me this is infrastructure. I understand there’s obviously a lot of compute power that needs to go into it and a lot of complexity there, but that data layer is, to me, the thing that’s really going to be driving what you can see in A.I., and the value, when it comes to tech.
Hal Weitzman: OK, thank you. I want to come back, Stefan, to the role of the venture capitalist, because you talked about, what was it, “spray and pray.” It sounds a little bit like gambling. So do VCs have a role in weeding out the worst ideas and elevating the best ideas? To what extent are they a filter and not just a kind of roulette wheel?
Stefan Hepp: Well, they are a filter. The problem with venture capital is that even when you filter, what you hope for is that in this sample of companies you have in your portfolio, a few are going to make it big. And here, this is also a question I have, maybe for Max. In venture capital, traditionally you had two risk areas. One risk area is technology. I’ll give you an example: SpaceX. You have to make this rocket fly. You have to make the reused part land again on something the size of a carpet. Obviously, that is technology risk. If you manage to do that, you have ready customers for satellite launches and other things. So finding your customer is not the issue. Now, compare that to Facebook. Connecting a few people on a site where they can share their pictures has no technology risk. You don’t have to invent something to do that. What you do not know is whether only your college mates are going to find it cool, or half of planet earth. And as Mark Zuckerberg proved, it is half of planet earth. But that part, you do not know. So as soon as you go into an area where consumer adoption is a major factor of success, you just don’t know. The people who funded TikTok did not know it was going to be that popular. They knew that in that area there would be new companies coming up that would become popular, and they looked for people whom they trusted, with their competence, their idea, and their drive, to develop those business models in their startups. But who is going to get the gold medal at the end? Who is going to cross the finish line? You do not know. It’s a very different type of investing than investing in technology risk. There, you look at engineering capability. You really have to look at what your chances are of making that rocket fly. So the question for Max is: With the startups that are going to make it, am I taking more technology risk or am I taking more find-your-customer risk?
Max Rumpf: So that, again, depends on what you’re building. I think on the application side, currently, the technology risk actually dominates. Not because building a service on this new LLM stack is so difficult—actually, I would argue it’s quite a bit simpler. You can take a very powerful model, and it can most likely do most of the things you’d want to achieve. The problem is that by the time you have a product and an application to market, the capabilities you built it on are most likely no longer state of the art. And someone who starts that much later and uses those newer capabilities can offer a product that is an order of magnitude better for the end user. That’s where the large problem lies. It’s somewhat a mix of: Am I building on the right technology? And the technology is moving so quickly on the model side, on the capability side, that you’re almost definitely going to build on the wrong stack. That’s why my advice here would be to focus very much on what you believe you can bring as added value. To use a Jeff Bezos quote here, “Focus on what makes your beer taste better,” and be very agnostic about the technology that you’re actually using to achieve it. So if it turns out that a new model or a new way of looking at this becomes better, it’s almost an interchangeable component, and you do not have to completely re-architect everything. And this is, I think, where we’re heading.
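Max’s advice to stay agnostic about the underlying model can be sketched as a thin abstraction layer. This is a minimal illustration, not any vendor’s API: `CompletionModel`, `StubModel`, and `summarize` are hypothetical names, and a real backend would wrap a provider’s client library behind the same interface.

```python
from abc import ABC, abstractmethod


class CompletionModel(ABC):
    """The minimal interface the application codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class StubModel(CompletionModel):
    """Placeholder backend; a real one would call a provider's API."""

    def complete(self, prompt: str) -> str:
        return f"[stub completion of: {prompt}]"


def summarize(model: CompletionModel, text: str) -> str:
    # The application logic never names a vendor or model version, so
    # swapping in a newer, more capable backend is a change at the call
    # site only, not a re-architecture.
    return model.complete(f"Summarize in one sentence: {text}")


result = summarize(StubModel(), "Model capabilities change monthly.")
```

The design choice mirrors the transcript’s point: the interchangeable component is the model, and the value (the “beer”) lives in the application logic around it.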
Hal Weitzman: It sounds a lot like old-fashioned sort of startup advice, like focus on actually getting customers and getting revenue and not on how cool the technology is.
Stefan Hepp: It does, but what we are looking at has big implications. If you invest in technology, you expect that technology to create a moat versus your competitors. Going back to SpaceX: once you manage to build that rocket, it’s not as if Stefan Hepp can come along and within a month or two my rocket is flying as well. And as we saw with Richard Branson’s company, some competitors who try fail, because it’s not that easy. So you have the protection of a competitive advantage. If, as Max says, this becomes a commodity, I don’t have that. So what am I going to do as an investor? I would say if it is technology risk, I run away. Why? Because I cannot protect my investment. I cannot create competitive advantage. If it is not technology risk but more find-your-customer risk (will people buy it? that type of question), then yeah, you can invest, but you do what I call spray and pray, meaning you build a broad portfolio to make sure that those things that win, hopefully one or several of them, are in your portfolio.
Michelle Hoffmann: Yeah, I think this is the fundamental question with venture investing. I may get over my skis here, but this is the central theme of the famous book “The Power Law,” which is all about how investors in the Valley think. It’s the basic difference between a consumer-directed model, in which case you really need to find markets of scale, and a deep-tech model, where IP and technology are your protective moat. And my personal view—now listen, I’m a PhD. I believe in technology; I have been doing this for 20 years. And being old enough to have seen the dot-com boom and other booms as well, I personally believe that when you invest in the actual technology, and when you understand what the technology does, there’s longevity in the investment. Just look at the social networks: they have their moment, they do really well because they’re scalable, and then something else comes along. So that’s my personal belief.
Max Rumpf: If I can add one last comment on what Stefan said: I think you’re totally right that the risk of someone else coming along, copying you, and effectively offering the same product is one of the largest risks here. One of the most emblematic examples is ChatGPT. The raw model behind it was trained, of course, on a lot of data, but what made it actually usable for humans was that OpenAI then had humans rate its outputs and fine-tuned it on those ratings. That process, reinforcement learning from human feedback (RLHF), is what made the model very, very suitable for human use, and it cost them hundreds of millions of dollars just to get right, just to have the humans do the labeling. And then there was an open-source project that simply scraped ChatGPT’s answers and used them to fine-tune an open-source model, copying that behavior. They did that for, I think, $530. So they could take the technology that OpenAI had developed internally and copy it, functionally identically, for a few hundred dollars. And it’s very, very unclear whether there will be any IP protections around this, because the content that went into OpenAI’s model is itself copyrighted to a large degree. So the question is: Does OpenAI even own the copyright on the outputs? And in that sense, can they legally protect it, given that technologically they can’t?
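[Editor’s note: The “scraping” approach Max describes is often called output distillation: a stronger model’s answers are harvested and used as supervised fine-tuning data for a cheaper open model. A minimal sketch of the data-collection step is below; the `teacher_answer` function and the instruction/response record format are illustrative assumptions, not the actual project’s code, and in practice `teacher_answer` would call the stronger model’s API.]

```python
# Sketch of output distillation: pair seed prompts with a stronger "teacher"
# model's answers, producing records an open-source model can be fine-tuned on.

def teacher_answer(prompt: str) -> str:
    # Placeholder for a real API call to the stronger model.
    return f"[teacher response to: {prompt}]"

def build_distillation_dataset(seed_prompts):
    """Return instruction/response records suitable for supervised fine-tuning."""
    records = []
    for prompt in seed_prompts:
        records.append({
            "instruction": prompt,
            "response": teacher_answer(prompt),
        })
    return records

dataset = build_distillation_dataset([
    "Explain what a competitive moat is.",
    "Summarize RLHF in one sentence.",
])
```

The expensive part of the original RLHF pipeline was paying humans to produce the ratings; distillation sidesteps that by treating the already-aligned model’s outputs as free labels, which is why the copier’s cost can be a few hundred dollars rather than hundreds of millions.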
Hal Weitzman: Well, unfortunately that’s all we have time for on this episode of the Big Question. But I’m sure we’ll come back to this topic a lot more in future episodes. My thanks to our panel: Stefan Hepp, Max Rumpf, and Michelle Hoffmann. For more research, analysis, and commentary, visit us online at chicagobooth.edu/review. And join us again next time for another Big Question. Goodbye.