The first AI enabler is the decades-long exponential growth in the speed of computers, usually known as Moore’s law. It’s hard to convey intuitively just how fast computers have gotten. The cliché used to be that the Apollo astronauts landed on the moon with less computing power than a pocket calculator. But this no longer resonates, because . . . what’s a pocket calculator? So we’ll try a car analogy instead. In 1951, one of the fastest computers was the UNIVAC, which performed 2,000 calculations per second, while one of the fastest cars was the Alfa Romeo 6C, which traveled 110 miles per hour. Both cars and computers have improved since 1951—but if cars had improved at the same rate as computers, a modern Alfa Romeo would travel at 8 million times the speed of light.
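For readers who want to check that arithmetic, here is a minimal back-of-envelope sketch. The modern comparison point is our own assumption, not a figure from the excerpt: we take a late-2010s supercomputer at roughly 10^17 operations per second (about 100 petaflops) and scale the 1951 Alfa Romeo by the same factor.

```python
# Back-of-envelope check of the car-versus-computer analogy.
# Assumption (ours, for illustration): a modern supercomputer runs at
# roughly 1e17 operations per second; the UNIVAC figure is from the text.

univac_ops_per_sec = 2_000        # UNIVAC, 1951
modern_ops_per_sec = 1e17         # assumed modern machine (~100 petaflops)

speedup = modern_ops_per_sec / univac_ops_per_sec   # ~5e13-fold improvement

alfa_mph_1951 = 110               # Alfa Romeo 6C top speed, 1951
scaled_car_mph = alfa_mph_1951 * speedup

speed_of_light_mph = 670_616_629  # speed of light in miles per hour
multiples_of_c = scaled_car_mph / speed_of_light_mph

print(f"Speedup factor: {speedup:.1e}")
print(f"Scaled car speed: {scaled_car_mph:.1e} mph")
print(f"About {multiples_of_c / 1e6:.0f} million times the speed of light")
```

The exact multiple depends entirely on which "fastest computer" you pick for the modern era, but any reasonable choice lands in the same absurd neighborhood of several million times the speed of light.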
The second AI enabler is the new Moore’s law: the explosive growth in the amount of data available, as all of humanity’s information has become digitized. The Library of Congress consumes 10 terabytes of storage, but the Big Four tech firms—Google, Apple, Facebook, and Amazon—collected about 120,000 times as much data as this in 2013 alone. And that’s a lifetime ago in internet years. The pace of data accumulation is accelerating faster than an Apollo rocket; in 2017, more than 300 hours of video were uploaded to YouTube every minute, and more than 100 million images were posted to Instagram every day. More data means smarter algorithms.
The third AI enabler is cloud computing. This trend is nearly invisible to consumers, but it’s had an enormous democratizing effect on AI. To illustrate this, we’ll draw an analogy here between data and oil. Imagine if all companies of the early 20th century had owned some oil, but they had to build the infrastructure to extract, transport, and refine that oil on their own. Any company with a new idea for making good use of its oil would have faced enormous fixed costs just to get started; as a result, most of the oil would have sat in the ground. Well, the same logic holds for data, the oil of the 21st century. Most hobbyists or small companies would face prohibitive costs if they had to buy all the gear and expertise needed to build an AI system from their data. But the cloud-computing resources provided by outfits such as Microsoft Azure, IBM, and Amazon Web Services have turned that fixed cost into a variable cost, radically changing the economic calculus for large-scale data storage and analysis. Today, anyone who wants to make use of their “oil” can now do so cheaply, by renting someone else’s infrastructure.
When you put these enablers together—faster chips, massive data sets, cloud computing, and, above all, good ideas for putting them to work—you get a supernova-like explosion in both the demand for AI and the capacity to use it to solve real problems.
AI anxieties
We’ve told you how excited our students are about AI, and how the world’s largest firms are rushing to embrace it. But we’d be lying if we said that everyone was so bullish about these new technologies. In fact, many people are anxious, whether about jobs, data privacy, wealth concentration, or Russians with fake-news Twitter-bots. Some people—most famously Elon Musk, the tech entrepreneur behind Tesla and SpaceX—paint an even scarier picture: one where robots become self-aware, decide they don’t like being ruled by people, and start ruling us with a silicon fist.
Let’s talk about Musk’s worry first; his views have gotten a lot of attention, presumably because people take notice when a member of the billionaire disrupter class talks about AI. Musk has claimed that in developing AI technology, humanity is “summoning a demon,” and that smart machines are “our biggest existential threat” as a species.
You can decide for yourself whether you think these worries are credible. We want to warn you up front, however, that it’s very easy to fall into a trap that cognitive scientists call the “availability heuristic”: the mental shortcut in which people evaluate the plausibility of a claim by relying on whatever immediate examples happen to pop into their minds. In the case of AI, those examples are mostly from science fiction, and they’re mostly evil—from the Terminator to the Borg to HAL 9000. We think that these sci-fi examples have a strong anchoring effect that makes many people view the “evil AI” narrative less skeptically than they should. After all, just because we can dream it and make a film about it doesn’t mean we can build it. Nobody today has any idea how to create a robot with general intelligence, in the manner of a human or a Terminator. Maybe your remote descendants will figure it out; maybe they’ll even program their creation to terrorize the remote descendants of Elon Musk. But that will be their choice and their problem, because no option on the table today even remotely foreordains such a possibility. Now, and for the foreseeable future, “smart” machines are smart only in their specific domains:
- Alexa can read you a recipe for spaghetti Bolognese, but she can’t chop the onions, and she certainly can’t turn on you with a kitchen knife.
- An autonomous car can drive you to the soccer field, but it can’t even referee the match, much less decide on its own to tie you to the goalposts and kick the ball at your sensitive bits.
Moreover, consider the opportunity cost of worrying that we’ll soon be conquered by self-aware robots. To focus on this possibility now is analogous to the de Havilland Aircraft Company, having flown the first commercial jetliner in 1952, worrying about the implications of warp-speed travel to distant galaxies. Maybe one day, but right now there are far more important things to worry about—such as, to press the jetliner analogy a little further, setting smart policy for all those planes in the air today.
This issue of policy brings us to a whole other set of anxieties about AI, much more plausible and immediate. Will AI create a jobless world? Will machines make important decisions about your life, with zero accountability? Will the people who own the smartest robots end up owning the future?
These questions are deeply important, and we hear them discussed all the time—at tech conferences, in the pages of the world’s major newspapers, and over lunch among our colleagues. We can’t tell you the answers to these questions. Like our students, we are ultimately optimistic about the future of AI. But we’re not labor economists, policy experts, or soothsayers.
We can tell you, however, that we’ve encountered a common set of narratives that people use to frame this subject, and we find them all incomplete. These narratives emphasize the wealth and power of the big tech firms, but they overlook the incredible democratization and diffusion of AI that’s already happening. They highlight the dangers of machines making important decisions using biased data, but they fail to acknowledge the biases or outright malice in human decision-making that we’ve been living with forever. Above all, they focus intensely on what machines may take away, but they lose sight of what we’ll get in return: different and better jobs, new conveniences, freedom from drudgery, safer workplaces, better health care, fewer language barriers, new tools for learning and decision-making that will help us all be smarter, better people.
Take the issue of jobs. In America, jobless claims kept hitting new lows from 2010 through 2017, even as AI and automation gained steam as economic forces. The pace of robotic automation has been even more relentless in China, yet wages there have been soaring for years. That doesn’t mean AI hasn’t threatened individual people’s jobs. It has, and it will continue to do so, just as the power loom threatened the jobs of weavers, and just as the car threatened the jobs of buggy-whip makers. New technologies always change the mix of labor needed in the economy, putting downward pressure on wages in some areas and upward pressure in others. AI will be no different, and we strongly support job-training and social-welfare programs to provide meaningful help for those displaced by technology. A universal basic income might even be the answer here, as many Silicon Valley bosses seem to think; we don’t claim to know. But arguments that AI will create a jobless future are, so far, completely unsupported by actual evidence.
Then there’s the issue of market dominance. Amazon, Google, Facebook, and Apple are enormous companies with tremendous power. It is critical that we be vigilant in the face of that power, so that it isn’t used to stifle competition or erode democratic norms. But don’t forget that these companies are successful because they have built products and services that people love. And they’ll only continue to be successful if they keep innovating, which isn’t easy for large organizations. Besides, we’ve read a lot of predictions that the big tech firms of today will remain dominant forever, and we find that these predictions usually don’t even explain the past, much less the future. Remember when Dell and Microsoft were dominant in computing? Or when Nokia and Motorola were dominant in cell phones—so dominant that it was hard to imagine otherwise? Remember when every lawyer had a BlackBerry, when every band was on Myspace, or when every server was from Sun Microsystems? Remember the dominance of AOL, Blockbuster Video, Yahoo, Kodak, or the Sony Walkman? Companies come and companies go, but time marches on, and the gadgets just keep getting cooler.
We take a practical outlook on the emergence of AI: it is here today, and more of it is coming tomorrow, whether any of us like it or not. These technologies will bring immense benefits, but they will also, inevitably, reflect our weak spots as a civilization. As a result, there will be dangers to watch out for, whether to privacy, to equality, to existing institutions, or to something nobody’s even thought of yet. We must meet these dangers with smart policy—and if we hope to craft smart policy in a world of “hot takes” and 280 characters, it is essential that we reach a point as a society where we can discuss these issues in a balanced way, one that reflects both their importance and their complexity.
Nicholas Polson is the Robert Law Jr. Professor of Econometrics and Statistics at Chicago Booth. James Scott is an associate professor of statistics at the University of Texas at Austin. This is an edited excerpt from their book, AIQ: How People and Machines Are Smarter Together, reprinted with permission from St. Martin's Press.