This article was written by a human being who click-clacked on a keyboard until she finished a draft and sent it to an editor. But more and more, computers are taking over. In fact, the Associated Press has used “automation technology” to cover college sports since 2015.

The idea isn’t new—humans have obsessed over artificial intelligence (AI) since at least the 18th century, when the “Mechanical Turk” hoax led many to believe that a machine could play chess against a person and win. About 250 years later, a machine can play chess against a person and win—every time.

"In the book, there’s a chapter on Florence Nightingale, who is, believe it or not, the first person to record health care data. It was her idea to collect data and look for patterns."

AI is more powerful than ever, owing to the vast data that humans record on a daily basis. But the core concepts are based on mathematics that hasn’t changed in tens, or in some cases hundreds, of years, according to Nick Polson, Professor of Econometrics and Statistics at the University of Chicago Booth School of Business, and James Scott, of the University of Texas at Austin. They wrote a book to tell us how it works. In AIQ: How People and Machines Are Smarter Together (St. Martin’s Press), Polson and Scott deliver a decidedly human account of the stories behind the math that supports all our favorite algorithms. In the interview below, Polson discusses the origins and mechanisms of artificial intelligence—and what it means for humans.

Artificial intelligence is about big data these days—but it wasn’t always that way.

Yes, it used to be about creating rules. In the 1950s, 1960s, and 1970s, people wrote rules for computers to follow. Now computers process data and learn the rules from it.

What allowed that to change?

In the book, there’s a chapter on Florence Nightingale, who is, believe it or not, the first person to record health care data. It was her idea to collect data and look for patterns. She used that to improve everybody’s health. She used to give talks at the Royal Statistical Society and became the first woman member in 1858.

So it’s about data, actually having it, and it’s about computational speed. When I began, data sets weren’t that big, and they couldn’t be processed that fast. Now, for example, YouTube—it’s one of the world’s greatest engineering feats. How can it manipulate so many videos so quickly? More data, and better hardware.

How do you define deep learning?

That’s what I work on—it’s a hierarchy of rules that are ‘learned’ from a data set. You give a machine a training data set, like 100,000 labeled images, and you have it learn the rules ahead of time. Then you apply it.

It’s important to note, computers learn ‘that’ something is so, not ‘how’ something is so.
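To make that “learn the rules ahead of time, then apply them” loop concrete, here is a minimal sketch in Python: a tiny network fitted to a toy labeled data set and then applied to points it has never seen. The data, layer sizes, and numbers are invented for illustration and are not drawn from the book.

```python
import numpy as np

# A minimal sketch of "learn first, then apply," with a toy two-feature
# dataset standing in for the 100,000 images. Everything here is illustrative.
rng = np.random.default_rng(0)

# Step 1: training data the machine learns from (points labeled by which
# side of a curved boundary they fall on).
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] > 0).astype(float)

# Step 2: a tiny "hierarchy of rules" -- one hidden layer of learned features
# feeding a final decision, fit by plain gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # first layer of learned rules
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # final probability of class 1
    return h, p.ravel()

for _ in range(2000):
    h, p = forward(X)
    err = p - y                              # how wrong the current rules are
    # Backpropagate the error to nudge every weight a little.
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    for param, grad in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
        param -= 1.0 * grad

# Step 3: apply the learned rules to data the machine has never seen.
new_points = np.array([[0.9, 0.5], [0.1, -0.8]])
print(forward(new_points)[1].round(2))
```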

Is there a limit to deep learning?

It seems not, it seems not. It’s how machines learn to play chess. It’s how they learn to process images, to write. Even Game of Thrones—a fan used a deep learning algorithm to create “George A. I. Martin,” or the machine version of George R. R. Martin (the novelist whose books inspired the famous HBO TV series), and had it write six chapters of the next book. That was based on nothing more than words and patterns in the previous books.

Then there’s Spotify, which uses deep learning to figure out which songs you like. It’s rather good. After 20 minutes, it knows what you like, and to be honest it’s a bit too good.

What do you mean, “too good”?

I call it ‘infinite content.’ Machines can create so much for us, right? All the Netflix shows, the Spotify recommendations, the Facebook newsfeed. There’s so much content out there, and people are spending too much time on it. It changes the mind, and researchers say that you can see the effects of dementia from it. In China, there’s a (video) game that is so addictive that its maker, the tech company Tencent, decided to limit the amount of time that children can play it.

On the other hand, machines like information; it’s their lifeblood.

It’s how they get better, right?

In class, I showed a video of a robot doing a backflip. A student said, “Well, there’s a human, isn’t there? With joysticks, doing that?” I said no, it was an algorithm. It’s a bit like playing chess—there’s a huge number of combinations of velocity and space, and you have to get that exact correct combination. You do it a billion times, you fail a billion times, and when you’ve done it a billion and one times, you have that little matrix to reproduce it.
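Here is a rough sketch of that try-fail-keep-what-works loop in Python, with a made-up scoring function standing in for the backflip; real robots use far more sophisticated reinforcement learning, and every name and number below is illustrative.

```python
import numpy as np

# Trial and error, sketched: try a combination, score how badly it fails,
# keep whatever fails less, and repeat many times.
rng = np.random.default_rng(1)

def failure(matrix):
    """How far this combination of velocities is from a successful flip (hypothetical)."""
    target = np.array([[0.8, -0.3], [0.1, 0.6]])   # invented "correct" combination
    return float(np.sum((matrix - target) ** 2))

best = rng.normal(size=(2, 2))        # start with a random guess
best_score = failure(best)

for attempt in range(100_000):        # many cheap failures, not one perfect try
    candidate = best + rng.normal(scale=0.05, size=(2, 2))
    score = failure(candidate)
    if score < best_score:            # keep only the attempts that fail less
        best, best_score = candidate, score

print("remaining error:", round(best_score, 6))
print("the little matrix that reproduces it:\n", best.round(3))
```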

Humans, we tend to get it wrong and never learn, right?

But humans are collecting the data and writing the algorithms. What’s the upshot of that?

We talk about that in the book—that ultimately, the algorithm is only as good as the data you give it. If the input is biased, the output is biased too.
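A small sketch of how that plays out: fit an ordinary regression to hypothetical historical records that penalize one group, and the learned model reproduces the penalty for new, equally qualified applicants. The groups, features, and coefficients here are invented for illustration.

```python
import numpy as np

# "Biased input, biased output," sketched with invented hiring records.
rng = np.random.default_rng(2)
n = 2000

experience = rng.uniform(0, 10, n)      # a genuinely relevant feature
group = rng.integers(0, 2, n)           # 0 or 1, irrelevant to ability

# Historical scores carry a built-in penalty for group 1.
score = 2.0 * experience - 3.0 * group + rng.normal(0, 1, n)

# Fit an ordinary least-squares model on the biased records.
X = np.column_stack([np.ones(n), experience, group])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

# Two equally experienced applicants, one from each group.
applicants = np.array([[1.0, 5.0, 0.0],
                       [1.0, 5.0, 1.0]])
print("predicted scores:", (applicants @ coef).round(2))
# The model faithfully learns the historical penalty: biased in, biased out.
```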

It reminds me of the 10,000 Hour Rule—that anyone can become world-class with 10,000 hours of deliberate practice.

Yes, although machines go far beyond 10,000 hours. For example, in radiology: 23 top radiologists were asked to decide whether certain images contained cancer. Their answers were put up against an algorithm, and the algorithm beat them with ease. Humans are doing 10,000 hours; computers are doing 10 billion hours.

There’s no way we can outpace that.

No, unfortunately not. Sorry.

What are machines not good at?

They’re not good at drinking cups of coffee, right? Picking up a cup, tilting hot liquid.

--By Chelsea Vail