You don’t need to know how to build an algorithm to understand how it works, according to professor Sendhil Mullainathan.
- July 15, 2019
- Technology
As a business professional, how much do you need to know about artificial intelligence?
About as much as you know about bikes, according to Sendhil Mullainathan, Roman Family University Professor of Computation and Behavioral Science. You probably don’t know how to build or repair a bike, but you know the use cases. You know what makes them appealing and where they’re most practical. In other words, you have a functional understanding.
Mullainathan, who teaches a course on artificial intelligence at Booth, led a workshop at Management Conference 2019 that provided a functional understanding of A.I. Below are four of the top insights he shared.
If you read a movie review, you know instantly whether it’s positive or negative based on words like dazzling, cool, and gripping, or cliché, slow, and awful. If you built an algorithm to do this, you might input these types of words and call it a day.
But that wouldn’t work, Mullainathan said, because algorithms that rely on our intuition are wildly inaccurate. In one experiment, for example, an algorithm couldn’t even evaluate 82 percent of its movie reviews because they didn’t include any of the algorithm’s limited pool of keywords. The problem, he said, is that we don’t have as much knowledge as we think we do.
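To see why coverage is the weak point of the intuition-driven approach, here is a minimal sketch of a hand-built keyword classifier. The word lists and test sentences are invented for illustration; they are not from the experiment Mullainathan cites.

```python
# A minimal sketch of the intuition-driven approach: classify a review by
# hand-picked keywords. Word lists and test sentences are hypothetical.
import re

POSITIVE = {"dazzling", "cool", "gripping"}
NEGATIVE = {"cliche", "slow", "awful"}

def keyword_classify(review: str) -> str:
    words = set(re.findall(r"[a-z]+", review.lower()))  # crude tokenization
    pos_hits = len(words & POSITIVE)
    neg_hits = len(words & NEGATIVE)
    if pos_hits == 0 and neg_hits == 0:
        return "cannot evaluate"  # the failure mode: no keywords present
    return "positive" if pos_hits >= neg_hits else "negative"

print(keyword_classify("A dazzling, gripping thriller."))  # positive
print(keyword_classify("The plot was fine, I suppose."))   # cannot evaluate
```

Any review that happens to use none of the hand-picked words simply can’t be scored, which is how an intuition-built list ends up unable to evaluate the bulk of real reviews.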
“Many of us think we know why we do what we do or how we do what we do, especially tasks that are effortless. How do I decide someone is nice? You think you know, but you don’t,” he said. “Part of the lesson A.I. teaches us is about ourselves—that the big algorithm in our heads is more inscrutable to us than you can possibly imagine.”
Relying on personal experience to figure out which words make for a good or bad movie review yielded poor results for A.I. What did work? Taking a data-driven approach. “Let’s collect data on a bunch of movie reviews and just look at the data,” Mullainathan said.
While this idea might seem obvious in 2019, it wasn’t so clear back in the early ’90s. It even seemed dumb to a lot of people, Mullainathan said, because it treated simple tasks as though they were scientific endeavors. But looking at the data revealed which words were actually the most common in good and bad reviews—and they weren’t the words we’d expect.
For example, one of the most diagnostic words for a positive movie review is “still,” a word reviewers use to offset a small quibble when they’re pleased overall. Meanwhile, the exclamation point, something you might think would express excitement, was highly diagnostic for negative reviews.
“Isn’t it funny that that’s a surprise to us?” Mullainathan said. “It means we’ve read millions of reviews and used exclamation marks to judge if they were negative, yet we don’t have that insight ourselves.”
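One way to make the data-driven idea concrete is to count how often each token appears in labeled positive versus negative reviews and rank tokens by their log-odds. The sketch below uses a made-up four-review corpus; it is an illustration of the general technique, not the method or data from the study Mullainathan describes.

```python
# Rank tokens by how diagnostic they are of positive vs. negative reviews,
# using log-odds with add-one smoothing. The corpus is invented.
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+|!", text.lower())  # keep "!" as a token

def diagnostic_tokens(reviews: list[tuple[str, str]]) -> list[tuple[str, float]]:
    pos, neg = Counter(), Counter()
    for text, label in reviews:
        (pos if label == "positive" else neg).update(tokenize(text))
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    scores = {}
    for w in vocab:
        # Add-one smoothing so unseen counts don't blow up the ratio.
        p = (pos[w] + 1) / (n_pos + len(vocab))
        q = (neg[w] + 1) / (n_neg + len(vocab))
        scores[w] = math.log(p / q)  # > 0 leans positive, < 0 leans negative
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

reviews = [
    ("Uneven pacing, but still a rewarding film", "positive"),
    ("Still worth seeing despite the flaws", "positive"),
    ("What a mess! Avoid!", "negative"),
    ("Terrible! I wanted to leave!", "negative"),
]
for token, score in diagnostic_tokens(reviews):
    print(f"{token:>10}  {score:+.2f}")
```

On this toy corpus, “still” surfaces as positive-leaning and “!” as negative-leaning, echoing the findings above, though only because the toy data was constructed that way; at real scale, the rankings come entirely from the data rather than anyone’s intuition.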
In fall 2018, Amazon made headlines for an A.I. recruiting tool that disproportionately rejected applications from women. Many news stories focused on the “bias” of algorithms. But, as Mullainathan pointed out, an algorithm doesn’t have an intention or motive. “It isn’t able to look for anything except the specific variable that you would have looked for,” he said.
Amazon built its algorithm based on its own dataset of resumes that the company had evaluated over the years. In other words, the algorithm was trained to select the same candidates that the hiring managers had selected. Because those past hiring decisions skewed heavily toward men, the algorithm learned to replicate that pattern.
“I often think of the variable as the key thing that lots of people don’t kick the tires on,” Mullainathan said. “I don’t mean ask once, what’s the variable? I mean keep digging. If someone says, ‘I’ve got an algorithm that lets you find good employees,’ ask, ‘What’s good? Is that what I think good is? What are the ways in which I think someone might be good, but that don’t show up in your metric?’ Everything in that difference is something the algorithm could be predicting.”
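Mullainathan’s point about the target variable can be made concrete with a toy model: if the label is “whom past managers hired” rather than “who turned out to be good,” the model faithfully learns the managers’ preferences, proxies and all. The feature names and data below are invented for illustration.

```python
# Toy illustration of the target-variable problem: training on past hiring
# decisions teaches the model those decisions, not "goodness." Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # hypothetical: actual ability
proxy = rng.integers(0, 2, size=n)    # hypothetical: unrelated to ability

# Past managers favored the proxy, not just skill:
hired = (skill + 1.5 * proxy + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))
# The proxy gets heavy weight: the model predicts "hired," not "good."
```

The model is doing exactly what it was asked to do; the question to keep asking is whether the label it was asked to predict is the thing you actually care about.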
A.I.’s greatest potential isn’t in replacing humans, Mullainathan said. Instead, its brightest future lies in helping us change both business and the world for the better.
A few years ago, a team at MIT developed an A.I. program that applies an algorithm to video footage of infants. The algorithm amplifies subtle color variations in the skin, allowing doctors to monitor babies’ vital signs without touching them—something that can be stressful for premature infants.
“Algorithms don’t see like us, and that’s a good thing,” Mullainathan said. “It means they have the capacity to see things that we cannot possibly see. There is no human doctor who can look at a baby and tell the pulse just by looking at the skin. But here, we can.”
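The work described here is likely MIT’s Eulerian video magnification research, which amplifies subtle color changes in the video itself. The sketch below is not that method, just a simplified, synthetic-data illustration of the underlying signal: the average color of a skin region, sampled per frame, carries the pulse frequency.

```python
# A toy, not MIT's method: synthetic "per-frame average skin color" with a
# faint periodic component at the heart rate, buried in noise. Recovering
# the pulse means finding the dominant frequency among plausible heart rates.
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)   # 20 seconds of "video" at 30 fps
true_bpm = 140                  # a plausible infant heart rate

# Mean green-channel intensity of a skin region, per frame (synthetic):
rng = np.random.default_rng(1)
signal = (0.05 * np.sin(2 * np.pi * (true_bpm / 60) * t)
          + rng.normal(scale=0.1, size=t.size))

# Dominant frequency within 60-240 beats per minute:
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
band = (freqs > 1.0) & (freqs < 4.0)
estimated_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse: {estimated_bpm:.0f} bpm")  # ~140
```

The variation being measured is far too faint for a human eye to notice, which is the point: the algorithm isn’t imitating what a doctor does, it’s doing something a doctor can’t.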
We should take advantage of the ability of algorithms to do things that we can’t, Mullainathan said. But he believes this opportunity has been largely neglected.
“We tend to look for A.I. applications in areas where humans already do things well,” he said. “Instead of trying to replicate human intelligence, look for things humans couldn’t even imagine doing.”