The Roman Family University Professor of Computation and Behavioral Science at Chicago Booth explains why AI’s impact on equity will ultimately be a social question, not a technological one.
- August 07, 2019
- CBR - Economics
There’s a lot of press around algorithms being promoters of inequity or of bias. But we know from the behavioral-science literature that human beings are quite biased. We don’t just look at objective data; we also add our own internal biases. Study after study has demonstrated that when viewing a man and a woman doing a task at the same level of performance, people will make inferences about the woman they don’t make about the man. The mind just adds its own bias. The algorithms, while they may have other problems, tend not to add their own biases. They tend to reflect whatever is in the data.
The places where people are most worried about bias are actually where algorithms have the greatest potential to reduce bias. Take hiring—an issue where we’re worried that the underlying data may be biased, so the algorithm may be biased. And that’s fair. But hiring is also the place where humans add a tremendous amount of bias in terms of which résumés to look at, which person to hire conditional on the résumé, etc. And that bias is in addition to whatever is in the data.
The same is true of a setting such as criminal justice. It’s reasonable that people worry algorithms in the criminal-justice system might add bias, and we should take that concern seriously and find ways to address it. But ironically, the places where we worry algorithms will be biased are precisely the places where they have the most potential to remove the biases of humans.
People want to anthropomorphize technologies—especially AI technologies, I think in part because the term includes the word intelligence. People imagine these tools will have their own intelligence, or almost a humanity of their own. But ultimately, they’re just tools. So whether AI promotes equity in any given context is simply going to be a consequence of the intentions of the people building these algorithms, as well as their knowledge. The science is moving forward, and that means we can make the builders of these tools knowledgeable enough about bias and how to fix it that in 10 years what we’ll be left with is intention. It’s not going to be a technological problem; it’s going to be a sociological problem.
Sendhil Mullainathan is the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth.