As life grows increasingly digital, Americans rely on algorithms in their daily decision-making—to pick their music on Pandora, guide their program choices on Netflix, and even suggest people to date on Tinder. But these computerized step-by-step processes also determine how self-driving cars navigate, who qualifies for an insurance policy or a mortgage, and when and even if someone will get parole. So where should we draw the line on automated decision-making?
For most people, it’s when the choices turn “morally relevant,” according to Chicago Booth’s Berkeley J. Dietvorst and Daniel Bartels. Decisions that “entail potential harm and/or the limitation of one or more persons’ resources, freedoms, or rights” often require trade-offs, which people believe humans, not algorithms, are better equipped to make, their findings suggest.
Most people expect algorithms to make recommendations by maximizing some specific outcome, and many are fine with that in amoral domains, according to the researchers. For example, more than 1 billion people trust Google Maps to get them where they’re going. But when the issue of moral relevance intrudes, people start to object, the researchers’ experiments demonstrate.
“I suspect people feel they don’t want to toss out human discretion in matters of right and wrong, in part because they likely feel they ‘know it when they see it,’” Bartels says. Moral questions are complicated, he says, especially because one person’s morals won’t always align with another’s.
To test this theory, Dietvorst and Bartels presented 700 study participants with a scenario in which a health-insurance company was considering using an algorithm to make certain decisions about customers’ plans. After reading which choices the company would outsource to the algorithm, participants were asked whether they, as customers of the company, would switch to a different insurer because of it. The researchers find a strong correlation between how morally relevant participants judged the choices to be and their stated intention to switch insurers.
In another study, in which participants chose whether they’d want a human or an algorithm to make various decisions, the researchers learn that people dislike algorithms in moral situations because they expect algorithms to focus only on outcomes, not on the moral trade-offs involved. People expect human decision makers, by contrast, to weigh values such as fairness and honesty and to make a judgment for each specific case.