Why We Prefer Human Judgment to Algorithms
People hold algorithms to a higher standard than they do humans.
Algorithms are now involved in all kinds of decisions, including hiring, lending, university admissions, and criminal justice. But how concerned should we be about the possibility that algorithms may discriminate against particular groups, and how can we know whether illegal discrimination has occurred?
High-profile incidents, such as a May 2016 ProPublica report that software used to predict future criminals appeared to be biased against African Americans, have raised concerns that employing algorithms to make consequential decisions about people’s lives will either replicate historical patterns of human discrimination or establish new biases against legally protected groups, such as racial minorities and women. But Cornell’s Jon Kleinberg, University of Chicago Harris School of Public Policy’s Jens Ludwig, Chicago Booth’s Sendhil Mullainathan, and Harvard’s Cass R. Sunstein argue that under the right circumstances, algorithms can be more transparent than human decision-making, and can even be used to develop a more equitable society.
Human bias, whether explicit or implicit, frequently affects important decisions. In a 2004 study, for example, Booth’s Marianne Bertrand and Mullainathan sent employers résumés that were identical in all respects except the name: half carried a name that sounded white, and the other half a name that sounded African American. Résumés with white-sounding names received 50 percent more callbacks for an interview.
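To make that gap concrete, the sketch below compares callback rates between two résumé groups and runs a standard two-proportion z-test. The counts are hypothetical placeholders chosen only to reproduce a roughly 50 percent gap; they are not the study’s actual data.

from statistics import NormalDist

# Hypothetical callback counts -- NOT the study's actual data -- chosen
# only to illustrate a roughly 50 percent gap in callback rates.
white_callbacks, white_resumes = 96, 1000   # ~9.6% callback rate
black_callbacks, black_resumes = 64, 1000   # ~6.4% callback rate

p_white = white_callbacks / white_resumes
p_black = black_callbacks / black_resumes
print(f"callback rates: {p_white:.1%} vs {p_black:.1%} (ratio {p_white / p_black:.2f})")

# Two-proportion z-test: is a gap this large plausibly due to chance?
pooled = (white_callbacks + black_callbacks) / (white_resumes + black_resumes)
se = (pooled * (1 - pooled) * (1 / white_resumes + 1 / black_resumes)) ** 0.5
z = (p_white - p_black) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")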
Such bias can be hard to root out because when humans make decisions such as whom to hire, they frequently rely on automatic and opaque thought processes. “Much of the time, human cognition is the ultimate ‘black box,’ even to ourselves,” write Kleinberg, Ludwig, Mullainathan, and Sunstein.
Algorithms, for all of their promise, can also lead to biased outcomes. The possibility of discrimination is complicated by algorithms’ mathematical complexity: the computation that turns input (data on, say, a job applicant’s background characteristics) into output (a prediction, such as the likelihood that the applicant will turn out to be a productive worker on the job) can be difficult to trace, which has led to the characterization of algorithms as inscrutable black boxes.
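To illustrate what that input-to-output computation looks like in the simplest case, here is a minimal sketch; the features, outcome, and choice of a scikit-learn logistic regression are illustrative assumptions, not anything drawn from the research.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Entirely synthetic stand-ins for applicant background characteristics
# (say, years of experience and a test score -- both hypothetical).
X = rng.normal(size=(500, 2))
# Synthetic outcome ("turned out to be a productive worker"), loosely
# tied to the two features plus noise.
y = (X @ np.array([1.0, 0.5]) + rng.normal(scale=1.0, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The fitted mapping from inputs to output is fully inspectable: the
# coefficients are the decision rule, which a human gut call never exposes.
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Predicted probability that a new applicant works out on the job.
applicant = np.array([[0.8, -0.2]])
print("predicted probability:", model.predict_proba(applicant)[0, 1])

In more complex models, two coefficients give way to millions of parameters, which is where the black-box characterization comes from; but as the researchers note, other elements, such as the inputs and the target outcome, remain explicit regardless of model complexity.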
But with the right regulations in place, algorithms could be more transparent than human cognition, the researchers argue, and that transparency could make it easier to detect and prevent discrimination.
In contrast to human thought processes, certain elements of algorithmic decision-making—such as the inputs used to make predictions and the outcomes algorithms are designed to estimate—are inherently explicit, though not always publicly visible. The researchers suggest that regulators could require companies to store and make available for legal purposes the choice of outcome, candidate predictors, and training sample that an algorithm uses. If the algorithm were subsequently suspected of producing discriminatory results—intentionally or inadvertently—it would be possible for authorities to review this information and achieve a level of insight into the algorithm’s function that’s unobtainable in human decision-making.
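A sketch of what retaining those artifacts might look like appears below. The schema, names, and file path are hypothetical, since the researchers propose what to store rather than a format, and the disparity calculation is just one simple first-pass test an auditor could run.

from dataclasses import dataclass

# Hypothetical schema for the three artifacts the researchers suggest
# regulators could require firms to retain; the paper prescribes no format.
@dataclass
class AuditRecord:
    outcome: str                     # what the algorithm was built to predict
    candidate_predictors: list[str]  # every input considered, used or not
    training_sample_ref: str         # pointer to the archived training data

record = AuditRecord(
    outcome="rated a productive worker after 12 months",
    candidate_predictors=["years_experience", "test_score", "referral"],
    training_sample_ref="archive/hiring/2024/training.csv",  # hypothetical path
)

def selection_rates(decisions, groups):
    """Share of positive decisions per group -- a first-pass disparity
    check an auditor could run against the stored sample and model output."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

# Toy illustration with made-up decisions and group labels.
print(record.outcome)
print(selection_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"]))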
Kleinberg, Ludwig, Mullainathan, and Sunstein acknowledge that there will be pushback: maintaining such large data sets for examination can be expensive, and companies will be reluctant to reveal proprietary and competitive information about how their algorithms work. But mandating this transparency may be necessary for people to trust algorithms to improve on human decision-making.
If such regulations can be implemented, the researchers argue, then rather than opening the door to further discrimination, algorithms can help prevent it by making it easier to detect. That could in turn make it possible to apply the power of algorithms to more decisions in which humans’ poor predictive abilities tend to work against certain populations. If well-designed algorithms can make more-accurate predictions than humans can, Kleinberg and his coauthors write, the benefits will disproportionately accrue to groups that historically have been harmed the most by discrimination.