Algorithms are helping judges assess recidivism risk, but not without some controversy.
- March 01, 2021
In sentencing defendants, judges often factor in how likely a person is to commit another crime. As technology advances, this assessment sometimes involves algorithms.
Criminal-justice nonprofit Recidiviz’s Julia Dressel and University of California at Berkeley’s Hany Farid studied COMPAS, an algorithmic tool that predicts defendants’ risk of recidivism. Using a database of defendants from Broward County, Florida, the researchers find that COMPAS achieved roughly 65 percent accuracy in its predictions of who would commit another crime. However, a set of human predictors with no criminal-justice expertise was almost as accurate: participants the researchers recruited through Amazon Mechanical Turk averaged 62 percent accuracy. Dressel and Farid further find that an algorithm using only two variables—age and number of past convictions—performed slightly better than COMPAS.
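The two-variable model Dressel and Farid describe is, in essence, a simple classifier. The sketch below, in Python with scikit-learn, shows what such a model looks like in practice. It is an illustration only: the data are synthetic stand-ins for the Broward County records, and the coefficients in the toy data generator are assumptions, not the researchers’ estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Two predictors only: age and number of past convictions (synthetic values).
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2.0, size=n)

# Toy generative assumption: younger defendants with more prior convictions
# reoffend more often. This stands in for the real Broward County outcomes.
logit = -0.5 - 0.04 * (age - 35) + 0.35 * priors
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A plain logistic regression on just these two variables.
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```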
Stanford PhD student Zhiyuan "Jerry" Lin, data scientist Jongbin Jung, and Stanford’s Sharad Goel, with University of California at Berkeley’s Jennifer Skeem, replicated the experiment by Dressel and Farid and find similar results.
However, further experimentation suggests algorithms may still have an advantage over human predictors, or at least over nonexperts. In Dressel and Farid’s experiment, human predictors received feedback after each prediction: whether they were right about the defendant they’d just evaluated, plus an update on their overall accuracy. When Lin, Jung, Goel, and Skeem withheld this instantaneous feedback, depriving participants of the chance to learn from their errors and more closely mimicking the conditions faced by real-life judges, human performance fell well below that of both COMPAS and the researchers’ own statistical model.
Parole decisions are another area where the risk of recidivism is considered, and where algorithms have been called into use. In Pennsylvania, the state parole board began using machine-learning forecasts of future criminal behavior in 2013 to inform decisions about which prisoners to release on parole. Each of these forecasts, which came from a custom-built assessment tool developed by University of Pennsylvania’s Richard Berk, was accompanied by an assessment of the prediction’s reliability, which varied from case to case. Research by Berk finds that recidivism among parolees declined significantly after the board began considering the forecasts, particularly when it came to violent crimes. In late 2019, the board ceased using the tool after analysis by the Pennsylvania Department of Corrections raised concerns about possible racial bias in the risk forecasts. Berk says those concerns reflect a misinterpretation of the results, and that the data in fact show that "the risk instrument was implemented properly such that Black and white parolees were rearrested at comparable rates."
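Pairing each forecast with a case-level reliability measure can be illustrated with an ensemble model, where agreement among the ensemble’s members serves as a rough confidence score. The sketch below uses a random forest on synthetic data; it shows the general idea, not Berk’s actual Pennsylvania instrument, and every feature and outcome here is an invented stand-in.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2_000

# Synthetic case features and outcomes, standing in for real parole records.
X = rng.normal(size=(n, 5))
y = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Forecast one new case; treat the share of trees that agree with the
# majority as a rough, case-specific reliability score.
case = X[:1]
votes = np.array([tree.predict(case)[0] for tree in forest.estimators_])
high_risk = votes.mean() > 0.5
reliability = max(votes.mean(), 1 - votes.mean())

print(f"forecast high risk: {high_risk}, tree agreement: {reliability:.2f}")
```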