March 01, 2021
Algorithms are helping judges assess recidivism risk, but not without some controversy.
In sentencing defendants, judges often factor in how likely a person is to commit another crime. And as technology advances, this assessment sometimes involves algorithms.
Criminal-justice nonprofit Recidiviz’s Julia Dressel and University of California at Berkeley’s Hany Farid studied COMPAS, an algorithmic tool that predicts defendants’ risk of recidivism. Using a database of defendants from Broward County, Florida, the researchers find that COMPAS achieved roughly 65 percent accuracy in predicting who would commit another crime. However, human predictors with no criminal-justice expertise were almost as accurate: participants the researchers recruited through Amazon Mechanical Turk averaged 62 percent accuracy. Dressel and Farid further find that a simple model using only two variables (age and number of past convictions) performed slightly better than COMPAS.
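That two-variable result is worth dwelling on: a simple linear classifier fed only age and prior convictions can match a commercial tool. The following is a minimal sketch of what such a model looks like, not Dressel and Farid’s actual code; the data-generating relationship below is entirely synthetic, invented only so the example runs end to end.

```python
# A minimal sketch, not Dressel and Farid's actual code: a two-feature
# linear classifier of the kind they report performing on par with COMPAS.
# All data here are synthetic stand-ins for real defendant records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: defendant age and number of past convictions.
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)

# Synthetic labels: a made-up relationship in which younger defendants
# with more priors reoffend more often.
p = 1 / (1 + np.exp(-(0.3 * priors - 0.05 * (age - 40))))
reoffended = rng.random(n) < p

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(
    X, reoffended, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# Overall accuracy is the headline metric in the studies above.
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

A real analysis would, of course, draw on actual defendant records rather than the invented relationship above; the point of the sketch is only how little machinery a two-variable model requires.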
Stanford PhD student Zhiyuan "Jerry" Lin, data scientist Jongbin Jung, and Stanford’s Sharad Goel, working with University of California at Berkeley’s Jennifer Skeem, replicated Dressel and Farid’s experiment and find similar results.
However, further experimentation suggests algorithms may still have an advantage over human predictors, or at least over nonexperts. In Dressel and Farid’s experiment, human predictors received feedback after each prediction: whether they were right about the defendant they’d just evaluated, plus an update on their overall accuracy. When Lin, Jung, Goel, and Skeem withheld this instantaneous feedback, depriving participants of the chance to learn from their errors and more closely mimicking the conditions real-life judges face, human performance fell well below that of both COMPAS and the researchers’ own statistical model.
Parole decisions are another area where the risk of recidivism is considered, and where algorithms have been called into use. In Pennsylvania, the state parole board began using ML-generated forecasts of future criminal behavior in 2013 to inform decisions about which prisoners to release on parole. Each of these forecasts, which came from a custom-built assessment tool developed by University of Pennsylvania’s Richard Berk, was accompanied by an assessment of the prediction’s reliability, which varied from case to case. Research by Berk finds that recidivism among parolees declined significantly after the board began considering the ML forecasts, particularly when it came to violent crimes. In late 2019, the board ceased using the tool after analysis by the Pennsylvania Department of Corrections raised concerns about possible racial bias in the risk forecasts. Berk says those concerns reflect a misinterpretation of the results, and that the data in fact show that "the risk instrument was implemented properly such that Black and white parolees were rearrested at comparable rates."
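Berk’s custom instrument isn’t public, but the pairing of a forecast with a case-by-case reliability assessment is straightforward to illustrate. Below is a minimal sketch assuming a generic random-forest classifier and entirely synthetic data; none of the features, numbers, or the agreement-based reliability measure reflect the actual Pennsylvania tool.

```python
# Illustrative sketch only: a recidivism forecast paired with a per-case
# reliability measure, in the spirit of the Pennsylvania setup described
# above. All features and labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: five anonymous features per parole candidate
# and a binary label for rearrest after release.
X_train = rng.normal(size=(1000, 5))
y_train = rng.integers(0, 2, size=1000)

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

def forecast_with_reliability(forest, x):
    """Return (forecast risk, per-case reliability).

    Risk is the fraction of trees voting "rearrest"; reliability is proxied
    by how strongly the trees agree, so it varies from case to case, echoing
    the case-by-case reliability assessments described above.
    """
    votes = np.array(
        [tree.predict(x.reshape(1, -1))[0] for tree in forest.estimators_]
    )
    risk = votes.mean()
    agreement = max(risk, 1 - risk)  # 0.5 = maximal disagreement, 1.0 = unanimous
    return risk, agreement

x_new = rng.normal(size=5)  # one hypothetical parole candidate
risk, reliability = forecast_with_reliability(forest, x_new)
print(f"forecast risk: {risk:.2f}, tree agreement: {reliability:.2f}")
```

Tree agreement is only one stand-in for reliability; Berk’s actual per-case assessments came from his custom-built tool and may be computed quite differently.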