Risk Assessment Tools Are Not A Failed 'Minority Report'

By Sarah Desmarais, Brandon Garrett and Cynthia Rudin | July 19, 2019, 5:50 PM EDT

"Minority Report," commented one judge. "I also don't go to psychics," said another. In surveys on the use of risk assessment in criminal cases, some judges have expressed discomfort with using the tools currently available. Yet few have suggested another way out of the problem of mass pretrial incarceration. Each day, our jails hold at least half a million people awaiting trial. Many are not there because they pose a risk to the community, but because they simply can't afford the high cash bail often set quickly by judges and magistrates without meaningful input from the defense.

Given the terrible harms of business as usual in our courtrooms and jails, we believe that getting risk assessment right is worth the investment in research and resources. That is why we think it is worth responding to Wednesday's op-ed in the New York Times, which argues that risk assessment tools make future violence seem more predictable than it is, even calling them a failed "Minority Report" scenario (alluding to the Philip K. Dick short story and 2002 film in which murderers are arrested before they commit their crimes). The authors rely on an underlying statement that, while signed by a number of academics whom we respect, contains inaccuracies about both the weaknesses and strengths of pretrial risk assessment tools and their use in the criminal justice system.

One of these inaccuracies is the claim that many risk assessment tools rely on arrest records, which can be racially biased and provide inadequate information, when in fact most tools — including the most widely used tool, the Public Safety Assessment, or PSA — do not. The authors also criticize nontransparent risk assessment tools, and yet the PSA is not a black box, nor are most widely used pretrial risk assessment tools. In fact, most pretrial risk assessment tools publish their items, rating guidelines and algorithms online or in their manuals, and do not rely on proprietary technology. The contents, scoring criteria and algorithm for the PSA, for example, are readily available online.

Further, many pretrial risk assessment tools, including the PSA, do disentangle risk of flight from danger to public safety. While most validation studies measure pretrial criminal activity by looking at new arrests, this is not a problem inherent in the tools but rather in how the tools are being studied. Instead of throwing out the tools, a reasonable solution would be to conduct research on their ability to predict other indicators of pretrial criminal activity.

The authors' claim that risk assessments do not accurately measure pretrial risks misses the fact that these tools estimate relative risk, not absolute risk. Uncertainty is wide when predicting crime over short time intervals, but smaller when predicting over longer intervals. The latter risks are directly related to the former, allowing tools to more confidently assess which individuals are higher risk than others. That is why risk assessment tools are evaluated by how well they can rank individuals from high to low risk. While there are technical challenges, it is extreme to claim that no remedy exists, and to insist that we make decisions without using data and statistics. In fact, solutions do exist.
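To make the ranking point concrete, here is a minimal illustrative sketch, not the PSA's actual algorithm or data, of how relative-risk tools are typically evaluated: by the area under the ROC curve (AUC), which equals the probability that a randomly chosen defendant who experienced a pretrial failure was ranked above a randomly chosen defendant who did not. All scores and outcomes below are hypothetical.

```python
def rank_auc(scores, outcomes):
    """Rank-based AUC, computed by comparing every positive/negative pair.

    scores:   risk scores assigned by a tool (higher = assessed as riskier)
    outcomes: 1 if a pretrial failure occurred, 0 otherwise
    """
    positives = [s for s, y in zip(scores, outcomes) if y == 1]
    negatives = [s for s, y in zip(scores, outcomes) if y == 0]
    if not positives or not negatives:
        raise ValueError("need at least one of each outcome")
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:          # failure ranked above non-failure: correct
                wins += 1.0
            elif p == n:       # tie: counts as half
                wins += 0.5
    return wins / (len(positives) * len(negatives))


# Hypothetical scores for six defendants and their observed outcomes.
scores = [6, 2, 5, 1, 4, 3]
outcomes = [1, 0, 1, 1, 0, 0]
print(rank_auc(scores, outcomes))
```

An AUC of 0.5 means the ranking is no better than chance; 1.0 means every defendant who failed was ranked above every defendant who did not. Evaluating tools this way measures exactly the relative ordering the text describes, without requiring well-calibrated absolute probabilities.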

The authors also criticize the use of criminal history data in pretrial risk assessment. Indeed, the PSA and other tools do consider prior convictions. However, that is exactly the kind of information that judges consider when they don't have the benefit of these tools. One would be hard-pressed to find a set of bail and sentencing guidelines that does not include criminal history as a major component in a person's pretrial release determination or sentence.

Regarding the claim of racial bias in risk assessment tools, we would point out that a defendant's race is quite evident in the courtroom, making the defendant subject to any racial biases a judge might hold. Racial and ethnic disparities are well-documented in studies of our jail population.

A judge often sets cash bail, which punishes the poor, and may allow wealthy criminal defendants to walk free. Judges and magistrates make these calls very, very quickly, often relying almost exclusively upon criminal history and charge records, without meaningful input from defense lawyers. The result fills our jails in ways that can harm public safety.

To call risk assessment fundamentally flawed suggests that we should abandon reforms and keep things the way they are. Instead, we need to give judges better information. No human being is an expert predictor. Relying on empirical data is far superior to going with one's gut, if it is the right data, carefully analyzed, and presented in such a way as to minimize bias. In fact, statistical tools can be specially designed to help reduce the biases that are — obviously — inherent in the data.

We also need investment and resources in the community, as the authors themselves say. Indeed, there is evidence[1] that judges are more comfortable releasing low-risk individuals if treatment resources are available — although many defendants may not even need treatment or other pretrial conditions. The real problem may not be risk assessment, but rather judges' deep-seated reluctance to release.

That is why the American Law Institute, in its recent revision of the Model Penal Code, prominently endorsed the use of risk assessment tools, when used correctly, to divert people to shorter prison terms or to the community. That is why we do need community-based alternatives to jail. But that is also why we, as researchers, should focus on improving risk assessment rather than lodging inaccurate claims about its contents or use.

The critics of risk assessment themselves require a dissenting opinion: a minority report. While risk assessment tools may not eliminate racial, ethnic or other biases, there is no evidence that they exacerbate them either. Risk assessment tools and the promise they hold to improve on judges' and magistrates' current decision-making processes should not be dismissed simply because they aren't yet perfect.



Sarah Desmarais is a professor of psychology and director of the Center for Family and Community Engagement at North Carolina State University. She has researched and developed criminal justice and violence risk assessment tools.

Brandon Garrett is the L. Neil Williams professor of law at Duke University School of Law. His most recent book is "End of Its Rope: How Killing the Death Penalty Can Revive Criminal Justice."

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Her lab develops machine learning models.

"Perspectives" is a regular feature written by guest authors on access to justice issues. To pitch article ideas, email expertanalysis@law360.com.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the organizations, or Portfolio Media Inc., or any of their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] https://journals.sagepub.com/doi/abs/10.1177/0093854819842589
