Pretrial Risk Assessment Is Biased And Indefensible

By Jeffrey Clayton | August 30, 2020, 8:02 PM EDT

The Conference of Chief Justices, an organization of the highest-ranking judges and justices in the nation, sits at the crux of a looming fight over the continuing use of pretrial risk assessment tools in our criminal justice system. The organization's stated policy supporting their use is perplexing and has become increasingly difficult to defend.

Evidence is mounting that these algorithms actually harm defendants, even as chief justices in more than 20 states aggressively advocate for their adoption. Counties in nearly every state already use risk assessment in some form, and at the federal level it has been implemented in nearly every district across the country.

In the early 2010s, the group adopted a policy supporting the use of algorithms designed to make decisions about the release or detention of defendants. The algorithms also functioned to determine appropriate bail and conditions of release, including supervision by pretrial service agencies or private entities.

These computational tools are now pervasive, telling police where to go, prosecutors when to divert, and judges how to sentence defendants and set bail and release conditions. They have also directed parole and probation officers on how to supervise defendants and when to grant early termination.

At the time, it was believed that the algorithms would reduce racial and other bias by making the system more objective, in addition to significantly reducing generational mass incarceration. According to the Conference of Chief Justices, the goal was to support "evidence-based risk assessment" in all bail decisions throughout the United States.

However, by the midpoint of the decade, the algorithms began to face greater scrutiny as it became apparent that they were based on factors beyond defendants' control, for which those individuals nonetheless faced severe consequences.

Former U.S. Attorney General Eric Holder was the first to raise the alarm, questioning the specific use of demographic factors that correlated with race and poverty. Holder was concerned that certain characteristics of a defendant, such as their education level, socioeconomic background, or even the neighborhood in which they were raised, might "exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society."

Some algorithms register scores for things like whether one owns or rents one's housing, whether one has a home phone, or how long one has lived at an address. Others consider prior mental health treatment, or rehabilitation for alcohol or substance use disorders. The latter is problematic because it may violate the Americans with Disabilities Act and companion state statutes.

Taken to its extreme, one possible methodology would simply involve taking a ZIP code map of an area where crime has taken place, then labeling everyone from that area as "more risky" than those residing outside it. Outrageous as it might sound, this is exactly how the algorithms work: they determine whether people in one particular area are more likely to fail to appear in court or to commit a new crime, in comparison to the baseline.
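To make the criticism concrete, here is a minimal sketch of scoring logic of the kind described above. Everything in it is invented for illustration: the factor names, weights, threshold and ZIP code rates are assumptions, and it does not reproduce any actual instrument.

```python
# Hypothetical sketch of group-data risk scoring, for illustration only.
# All factor names, weights and rates are invented; this is not any
# vendor's actual instrument.

# Historical failure-to-appear (FTA) rates by ZIP code (fabricated).
FTA_RATE_BY_ZIP = {"80201": 0.31, "80202": 0.12, "80203": 0.22}
BASELINE_FTA_RATE = 0.18  # jurisdiction-wide average (fabricated)

def risk_score(zip_code: str, owns_home: bool, has_home_phone: bool,
               years_at_address: float) -> int:
    """Accumulate points for group-level factors beyond the defendant's control."""
    score = 0
    # Neighborhood factor: anyone from a ZIP code with an above-baseline
    # FTA rate is marked "more risky" relative to the baseline.
    if FTA_RATE_BY_ZIP.get(zip_code, BASELINE_FTA_RATE) > BASELINE_FTA_RATE:
        score += 2
    # "Stability" factors of the kind critics object to.
    if not owns_home:
        score += 1
    if not has_home_phone:
        score += 1
    if years_at_address < 2:
        score += 1
    return score

# Two defendants identical in every respect except where they live:
print(risk_score("80201", owns_home=False, has_home_phone=True, years_at_address=5))  # 3
print(risk_score("80202", owns_home=False, has_home_phone=True, years_at_address=5))  # 1
```

The point of the sketch is that the "science" reduces to group membership: the two hypothetical defendants differ only in ZIP code, yet receive different scores.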

Holder's concern was just that, and it was well-founded. Operating sub rosa, the algorithms provide a technological smoke screen in lieu of objective factors: they label all people from high-crime, segregated neighborhoods as being at higher risk of failing to appear in court or committing a new crime.

A huge wave of interest in algorithms was triggered in 2016, when ProPublica published a critique of a widely used risk assessment tool, the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. The report called into question the very fairness the tool purported to deliver, and it was later discovered that the same tool was in use in Los Angeles County.

Academia took note and initiated a second wave of studies. The conclusions almost universally criticized the algorithms from many perspectives, declaring them ineffectual, biased, lacking transparency and harmful to due process rights. 

On the heels of that development, interest groups spanning the political spectrum called for transparency of the algorithms. As evidence mounted against the tools, they later went further with their demands, ultimately calling for an end to the use of pretrial risk assessment altogether. 

The Pretrial Justice Institute was among the many groups that had previously supported the algorithms. The organization had long framed its policy solutions solely through the lens of racial bias, but earlier this year, when the facts could no longer be denied, it publicly renounced its previous support of risk assessments.

Risk assessments were once heavily promoted as the singular solution to replace money bail. Mounting evidence to the contrary aside, this old mantra has been difficult for proponents to discard. 

California's system, which goes before voters in November, is among those in which the results of an algorithm drive all decision points down the line: low-risk defendants get out of jail free, medium-risk defendants get out with supervision by a local entity, and high-risk individuals are left to sit in jail pending trial. Senate Bill 10 would create a no-money bail system in the state courts, similar to the federal system, complete with risk assessment and preventive detention. If passed, the bill would not affect federal courts in California.

In light of the heavy criticism of racial bias, defenders of risk assessments have pivoted to a new point: Judges can simply ignore them at their discretion. They have also invoked the mantra that judges, not the algorithm, make the decision.

However, this position flies directly in the face of the proponents' own argumentum ad verecundiam, their appeal to the authority of science. It was the fact that the algorithms were evidence-based and scientifically validated that made them worth considering in the first place. Theoretically, it was the promise of science, not judicial discretion, that made the system unprejudiced.

If a judge may simply heed or disregard an algorithm at their discretion, then the algorithm ceases to be scientific. In other words, advocates cannot have it both ways. To admit, as the Colorado Commission on Criminal and Juvenile Justice recently did, that risk assessments should be a discretionary consideration when available is to concede that they are not scientific. It is also an admission, beyond doubt, that risk assessments will never be capable of replacing money bail.

For the last three years, my organization has cautioned the Conference of Chief Justices against continuing to use the power of the judiciary to advocate for these policies in legislative and policymaking forums. Recently, we sent another letter urging the group to revisit its official position immediately and abandon it, since risk assessments continue to be actively promoted by chief justices and advocates in an estimated 20-plus states.

We received just one response to these concerns: an email from Chief Justice Maureen O'Connor of Ohio pointing out a typographical error, and nothing more. On the merits, the Conference of Chief Justices has not defended its policies, nor answered any charge that its position is misplaced or misinformed. It has become clear that, as an entity, it is entrenched on this issue.

While the Conference of Chief Justices has continued to wager that "evidence-based risk assessment" will win the day, this no longer appears to be an accurate forecast of the ultimate outcome. For the sake of truth and justice, advocates must renew pressure on the organization, and the judiciary in general, to remove themselves from the risk assessment debate entirely, be it for pretrial or any other aspect of the criminal justice system.

Risk assessment should be treated as scientific evidence, and opposing parties must accordingly have the ability to challenge its results. Most would have a difficult time believing that an instrument based on group data, with a predictive accuracy rate ranging from 66% to 72%, and which does not purport to predict an individual's behavior directly, would meet the standards for scientific evidence.
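For a rough sense of what such accuracy figures mean in practice, consider a back-of-the-envelope calculation. The cohort size below is an assumption chosen for round numbers, not data from any validation study.

```python
# Back-of-the-envelope: what a 66%-72% accuracy rate implies.
# The cohort size is an assumption for illustration only.
defendants = 1000
for accuracy in (0.66, 0.72):
    mislabeled = round(defendants * (1 - accuracy))
    print(f"At {accuracy:.0%} accuracy, roughly {mislabeled} of "
          f"{defendants} defendants receive the wrong risk label.")
```

On those figures, somewhere between one in four and one in three defendants would be mislabeled, a rate that would be hard to defend under any evidentiary standard.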

The Conference of Chief Justices has adopted a policy that has led to the wholesale use of these risk assessment tools, which is likely increasing incarceration. But despite the credibility historically associated with the group, its cachet no longer appears to be convincing advocates of criminal justice reform.

Jeffrey J. Clayton is the executive director of the American Bail Coalition.

"Perspectives" is a regular feature written by guest authors on access to justice issues. To pitch article ideas, email expertanalysis@law360.com.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
