Loomis Look-Back Previews AI Sentencing Fights To Come

By Natalie Rodriguez | December 9, 2018, 8:02 PM EST

Two high-profile litigators — one bombastic, the other reserved — walked into a New York University School of Law conference hall to take a stab at something the U.S. Supreme Court refused to do: air arguments on whether the use of risk assessment technology in criminal sentencing can stack the deck against defendants.

Mark Lanier and Steve Susman, name partners at The Lanier Law Firm and Susman Godfrey LLP, respectively, participated in the Nov. 30 mock trial, which was loosely based on Loomis v. Wisconsin and staged as part of the Center on Civil Justice's conference on artificial intelligence.

Loomis v. Wisconsin was the first, but likely not the last, major case questioning a court's use of risk assessment software to decide whether to imprison someone. Such software has become a lightning rod among criminal justice advocates, with some pushing for AI programs to replace the cash bail system and others arguing the algorithms can harbor bias that does more harm than good.

"There is no doubt these issues will play out in courtrooms everywhere," said Arthur Miller, an NYU law school professor who served as the mock trial's judge.

Lanier, who used full-body motions to emphasize his points and at times breathlessly began speaking before he had even reached the lectern, argued for appellant Eric Loomis.

The real-life Loomis challenged a Wisconsin court's use of a computer program in sentencing him to six years in prison for driving a car that had been used in a shooting. The state high court's ruling against him, which was closely watched by civil justice advocates, was left standing when the U.S. Supreme Court declined to take the case in 2017.

In the mock trial, Susman, who moved and spoke in measured fashion, represented Northpointe Inc., which developed the software in question, known as COMPAS.

Judging by the room's laughter and bursts of commentary, Lanier had the conference audience, which served as the jury, riveted from early on. He spoke plainly of the possible perils that risk assessment tools like COMPAS pose, drawing a connection to one of the most popular forms of artificial intelligence, Siri.

"If I ask the AI Siri to take a message for my wife, she can't even get that right, and I don't want her determining my sentence, as well as Northpointe, because they don't do it right," Lanier said during his opening argument.

With witnesses playing the roles of scientists for each side, Lanier's questions hit on a key issue for critics of risk assessment technology: the company's proprietary stance on the software makes it impossible to know whether bias, particularly racial bias, has seeped into the program. In real life, Loomis had requested access to the COMPAS algorithm and been denied.

During closing arguments, Lanier ratcheted up the bombast to hammer home the point that Loomis — who had argued COMPAS cost him two extra years in prison — was not seeking damages beyond $1 but rather a statement verdict.

"Please make a statement that as society rushes headlong down the hill of computers and technology and things that, yes, could replace humans right and left. Don't let it replace humans at the core of our democracy, which is justice," Lanier said.

It was a compelling argument made to a room full of lawyers, law students and others interested in civil justice matters.

Lanier did not, however, win his case.

At best, it was a hung jury, with about half of the audience siding with Susman's argument by a show of hands.

Susman argued that Northpointe did nothing wrong in providing a risk assessment tool that many states require courts to use.

While questioning the witnesses, Susman methodically drove home the point that Northpointe does not promote COMPAS for use in sentencing and acknowledges in its user materials that judges should rely on their own reasoning when it conflicts with the software's assessment.

"Is there anything in the user guide that says it should be used for sentencing?" Susman asked his witness.

"No, it says the opposite," a Northpointe "official" said.

If there is user error, it is on the part of the judges — not the manufacturer that created the algorithm, Susman argued.

"Like a hunting knife ... used to stab another person, is the manufacturer liable? Of course not," Susman said.

The use of AI software to determine whether someone should be imprisoned was spurred in large part by criticism of the cash bail system, which many argue harms poorer people more than their wealthier counterparts. In November, Dallas became one of the latest municipalities to eschew traditional cash bail in favor of an AI program that assesses a person's likelihood of being a danger to society or a flight risk.

But over the last two to three years, the spotlight on how these tools are used, or not used, has grown.

U.S. Immigration and Customs Enforcement has recently come under scrutiny for allegedly changing its risk assessment software so that it now recommends only detention. And earlier this year, San Francisco and its juvenile probation department were sued for detaining a minor for several days over a nonviolent offense, even though the risk assessment software had suggested he be released.

The conversation after the mock trial highlighted the difficulty of trying cases that involve such technology.

Susman noted that in a civil case, Northpointe could be compelled to share its proprietary algorithm under a protective order, much as companies are required to do in patent cases. But Lanier countered that such a route is likely too costly for a convicted or accused person challenging the use of risk assessment software.

But issues like the cost, and the defendant's guilt or innocence, shouldn't prevent a debate on whether AI unfairly skews sentences, Lanier suggested during the trial.

"He is properly a convict, and I have the very difficult task of trying to represent someone where the tendency of people is to say, 'Well, it doesn't matter. He's a crook. So he's serving longer than he's supposed to? It will teach him a lesson,'" Lanier said during opening arguments.

"That cannot be our mentality," he added.

--Editing by Brian Baresch.

Have a story idea for Access to Justice? Reach us at accesstojustice@law360.com.
