Analysis


'Everyone Ignores' New York City's Workplace AI Law

By Amanda Ottaway · March 1, 2024, 8:09 PM EST

New York City became the first U.S. jurisdiction to attempt to regulate the use of artificial intelligence in employment decisions last year, when it started enforcing a statute that requires employers to audit the AI tools they use for bias. So far, the law has proved to be a toothless flop, management-side experts say.


Experts predict it won't be long before newer, more comprehensive legislation supersedes New York City's AI law. (iStock)

The embattled Local Law 144 took effect in January 2023 and saw enforcement begin that July after multiple delays. It requires employers that use automated employment decision tools to audit them for potential discrimination, make public the results of those audits, and alert workers and job applicants that such tools are being used.

Employers' ability to interpret the law's scope for themselves, a dearth of enforcement mechanisms, and the lack of a so-called safe harbor for employers that do comply with the bias audit posting requirement have combined to make Local Law 144 an ineffective piece of legislation, experts say.

One of the most obvious flaws in the New York City law, said Reid Skibell, a partner at Glenn Agre Bergman & Fuentes, is that it leaves room for employers to essentially decide to disregard it.

"You can sort of unilaterally opt out of it by deciding that the law doesn't apply to you," he said. "I think it's evident having a law that everyone ignores means we don't have a real debate about what's the right way to do this."

A Cornell University study published in January found that most Big Apple employers have opted out of complying. Of the 391 employers the study surveyed, just 18 posted the required audit reports to their websites. Further, a mere 13 followed the directive to publish notices alerting applicants that an automated employment decision tool was being used to evaluate them.

And the New York City Department of Consumer and Worker Protection, which oversees enforcement but does not have the power to launch investigations on its own, hasn't had much to do: According to a spokesperson, since enforcement began July 5, the department has not received a single complaint.

Here, Law360 talks to management-side experts about some of the biggest flaws they see in the Big Apple's law, and a wave of pending legislation they're watching that may overtake it.

What Went Wrong With Local Law 144

Multiple experts pointed to the Cornell study as proof that New York City's law is fatally flawed.

"The Cornell study is a pretty big indictment of the law," Skibell said. "I think it's pretty clear that it's not doing what it was intended to do — either in a good or bad sense. It's sort of a nullity."

Management-side employment lawyers said the law allows employers to opt out if a human remains in the decision-making chain in which the tool is used. Bradford Newman, a Baker McKenzie partner, said that's generally the case anyway.

Newman called the law an "overly hyped-up piece of regulation that isn't well drafted" and "doesn't do what its intended purpose states it's supposed to do."

Putting the onus on individual job seekers and employees to enforce the law by lodging complaints with a city agency is likely part of the problem. Job applicants whose resumes may be evaluated by an AI tool, for example, would have no way of knowing that was the case if an employer didn't comply with the law by telling applicants that it used such tools.

While whistleblower employees who believe they might have faced AI bias could also conceivably complain, none have so far.

"The lack of any complaint whatsoever — given the amount of public comment and discussion around the law itself — suggests that it's just not working," Skibell said.

While AI fairness advocates urge transparency, or opening an AI tool's "black box," as a way of making its use more fair across the board, Local Law 144's particular transparency requirement also isn't working, experts said. It requires employers that want to use AI tools to have an independent bias audit conducted and post the results of that audit on their websites.

But if those public results do show proof of disparate impact, they could open up employers to more liability than they bargained for, experts pointed out, raising concerns about the plaintiffs bar using the data in discrimination lawsuits.

Newman said he has clients who have decided altogether that it's not worth it to use AI tools in the Big Apple.

"So that's anti-competitive," he said. "Now we're discouraging the use of tools that — while they can inadvertently discriminate and should not — probably have legitimate business use cases which are being passed over."

The New York City law also protects only against race- and gender-based discrimination, experts noted. That leaves a yawning gap for other protected groups particularly at risk of AI bias, such as older workers and those with disabilities.

What Other Jurisdictions Are Trying

Littler Mendelson PC shareholder Bradford Kelley called the Big Apple's law a "bad template" for other jurisdictions. While one could point to the New York City law as declaring that AI bias is taken seriously, he said, "that's not the purpose of a law. That's the purpose of a statement." 

Experts predicted that it won't be long before newer, more comprehensive legislation supersedes the city's law, but they disagreed on whether those new laws would or should come from Congress or from the state and local level.

"There's a wave carrying us forward towards enforcement here that I think will supplant this law," Skibell said.

"And while I think [New York City] was noble to go first, I just think that someone else is going to get there next," Skibell said.

Rachel See, senior counsel at Seyfarth Shaw LLP, said she's watching what she called a "very busy legislative season" at the state level.

"Certainly seeing the approach that state legislators have taken or are proposing to take, that's going in different directions than Local Law 144," she said. See pointed to one proposal in the New York State Legislature that, if passed, would effectively swallow the city law.

New York Senate bill S. 5641A, introduced by state Sen. Leroy Comrie, D-Queens, would address bias in AI employment decision tools and specifically provides for enforcement, giving the state attorney general the power to launch an investigation into either the developer of a tool or an employer using it. The bill is currently in committee in the state Senate.

Meanwhile, in Connecticut, legislators have introduced a sweeping bill, S.B. 2, which would broadly address AI issues across the board, from elections to AI-created images.

And in California, A.B. 2390 would bar employers from using AI tools that lead to discrimination, require that workers be alerted when they're being subjected to an AI tool, and provide a private cause of action for individuals to sue over violations.

"I think that many of these legislative proposals are focusing on transparency and focusing on disclosures from both developers and deployers" of AI technology, See said. "And that that includes disclosures to the public."

Baker McKenzie's Newman is an advocate for federal legislation on AI, and testified in October 2023 before the U.S. Senate Committee on Health, Education, Labor and Pensions in a hearing called "AI and the Future of Work."

"Until Congress acts, you're going to get all of these states and municipalities jockeying for position to regulate — firstly in employment, because the voters can understand that," he told Law360.

He thinks the Biden administration's executive order on AI last October might have lit a fire under Congress.

"Congress, I believe rightfully, views legislating AI as their purview," he said. "And I think they will. I am optimistic that Congress will get something done within the next 24 months," likely related to employment, Newman added.

Newman said he doesn't believe that existing laws, such as Title VII of the Civil Rights Act of 1964, are enough to stay on top of AI.

Littler's Kelley disagreed, saying he's not convinced any more laws are needed.

"Part of the problem with artificial intelligence, in my view, is that I haven't seen a compelling argument why the existing legal framework isn't sufficient. If you're using AI in a discriminatory way, then the existing law accounts for that," he said.

"That's the problem with the New York law," Kelley said. "We can't have laws that are just useless."

--Editing by Abbie Sarfo.

Correction: A previous version of this article referred to California's pending AI bill by an incorrect number. The error has been corrected. 

For a reprint of this article, please contact reprints@law360.com.