ACLU Atty On How To Protect Civil Liberties In The AI Era

By Hannah Albarazi | February 2, 2024, 8:07 PM EST

Because artificial intelligence and algorithmic systems often operate in the shadows, there's a pressing need for legislation, regulation and enforcement to ensure the technology doesn't undercut civil liberties through discrimination in housing, education or employment, according to Cody Venzke, senior policy counsel for the American Civil Liberties Union.

Cody Venzke

Working on issues of surveillance, privacy and technology, Venzke applies his training as a privacy lawyer to ensure that people, from students to job seekers, don't have their civil rights and civil liberties infringed upon by algorithmic systems and artificial intelligence tools in either the public or private sector.

These harms aren't just hypothetical, Venzke stressed: already marginalized groups are seeing discrimination by algorithmic systems that can determine whether they will have access to certain housing or job opportunities.

Regulating how people's data can be used is also a high priority right now, Venzke said.

"When there are state-level attacks on vulnerable groups of people, it means that algorithmic systems and the use of our data can make them even more vulnerable," he said. "We've seen this, for example, in attacks on reproductive rights, where the lack of comprehensive privacy legislation and certain loopholes in existing privacy protections have allowed law enforcement to pursue a digital trail of data." 

Venzke spoke with Law360 about what safeguards he thinks are most needed to protect people — including marginalized groups who are already seeing harm — from discriminatory algorithmic systems and AI tools. This interview has been edited for length and clarity.

Are there AI-oriented government policies that the ACLU is concerned about?

President [Joe] Biden's executive order [Safe, Secure, and Trustworthy Artificial Intelligence] enshrined a lot of the principles that we have been advocating for [regarding] uses of artificial intelligence, including auditing and identifying potentially discriminatory uses of AI and then mitigating those discriminatory harms. Seeing civil rights centered in the administration's AI policy is a major win for us.

One of the things that we are looking forward to over the course of the next year or so is ensuring that those policy principles are enshrined in agencies' actual practices. We think that is a good building block to begin working from. There's additional work to be done, including addressing AI uses in the private sector.

What are you looking out for in the private sector's usage of AI?

I think that what we would be looking for is to see many of the principles that were in the artificial intelligence executive order and in the administration's blueprint for an AI Bill of Rights extended to the private sector. That means ensuring that algorithmic systems aren't resulting in discriminatory harm, mitigating those discriminatory harms and providing people with really meaningful recourse if they've been harmed by artificial intelligence. For example, this means that you would know that your job application is being processed and assessed by artificial intelligence, get notice of that assessment and the decision that's made, and get an opportunity to either challenge that decision or correct any incorrect data it relied on.

Are there regulations that you or the ACLU are pushing for in the year ahead?

We've been championing more detailed guidance from the Equal Employment Opportunity Commission to help ensure that both employers and the companies that develop and sell hiring tools understand that civil rights law applies to them even when hiring decisions are made by artificial intelligence.

We released a report earlier this year on high-tech surveillance in the education space, including monitoring kids' online activity, the use of facial recognition in schools and similar surveillance technology. We are looking forward to action from the U.S. Department of Education, including guidance for schools on how civil rights law intersects with artificial intelligence and how the [Family Educational Rights and Privacy Act] applies to artificial intelligence.

One of the places where the executive order, we think, fell short is in national security and adjacent fields such as domestic law enforcement and immigration. National security and immigration uses of artificial intelligence are some of the places where AI can most affect individuals' rights and liberties. Those spaces were largely, though not entirely, left untouched by the executive order; instead, they're subject to a future, yet-to-be-drafted memorandum on AI in the national security space.

What kind of challenges are people coming to the ACLU with regarding AI policies?

One of the biggest ones we're seeing on the litigation side is the use of AI in law enforcement, particularly the use of facial recognition technology, which has resulted in wrongful arrests, disproportionately of Black people, when the facial recognition technology misidentified them as leads in investigations and that misidentification was then used to make an arrest.

The executive order requires law enforcement agencies to really assess the way that algorithmic systems are used throughout the criminal legal system. That includes not just the use of facial recognition technology for identifying leads in investigations, but other algorithmic systems that make decisions about people. For example, some algorithmic systems are used to determine the terms of parole, assessing which individuals might pose a risk to the community. Because of the significant consequences these systems can have for individuals, we would love to see increased auditing for potential discriminatory impacts and mitigation of any impacts that are found.

How might something like that be mitigated?

Well, one of the ways is ceasing to use the system if you can't address the discriminatory impacts it's having on people. Beyond that, harms from algorithmic technology can be mitigated by examining the data that's used to train the system. Often, what we see is that the data used to train an algorithmic system, or fed into it to make decisions about individuals, reflects existing societal biases against people of color, people with disabilities and other vulnerable groups. In addition, certain procedural safeguards, like providing notice to the individual, providing an opportunity to challenge the algorithmic system and providing the opportunity to correct information, are ways that you can help mitigate those discriminatory uses.

One final way, I think, that's really essential is that as entities consider whether to deploy an algorithmic system, or assess algorithmic systems they've already deployed, they consult with a wide array of stakeholders, especially those most likely to be impacted by the system. They might be able to provide insights about the system's use and its potential impacts that might otherwise be missed.
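
To make the auditing Venzke describes concrete, here is a minimal sketch, not anything the ACLU prescribes, of one common check on a screening tool's outcomes: comparing per-group selection rates against the EEOC's four-fifths rule of thumb, under which a group selected at less than 80% of the highest group's rate is flagged for adverse impact. The data and group labels are hypothetical.

    from collections import defaultdict

    def selection_rates(records):
        """Per-group selection rates from (group, selected) records."""
        counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
        for group, selected in records:
            counts[group][0] += int(selected)
            counts[group][1] += 1
        return {g: sel / total for g, (sel, total) in counts.items()}

    def adverse_impact(records, threshold=0.8):
        """Flag groups whose rate falls below `threshold` times the top rate."""
        rates = selection_rates(records)
        top = max(rates.values())
        return {g: r / top for g, r in rates.items() if r / top < threshold}

    # Hypothetical outcomes: (demographic group, advanced by the tool?)
    outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
             + [("B", True)] * 20 + [("B", False)] * 80
    print(adverse_impact(outcomes))  # {'B': 0.5}: group B is flagged

A ratio below the threshold doesn't prove illegal discrimination on its own, but it is the kind of signal that would trigger the mitigation steps described above.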

What do you think needs to be cleared up when it comes to crafting AI policies?

One of the key things that I think policymakers need to ensure they are grappling with as they think about AI is addressing algorithmic systems and AI systems that are already in place and already affecting people's lives. Generative AI, like ChatGPT, is grabbing lots of headlines, and that means a lot of the proposals we are seeing are focused on things like generative AI and deepfakes. And although those are probably worthy of legislative attention, that leaves lots of algorithmic systems that are making decisions in education, in governmental benefits and in hiring unaddressed. For example, 99% of Fortune 500 companies are using algorithmic systems to make hiring decisions about people, where an artificial intelligence system will score resumes and advance the highest-scoring resumes to the next round. Studies have shown that these algorithmic hiring tools can have discriminatory effects, favoring applicants who have already been favored by existing biases in society.
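
The screening pipeline Venzke describes amounts to a scorer plus a top-k cutoff. The toy Python sketch below, with invented feature names and weights, illustrates how a scorer whose weights were fit to historically biased hiring outcomes can reproduce that bias at the cutoff; it is an illustration, not any vendor's actual system.

    def score(features, weights):
        """Linear resume score over named features."""
        return sum(weights.get(name, 0.0) * value for name, value in features.items())

    def advance_top_k(resumes, weights, k):
        """Advance the k highest-scoring resumes to the next round."""
        ranked = sorted(resumes, key=lambda r: score(r["features"], weights), reverse=True)
        return ranked[:k]

    # If past hiring favored one school, a model fit to that history can
    # weight the school credential more heavily than experience.
    weights = {"school_x": 5.0, "years_experience": 1.0}
    resumes = [
        {"name": "p1", "features": {"school_x": 1, "years_experience": 2}},  # score 7.0
        {"name": "p2", "features": {"school_x": 0, "years_experience": 4}},  # score 4.0
        {"name": "p3", "features": {"school_x": 1, "years_experience": 1}},  # score 6.0
    ]
    # The most experienced applicant, p2, misses the cut:
    print([r["name"] for r in advance_top_k(resumes, weights, k=2)])  # ['p1', 'p3']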

Is there any AI regulation coming in 2024 that you're expecting?

The U.S. Department of Health and Human Services is required to develop a strategic plan on the use of algorithmic systems in governmental benefits. I think that's going to be a critical step. The ACLU has litigated against the use of algorithmic systems in various Medicaid programs, which are administered by state agencies, and in some of those circumstances what we've seen is that state agency employees developed the algorithm that determines people's benefits with almost no vetting, no statistical grounding and no notice or recourse for affected individuals.

One of the major things that underlies the use of algorithmic systems is our data, and we're seeing lots of agencies respond favorably to regulating, as much as they can, the uses of our data. The Consumer Financial Protection Bureau is preparing a proposed rule on regulating data brokers under the Fair Credit Reporting Act, the Federal Trade Commission has had long-simmering rulemaking on commercial surveillance, and the Department of Education has long indicated that they are working on an update to rules under the Family Educational Rights and Privacy Act. So I think all of those would be really meaningful protections for people to control their data amid the increased prevalence of AI in those sectors.

Why is it important to you and to the ACLU to get AI policy right?

Frankly, there are many critical areas of our lives where we have long and rightfully been protected by civil rights laws and by procedural protections to ensure that entities aren't making sort of arbitrary decisions about our access to housing, to education, to employment and other critical opportunities. The advent of AI should not change that at all. Unfortunately, AI is often functioning in the shadows. We might be unaware of its use. We might be unaware of how it came to those decisions about us. Legislation, regulation and enforcement are critical for ensuring that AI doesn't undercut those long-standing protections.

--Editing by Alanna Weissman.
