Expect The Patchwork Of AI Regulation To Grow

By Emily Loeb, Caroline Cease and Benjamin Hand · April 26, 2023

With the public release of OpenAI's ChatGPT in November 2022 and Google's Bard in March 2023, governments across the world have a renewed focus on regulating the use of artificial intelligence.

Within the U.S., a patchwork of state regulation concerning AI has emerged — a trend likely to continue in 2023, given that meaningful federal AI legislation is unlikely to emerge from the 118th Congress.

In 2022 alone, bills or resolutions concerning AI were introduced in 17 states[1] and Washington, D.C., and were enacted in states including Vermont, Colorado, Illinois and Washington.

State legislatures in Colorado[2] and Illinois[3] created bodies to study AI.

The Illinois Legislature[4] amended the state's Artificial Intelligence Video Interview Act to add requirements that employers collect demographic data when AI analysis is used to make hiring decisions through video interviews.

Washington[5] provided funding for the state's chief information officer to study how automated decision-making systems could be reviewed for fairness and equity.

Vermont[6] established a Division of Artificial Intelligence within the Agency of Digital Services to implement the recommendations of the Artificial Intelligence Task Force, and required the agency to survey all automated decision-making systems being developed, used or procured by the state. Presumably, substantive recommendations will follow and, in some form, be implemented.

Proposed state AI regulation continues to percolate in 2023 in response to emerging use cases for AI and headlines documenting troubling incidents with nascent AI products or services.

One approach taken by state legislators has simply been to push for additional transparency. For example, Pennsylvania's H.B. 49,[7] introduced in March 2023, would require any business operating an AI system in Pennsylvania to register with the Pennsylvania Department of State.

The bill would require the business to provide information such as a contact person, the type of code used to operate the AI system and the intended purpose of the software.
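
For illustration only, the registration the bill contemplates might be modeled as a simple record. The field names below are hypothetical assumptions, not language drawn from H.B. 49:

```python
from dataclasses import dataclass

# Hypothetical sketch of an H.B. 49-style registration record.
# Field names are illustrative assumptions, not the bill's text.
@dataclass
class AISystemRegistration:
    business_name: str
    contact_person: str   # the business's designated point of contact
    code_type: str        # the type of code operating the AI system
    intended_use: str     # the stated intent of the software

registration = AISystemRegistration(
    business_name="Example Co.",
    contact_person="Jane Doe",
    code_type="large language model",
    intended_use="customer service chatbot",
)
```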

Other state legislation is focused on the potential for civil rights abuses or discrimination caused by AI systems.

The California Legislature, for example, is considering A.B. 331,[8] which would impose several new requirements on developers and deployers of automated decision-making tools in California, including local government agencies.

Among other things, if a deployer uses an automated decision tool to make a consequential decision affecting an individual, the bill would require the deployer to alert the individual that the tool is being used and provide a statement of the tool's purpose.

If the decision is made solely by the tool, the deployer must, where technically feasible, allow the individual to opt for an alternative selection process.

In addition, the bill would prohibit a deployer from using an automated decision tool in a manner that contributes to algorithmic discrimination, including with respect to race, sex, religion, age, genetic information, reproductive health, and other factors or characteristics.

The bill contains provisions for both administrative enforcement actions and a private right of action. And the bill is not targeting merely hypothetical uses of AI; a February 2022 survey[9] from the Society for Human Resource Management found that 79% of employers use AI or automation for recruitment and hiring purposes.
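
To make the bill's notice and opt-out mechanics concrete, here is a minimal sketch of how a deployer might wire them into a decision pipeline. Every name and rule below is a hypothetical assumption for illustration, not language from A.B. 331:

```python
from dataclasses import dataclass

# Illustrative sketch only; names and logic are assumptions, not A.B. 331's text.
@dataclass
class DecisionTool:
    purpose: str
    sole_decision_maker: bool  # is the decision made solely by the tool?

    def decide(self, applicant: dict) -> str:
        # Stand-in for the tool's actual scoring logic.
        return "approved" if applicant.get("score", 0) >= 50 else "denied"

def make_consequential_decision(tool: DecisionTool, applicant: dict,
                                wants_alternative: bool,
                                alternative_feasible: bool) -> str:
    # Required notice: tell the individual the tool is in use and state its purpose.
    print(f"Notice: an automated decision tool is in use. Purpose: {tool.purpose}")

    # If the decision is made solely by the tool and an alternative selection
    # process is technically feasible, honor a request to opt out.
    if tool.sole_decision_maker and alternative_feasible and wants_alternative:
        return "routed to alternative selection process"

    return tool.decide(applicant)

tool = DecisionTool(purpose="screen rental applications", sole_decision_maker=True)
print(make_consequential_decision(tool, {"score": 42},
                                  wants_alternative=True, alternative_feasible=True))
```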

Companies should expect state legislatures across the country to consider not only general AI legislation, but also industry-specific regulation addressing the use of AI in health care, hiring, lending and other highly regulated spaces where the risk of algorithmic discrimination or bias is pronounced.

As a patchwork of state-by-state regulation emerges, there are two main risks. First, a growing body of state regulation may slow innovation — for better or worse — as the technology continues to develop. Second, state-by-state regulation may prove unworkable for companies or products that, among other limitations, are unable to geofence users by jurisdiction.
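
To see why geofencing matters, consider a minimal sketch of jurisdiction-based gating. The rule table below is an invented placeholder to show the mechanics, not a summary of real statutory requirements:

```python
# Minimal sketch of jurisdiction-based compliance gating ("geofencing").
# The rules below are invented placeholders, not real statutory requirements.
STATE_RULES = {
    "CA": {"notice_required": True, "opt_out_required": True},
    "PA": {"registration_required": True},
    "IL": {"demographic_reporting": True},
}

def obligations_for(user_state: str) -> dict:
    # A product that cannot reliably determine user_state cannot apply the
    # right subset of rules; that is the core problem with a patchwork.
    return STATE_RULES.get(user_state, {})

print(obligations_for("CA"))  # {'notice_required': True, 'opt_out_required': True}
```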

One potential answer would be meaningful and carefully considered federal regulation. However, progress on that front has been elusive. To date, the federal government has only begun to produce guidelines and frameworks to signal best practices in the field.

For example, the White House established an AI research office and recently released its Blueprint for an AI Bill of Rights.[10] The National Institute of Standards and Technology, housed within the U.S. Department of Commerce, has also released an AI Risk Management Framework.[11]

However, both are voluntary. More recently, the Commerce Department's National Telecommunications and Information Administration requested public comments on developing an AI accountability policy.[12]

On the legislative front, Sen. Ron Wyden, D-Ore., introduced the Algorithmic Accountability Act[13] in the 117th Congress, which would have required companies utilizing AI to conduct impact assessments of the automated systems they use and sell, in accordance with regulations promulgated by the Federal Trade Commission.

Wyden has indicated[14] that he plans to again introduce the Algorithmic Accountability Act in 2023, but there is no indication that it will be enacted this session.

Rep. Ted Lieu, D-Calif., has also called upon[15] Congress to regulate AI. Lieu introduced a congressional resolution[16] drafted entirely by ChatGPT — reportedly the first of its kind — and is working on legislation to establish a federal agency to oversee AI.

Sen. Chris Murphy, D-Conn., has also tweeted[17] about the risks of AI and our lack of preparation for the changes that this new technology will bring.

Given concerns that generative AI could pose a significant risk to everything from children's safety to election integrity, new legislation is likely to be introduced, even if the likelihood of any such legislation becoming law currently seems remote in today's divided Congress.

In the absence of new federal regulation, regulators are likely to use the existing tools they have to combat perceived misuse or abuses of AI.

The FTC, for example, recently put out a public warning[18] to companies that false or deceptive claims regarding AI capabilities could result in enforcement actions. FTC Commissioner Alvaro Bedoya likewise commented[19] at a recent public event that companies that make deceptive claims about their AI products, or injure consumers through unfair practices, can be held accountable under Section 5 of the FTC Act.

The FTC is unlikely to be the only regulator looking at its enforcement toolkit with an eye to emerging issues in AI. Jonathan Kanter, who leads the U.S. Department of Justice's Antitrust Division, has stated publicly[20] that the DOJ is already paying close attention to companies and products in the AI space.

As with several other regulatory issues in the technology space, the EU is moving more quickly than the U.S. government to regulate AI.

The European Commission introduced a regulatory proposal[21] in April 2021, with the hope that it would enter into force in 2023. A transitional period would then follow, during which standards would be further developed and enforcement could begin sometime in 2024.

However, recent reports[22] suggest that the dynamic pace of innovation in AI — and specifically the capabilities of OpenAI's ChatGPT — has sent EU lawmakers back to the drawing board. In the absence of an EU-wide approach, Italy recently banned[23] ChatGPT on account of privacy concerns raised by the product.

Regulation and enforcement actions in the AI space are likely to increase in the coming year as legislators and regulators struggle to understand this new technology and how it fits within existing regulatory schemes.

Companies will need to pay careful attention to where their product is available to consumers and what regulations are applicable in each jurisdiction.

Innovation may quickly outpace regulation, but lawmakers and regulators are unlikely to sit idly by as this quickly developing technology transforms how we work, create, learn and communicate.



Emily Loeb and Caroline Cease are partners, and Benjamin Hand is an associate, at Jenner & Block LLP.

Jenner & Block partner Adam Unikowsky contributed to this article.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] https://www.ncsl.org/technology-and-communication/legislation-related-to-artificial-intelligence.

[2] https://custom.statenet.com/public/resources.cgi?id=ID:bill:CO2022000S113&ciq=ncsl&client_md=c09fcddf2a99232aa515279f4b78cf84&mode=current_text.

[3] https://custom.statenet.com/public/resources.cgi?id=ID:bill:IL2021000H645&ciq=ncsl&client_md=d49cc5fd8ce6386c082c848a14eeb265&mode=current_text.

[4] https://custom.statenet.com/public/resources.cgi?id=ID:bill:IL2021000H53&ciq=ncsl&client_md=cf812e17e7ae023eba694938c9628eea&mode=current_text.

[5] https://custom.statenet.com/public/resources.cgi?id=ID:bill:WA2021000S5693&ciq=ncsl&client_md=06eee3bf1a7c3a1f44f82a57f3540239&mode=current_text.

[6] https://custom.statenet.com/public/resources.cgi?id=ID:bill:VT2021000H410&ciq=ncsl&client_md=d9744d8eb4dbb213bebb222c496a20a6&mode=current_text.

[7] https://www.legis.state.pa.us/CFDOCS/Legis/PN/Public/btCheck.cfm?txtType=HTM&sessYr=2023&sessInd=0&billBody=H&billTyp=B&billNbr=0049&pn=0038.

[8] https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB331.

[9] https://advocacy.shrm.org/SHRM-2022-Automation-AI-Research.pdf?_ga=2.112869508.1029738808.1666019592-61357574.1655121608.

[10] https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

[11] https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[12] https://ntia.gov/sites/default/files/publications/ntia_rfc_on_ai_accountability_final_0.pdf.

[13] https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-algorithmic-accountability-act-of-2022-to-require-new-transparency-and-accountability-for-automated-decision-systems.

[14] https://www.wyden.senate.gov/news/press-releases/wyden-calls-for-accountability-transparency-for-ai-in-remarks-at-georgetown-law.

[15] https://www.nytimes.com/2023/01/23/opinion/ted-lieu-ai-chatgpt-congress.html.

[16] https://lieu.house.gov/media-center/press-releases/rep-lieu-introduces-first-federal-legislation-ever-written-artificial.

[17] https://twitter.com/ChrisMurphyCT/status/1640186536825061376?s=20.

[18] https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.

[19] https://www.law360.com/articles/1594171/ai-may-be-new-but-it-s-not-unregulated-ftc-s-bedoya-says.

[20] https://www.axios.com/2023/03/13/doj-kanter-ai-artificial-intelligence-antitrust.

[21] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

[22] https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/.

[23] https://www.reuters.com/technology/italy-data-protection-agency-opens-chatgpt-probe-privacy-concerns-2023-03-31/.

For a reprint of this article, please contact reprints@law360.com.