EU Policymakers Clear Way For Passing Of Landmark AI Act

(December 9, 2023, 4:24 AM GMT) -- European Union policymakers on Friday reached an agreement on rules that would put guardrails on businesses' use of artificial intelligence, removing the final major barrier to the bloc enacting the world's first comprehensive law to tackle the potential risks posed by AI systems.

The deal follows marathon negotiations among the European Commission, Parliament and member states to land on a common text for the landmark EU AI Act. The policymakers met for more than 20 hours on Wednesday and continued their discussions Friday, with Thierry Breton, the European commissioner who helped negotiate the deal, announcing just before midnight in Brussels that the trilogue had succeeded, clearing the way for the EU to become the first to "set clear rules for the use of AI."

"The #AIAct is much more than a rulebook — it's a launchpad for EU startups and researchers to lead the global AI race," Breton wrote in a post on X, formerly Twitter

While the law still needs to go through some final procedural steps for approval, Friday's political agreement marked the last major hurdle to enacting the law, which the European Commission first put forth in April 2021. 

The AI Act takes a risk-based, tiered approach to regulation, completely banning certain applications of the technology that pose an "unacceptable risk" to EU residents' fundamental rights, including some uses of biometric systems, and placing specific requirements on the use of systems identified as "high risk."

The AI Act will also introduce dedicated rules with additional binding obligations for general-purpose AI models, the complex systems that can handle many different tasks and were a potential sticking point in this week's negotiations, as well as create a new European AI Office within the European Commission to supervise the implementation and enforcement of these new rules.

Along with national market surveillance authorities, which will enforce the rules at the national level, the new AI Office "will be the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point," according to the European Commission.

Companies that run afoul of these rules will face hefty fines. While the policymakers didn't specify on Friday what amount they had agreed on, the General Data Protection Regulation, another landmark EU law, allows regulators to impose fines of up to 20 million euros or 4% of a company's annual global revenue, whichever is higher.

The European Commission, in a press release issued early Saturday morning in Brussels, welcomed the political agreement, with the commission's President Ursula von der Leyen calling it "a historic moment."

"The AI Act transposes European values to a new era," von der Leyen said. "By focusing regulation on identifiable risks, today's agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI." 

According to the European Commission, the "vast majority" of AI systems, including AI-powered spam filters or recommendation systems, will fall into the minimal-risk category. These applications will be given a "free-pass" from regulation, as they present "only a minimal or no risk for citizens' rights or safety," although the companies behind them can still voluntarily commit to codes of conduct for these systems, according to the commission.

AI systems that are identified as high-risk — which would include biometric identification, categorization and emotion recognition systems; medical devices; technology used to assess customers' lending risk or insurance needs; automated methods used for recruiting people; and certain systems used by law enforcement and border control — would be required to comply with strict requirements. 

These mandates include putting in place risk-mitigation systems, undertaking detailed documentation, providing users with clear information about how the technology works, and instituting a "high level of robustness, accuracy and cybersecurity," according to the European Commission. Additionally, the law will require companies to disclose when users are interacting with a machine, such as a chatbot, to clearly label AI-generated content such as deepfakes, and to inform consumers when biometric categorization or emotion recognition systems are being used.

The law is also poised to establish a third category of "unacceptable risk" systems that present "a clear threat to the fundamental rights of people" and would be banned. This technology includes AI systems or applications that manipulate human behavior to circumvent users' free will, such as voice assistant-enabled toys that encourage dangerous behavior; certain applications of predictive policing; and biometric technologies such as emotion recognition systems used in the workplace and some systems for categorizing or identifying people in publicly accessible spaces.

The political agreement reached Friday will now head to the European Parliament and the Council of the European Union, which represents member state governments, for formal approval, according to the commission.

Once the AI Act is adopted, there will be a transitional period before it becomes applicable, the commission noted. To bridge this time, the commission will launch an AI Pact that "will convene developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines." The EU will also continue to work in global forums such as the G7 and United Nations to promote rules on trustworthy and responsible AI use at an international level, the commission added.

Ashley Casovan, managing director of the International Association of Privacy Professionals' Artificial Intelligence Governance Center, said in a statement provided to Law360 Friday that this week's grueling negotiations "signal the monumental nature of the EU AI Act."

"It will have a massive impact on all aspects of the global digital economy," Casovan said. 

While the GDPR, which took effect in May 2018 and also served as a global model as the world's first comprehensive consumer privacy law, "changed the digital economy" by introducing massive fines and prompting shifts in both compliance approaches and business models, "we expect the impact here will be bigger," according to Casovan. 

"The EU AI Act will require greater efforts to operationalize than the GDPR, demanding organizations address the risks of powerful new technologies that they are just beginning to understand and implement," Casovan said. "Hundreds of thousands of AI governance professionals are urgently needed to ensure AI systems are developed, integrated and deployed in line with the EU AI Act and emerging AI laws globally."

--Editing by Jay Jackson Jr.
