EU Gives Final Approval To World's First AI Rulebook

(May 21, 2024, 6:12 PM BST) -- European Union lawmakers gave the final green light on Tuesday to the world's first rules on artificial intelligence, which cover most sectors, including financial services, and classify its use in bank lending risk assessments and insurance underwriting for EU citizens as high-risk.

The Council of the European Union, which negotiates and adopts EU laws, approved the Artificial Intelligence Act, which covers the use of AI in areas ranging from healthcare and employment to telecommunications and public services.

The new rules, which will enter into force in the coming months, will ban certain AI practices whose risk is deemed unacceptable, such as manipulative techniques and systems that use biometric data like fingerprints to categorize people by, for example, race or religion. The rules also classify some AI applications as high-risk, a designation that will cover certain uses by banks and insurers.

AI systems used by banks to assess credit risk, or by insurers to assess customers' insurance needs, will be classified as high-risk under the act, meaning the financial firms in question must keep detailed records of their AI use.

"The adoption of the AI act is a significant milestone for the European Union," Mathieu Michel, Belgian secretary of state for digitalization, said in a statement.

Under the rules, AI used by banks to assess the creditworthiness of individuals will be classified as high-risk because it determines whether they can access financial services. The concern is that such AI use could lead to discrimination.

AI used by insurers to assess the risk of customers seeking health and life insurance, and to set the price of premiums, is also classified as high-risk. Such AI can have a significant impact on customers' livelihoods and, if not properly designed, could infringe on their rights.

Banks and insurers using such high-risk AI systems must retain the automatically generated logs for at least six months as part of the documentation required under financial services rules. National financial regulators will supervise their use of AI.

AI systems used to detect fraud in financial services, or to calculate banks' and insurers' capital requirements, will not be classified as high-risk under the rules.

The cost of noncompliance with the act could be very high, with fines of up to €35 million ($38 million) or 7% of a company's worldwide annual turnover for the previous financial year, whichever is greater.

The new law also provides for AI regulatory sandboxes, which will offer a controlled environment for the development and testing of innovative AI systems.

"After all the lengthy negotiations it is a major achievement for the EU to reach consensus on the AI Act," said Patrick van Eecke, head of Cooley LLP's European cyber practice in Brussels.

"Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said.

The General Data Protection Regulation, or GDPR, in effect since May 2018, is a data privacy and security law drafted and passed by the EU. It imposes obligations on organizations anywhere in the world that target or collect data related to people in the bloc.

The Council of the European Union and the European Parliament will sign the Artificial Intelligence Act in June. The legislation will be published in the EU's Official Journal in July, coming into force 20 days later.

The rules will apply across member states two years from that point, with some exceptions for specific provisions. As a regulation, the act will apply in a harmonized way across member states without requiring national implementing legislation.

--Editing by Nicole Bleier.

For a reprint of this article, please contact reprints@law360.com.
