UK Reaches Landmark AI Risk Testing Agreement With US

(April 2, 2024, 5:43 PM BST) -- The U.K. government said Tuesday it had reached a landmark agreement with the U.S. to share the testing of advanced artificial intelligence models, after highlighting in a report the technology's increasing use by cybercriminals to attack financial institutions and businesses.

The U.K. and U.S. signed a memorandum of understanding on Monday under which the U.K.'s AI Safety Institute will collaborate with its U.S. equivalent on the tests. The U.S. Artificial Intelligence Safety Institute houses a linked consortium whose members include JPMorgan Chase, Bank of America, Citigroup, Vanguard, Mastercard and Visa.

The agreement comes at a time when the U.K. financial sector is increasingly concerned about AI and its implications for customer service and financial markets. The Bank of England found in a survey in March that 14% of banks, insurers and other firms saw AI as one of the five biggest risks to financial stability, up from 7% a year earlier.

"This agreement represents a landmark moment, as the U.K. and the United States deepen our enduring special relationship to address the defining technology challenge of our generation," Michelle Donelan, U.K. secretary for state for science, innovation and technology, said in a statement.

The U.K. and U.S. institutes intend to perform at least one joint testing exercise on a publicly accessible model, according to the government. They will share information, cooperate closely and exchange expert staff.

The consortium housed at the U.S. institute draws representatives from more than 200 businesses and organizations, including banks and finance firms, and is responsible for developing AI-related guidance and research.

The partnership between the two institutes takes effect immediately and is intended to allow seamless cooperation, the government said.

"As the countries strengthen their partnership on AI safety, they have also committed to develop similar partnerships with other countries to promote AI safety across the globe," it said.

The government said that the move makes good on commitments made at an event it held in November 2023 known as the AI Safety Summit, including the establishment of the AI Safety Institute. A month earlier, the government had published a paper on frontier AI, the most advanced, ground-breaking models, to support the summit.

In that paper, the government said frontier AI could automate legal work and support top wealth managers, but carried risks, including that it could significantly worsen cybercrime. Criminals are already using AI to conduct scams and steal login credentials.

The government also noted the risks of cyberattacks on financial infrastructure. In March, Lloyd's of London, the leading specialty insurance market, warned in a report of the significant risks AI poses to businesses today.

The government said in the paper that bias in AI systems was particularly concerning in sectors such as financial lending, where it could have profound consequences.

The Bank of England in October 2023 raised concerns that financial firms using biased AI could face reputational and legal risks of the kind that regulators consider in setting capital requirements. Concerns include algorithms used in some cases to decide credit card applications, which have appeared to offer less credit to women than to men.

The government has asked the Financial Conduct Authority and Bank of England, among other regulators, to publish an update by April 30 on how they will police AI, referring to the government's post-Brexit white paper of March 2023 outlining a cross-sector regulatory framework.

--Additional reporting by Sam Tabahriti. Editing by Robert Rudinger.

For a reprint of this article, please contact reprints@law360.com.
