The problem, as lawyers see it, is not just that the way AI operates is not fully transparent. The FCA's senior managers regime is also unclear about the responsibility managers carry if AI causes damage on their watch. Guidance called for by the committee may help fill the gap, but some observers want new rules to be introduced.
"The difficulty is that the senior managers and certification regime was not designed with artificial intelligence in mind," Rekha Cooke, a partner at Kennedys, said. "It imposes a deliberately high standard of personal accountability but offers little concrete guidance on how that standard is meant to be met where firms deploy complex, adaptive or opaque systems."
Senior Managers Unclear About FCA Rules
This leaves senior managers exposed to the FCA's judgment after the event, rather than guided by clear expectations in advance. The FCA has made it clear that "I did not understand it" will not be accepted as a defense.
Lawyers see the potential for a senior manager to get tripped up by, for example, a wrongful selling scandal driven by AI. A bank could use a partly opaque AI model to automatically assess a customer's risk tolerance and suitable investments in selling campaigns.
If the bank relies on such a model to sell a risky asset such as a crypto fund or mini-bond, senior managers are likely to depend on experts' assessments of the model's accuracy, which could be wrong.
Flawed AI could wrongly classify a batch of consumers as suitable buyers. The bank could sell to thousands of people, and heads would roll if the product collapsed.
"The FCA could frame the issue as a failure of systems and controls and pursue enforcement against the senior manager responsible for retail distribution on grounds they failed to take reasonable steps to prevent foreseeable consumer harm," Cooke warned.
Heads Of Compliance Exposed
Company heads of compliance could find themselves under investigation if the FCA decides that the compliance framework allowed them to rely on a black-box AI system without sufficient challenge or testing.
"If the FCA accepts that the legal duty is to govern the risk, not to perform miracles inside a black box, the regime can operate sensibly," Cooke said. "The present uncertainty is that the FCA is talking like it expects senior managers to do both."
Anybody with doubts that regulators will act on any scandal linked to AI or other technology has only to look at their track record.
The Prudential Regulation Authority, the Bank of England's regulatory arm, fined TSB Bank PLC's former chief information officer, Carlos Abarca, £81,000 ($102,000) in 2023, finding that he had failed to adequately manage an IT platform changeover and had breached a conduct rule for senior managers.
Lawyers declined to comment on that case. But they foresaw that AI-driven IT systems going wrong could leave senior managers in the firing line.
"We have seen the regulators use the senior managers and certification regime against individuals before in matters involving large scale IT outages," David Pygott, a partner at Addleshaw Goddard LLP, said.
"At the very least, a failure to provide proper oversight over the deployment of AI tools, resulting in a loss of control or consumer harm, could put a senior manager in breach of conduct rules," Pygott warned. "Beyond that, the position is far less clear."
AI Risk In The Motor Redress Program
Some lawyers warn that lenders' car finance calculations under the FCA's redress program are an AI accident waiting to happen. Senior managers, who are required to sign attestations that their companies' systems work, are on the hook.
The public outcry over undisclosed commissions in auto-financing led to the redress scheme, which was enabled by a landmark U.K. Supreme Court decision partly in favor of claimants in August 2025. The decision means the FCA risks its own reputation if it does not punish senior managers who get it wrong.
The FCA's consultation on the compensation program shows that the regulator expects lenders to develop automated tools. Some lenders could use these to sift through historical loan agreements and decide who is due compensation.
"If a firm's automated processes result in significant redress omissions or weak oversight, the FCA could view that as a systems and controls failure," Harper James partner John Pauley said.
That could open the floodgates for regulatory action based on partly retrospective judgments if investigations show that senior managers acted on AI they did not understand.
Insurers And Lenders Exposed
Insurers, like lenders, are exposed under the flawed rules. The committee's report identifies that finance companies' adoption of AI without adequate understanding could lead to consumers being denied credit or insurance without knowing why.
In such cases, lenders or insurers might be unable to explain the decision because of a lack of transparency in the AI model.
Another harm identified by the committee comes from AI-driven decisions that unfairly exclude disadvantaged consumers by drawing on related historical data.
Lawyers are unanimous that the FCA should offer guidance on what it expects of senior managers.
"We will review the report carefully," a spokesperson for the watchdog responded.
But some lawyers are calling for new rules to govern generative AI, which produces outputs by learning patterns from data and can be used in insurance and lending decisions.
"The main risk posed by generative AI in financial services is not deliberate misuse, but the risks presented by their complexity," David Rundle, a partner at Bryan Cave Leighton Paisner LLP, said. "Over time, it is likely to require more than high level regulatory guidance to address the risks effectively."
The problem with technology-based rules, however, is that the technology might have moved on by the time they come into force.
Bigger Role For Watchdogs
The bigger question is how far regulators should go in approving AI tools, given their potentially widespread use across the industry.
"This would represent a major shift in regulatory approach, with AI firms and models coming under direct regulatory scrutiny," James Burnie, financial services regulation and fintech partner at Gunnercooke LLP, said.
Regulators might then start signing off on some AI tools as acceptable for widespread use. That would require them to take on responsibility they are not used to, and with which they might not be comfortable.
But banks will still need more clarity on who is responsible for AI in their business.
"There is a strong case for clearer allocation of AI accountability under the senior managers and certification regime, particularly in larger firms," Charlotte Hill, a partner at Charles Russell Speechlys LLP, said.
But that is distinct from requiring one individual to personally understand every technical detail, a potential step which most lawyers reject.
Even so, there are limits on how readily AI models can be explained. The enforcement risk arises not only from how well AI is understood, but also from whether the technology is used with appropriate governance, oversight and challenge.
"AI providers are also quite inconsistent in the way they approach explainability, which doesn't help," Kolvin Stone, a partner at Fox Williams LLP, said.
--Editing by Ed Harris.