Analysis

AI-Driven Fake Evidence Could 'Play Havoc' In Legal Disputes

(June 23, 2025, 4:24 PM BST) -- A recent High Court judgment exposed how nonexistent artificial intelligence-generated citations had been used in legal arguments — but experts say this could be the tip of the iceberg for increasingly sophisticated fake evidence making its way into disputes.


A High Court judge has warned that lawyers who refer to nonexistent cases after they use generative AI tools could face contempt charges. (iStock.com/Tero Vesalainen)

Lawyers are increasingly alive to the risks of AI-generated evidence that is becoming indistinguishable from its non-AI counterparts. They are also aware that AI is being used to manipulate videos and images and to create material that appears real but is actually fake.

"The thing about tech is it evolves at such a pace that it can be used across the dispute spectrum to create havoc in any part of the proceedings," Joel Seager, a Fladgate LLP partner, said.

"Any role where you are trying to present evidence creates the opportunity for nefarious activity," Seager continued. "Any time you're relying on someone to say something, there's an opportunity to create almost a fake person to do that for you."

The dangers were illustrated by a High Court ruling on June 6 that referred a barrister and solicitor to their professional regulators over two cases in which dozens of case-law citations were put before the court that either did not exist or contained made-up passages.

Judge Victoria Sharp wrote that generative AI tools such as ChatGPT "are not capable of conducting reliable legal research" and warned that lawyers who refer to nonexistent cases could face contempt charges.

Gareth Tilley, a barrister at Serle Court, told an international disputes event in London days before the ruling that it was a "sorry state of affairs" that lawyers were using generative AI tools to produce written legal arguments or witness statements which are not then checked, leading to false information being put before the court.

However, fake authorities are easily spotted — usually when the opposing solicitor tries to find them — and have so far been dealt with by the courts with orders for wasted legal costs, Tilley said.

The panel at the event on June 4, held during International Disputes Week, instead highlighted how the ability to manipulate emails, contracts, and multimedia content is now accessible to anyone through common software and apps and is giving rise to more convincing fake evidence being put before the courts.

A recent claim at the High Court exposed how fake death and marriage certificates had been used by a convicted fraudster to try to seize a Nigerian woman's London home. June Ashimola, who supposedly died in 2019, appeared via video link before High Court Judge John Linwood to declare that she was alive and was a victim of a scam. The claim alleged that Ashimola had been married to a man named Bakare Lasisi, who, it later emerged, never existed.

"Almost anything can now be altered and manipulated in a way to communicate the kind of message that we might want," Vijay Rathour, a Grant Thornton LLP partner, said.

Rathour showed how generative AI can be used to create realistic videos and manipulate existing footage, almost in real time. He highlighted how an employee of British engineering company Arup had been tricked in 2024 into sending $20 million to the bank accounts of criminals in a deepfake video conferencing scam.

"Deepfakes" are video clips and audio recordings created by AI that are fake but appear to be real. The Hong Kong-based employee was duped when he joined a call with a digitally cloned version of a senior manager and other fake employees.

Rathour said the technology is "getting so realistic that it's almost impossible now to distinguish between reality and fake. You can imagine the amount of digital evidence, the amount of video evidence, the amount of audio evidence that we're increasingly going to be exposed to can only increase. We can only expect that the burden of proving that authenticity is going to increase."

High Court Judge Christopher Butcher was confronted in 2024 with a claim that stemmed from a nonexistent arbitration agreement, which could have been even harder to uncover had AI been used in the deception.

In that case an entity called Contax Partners Inc BVI tried to obtain more than £70 million from a banking group associated with the Kuwaiti sovereign wealth fund through a fabricated arbitration claim. This involved the manipulation of a real court judgment.

Judge Butcher initially granted Contax the award, before discovering that large parts of the arbitration decision, supposedly issued in Kuwait, were actually drawn from a 2022 decision by High Court Judge Simon Picken in a different case, with the names diligently changed.

"What would have happened if they had drafted something from scratch?" Tilley of Serle Court asked at the disputes week event. "Would the claimants have been able to clear that summary hurdle and force everyone to proceed to a trial where they would have to evaluate the merits and call the purported arbitrator to say whether or not there had been an arbitration?"

He said that this "would have been an absolute nightmare for the innocent defendants in the case. A lot of the work in exposing these facts lies with us practitioners going away and doing our homework."

The biggest challenge for solicitors as AI becomes more commonplace will be discerning what has been created by the tech, and what has been altered or fabricated by it.

Seager of Fladgate predicts that "the most ambitious kind of fakery" could one day extend to fake witnesses: individuals who claim they cannot attend hearings in person and ask to appear remotely.

"What's to say that a party wanting to present a witness in a trial would not present an AI image of that person but with someone else actually talking?" Seager suggested.

Beyond fake witnesses, lawyers will also need to be alive to the potential of dealing with fake clients — people who are pretending to be someone else to launder money. This is not in itself new, but it is going to be a challenge in a different way because of AI, Seager said.

"It's not that it hasn't been done before. We've had sham litigation and sham arbitration, but the difference is in those days they would create a fake email address, and they would send you fake passports — now we're going to have fake virtual imagery," Seager said.

--Editing by Joe Millis.

For a reprint of this article, please contact reprints@law360.com.