
How AI May Be Used In Fintech Fraud — And Fraud Detection

By Jacques Smith, Mattie Bowden and Rebekkah Stoeckler · March 15, 2024

Artificial intelligence and machine learning have revolutionized sectors across the globe. As companies embrace these technologies and a host of innovative ideas, however, heavily regulated industries face an increasingly complex landscape of liability, regulation and enforcement.

Recently, the government has pursued liability for firms and individuals in the cryptocurrency and finance sectors who oversold their AI application's capabilities to induce payments from their investors or customers.

These actions, discussed below, should serve as a cautionary tale to those involved in securities or cryptocurrency trading given that the use of AI and machine learning will increasingly form the basis of government claims of fraud and abuse in finance and fintech industries.

Meanwhile, the federal government continues to utilize its own AI and machine learning innovations to enforce anti-fraud regulations, and detect and investigate fraud and abuse.

AI-Enabled Fraud in Fintech

AI and machine learning have enabled remarkable advancements in the fintech, banking and finance, and healthcare industries.

Common to these industries is the accumulation of mass data — large, complex, fast-moving or weakly structured data. In the finance and banking sectors, mass data empowers AI and machine learning to transform services, introducing automated trading, risk management, customer service via chatbots and predictive analytics for future market trends.

Similarly, in fintech, AI and machine learning have played a crucial role in developing cryptocurrencies, algorithmic trading and blockchain technologies.

AI and machine learning certainly have the potential to revolutionize industries; however, the government has a demonstrated appetite to pursue liability for those who fraudulently oversell their AI application's capabilities, or fraudulently exploit their AI application's capabilities or the data that drives it.

For example, government authorities have pursued civil and criminal fraud charges for individuals or firms that oversell their AI or machine learning capabilities to attract and defraud investors.

Recently, the U.S. Securities and Exchange Commission charged Brian Sewell, the owner of an online crypto trading course, in connection with an alleged scheme to fraudulently induce students to invest in Sewell's hedge fund.[1]

According to the SEC, Sewell claimed that the investment strategies for the hedge fund would be guided by cutting-edge AI and machine learning. Instead of launching the fund or executing the trading strategies, Sewell held approximately $1.2 million from his students as bitcoin before his digital wallet was hacked and emptied.

Further, in December 2023, the U.S. Department of Justice charged David Saffron and Vincent Mazzotta with fraud, alleging that the two men conspired to induce individuals to invest in trading programs by falsely promising that the programs used automated AI to trade investments in cryptocurrency markets and return high-yield profits.[2]

Then, according to the DOJ, "[r]ather than investing victims' funds in cryptocurrency, [the defendants] allegedly misappropriated victims' funds" to pay for luxury personal expenses.[3]

Fundamental to the charges in both of these fraud cases was the defendants' alleged use of false promises about the capabilities and use of AI and machine learning applications. Those false promises, the government alleges, induced the victims to part with large sums of money.

As the use of AI becomes ubiquitous across industries, heavily regulated industries can expect to see more government oversight and enforcement. In fact, Deputy Attorney General Lisa Monaco recently stated that "[l]ike a firearm, AI can enhance the danger of a crime," and thus, the DOJ will seek harsher penalties for offenses made more harmful by the misuse of AI.[4]

Further, in recent remarks at Yale Law School, SEC Chair Gary Gensler promised that those who deploy AI to sell securities by fraud or misrepresentation should expect "war without quarter."[5]

This should be particularly concerning if the findings of a recent Berkeley Research Group survey of healthcare professionals are a bellwether for other heavily regulated industries like finance and fintech: It found that 75% of all healthcare professionals surveyed expected the use of AI to be widespread within three years, while "only 40% [had] reviewed regulatory guidance in preparing to implement the technology."[6]

Collectively, as heavily regulated industries embrace AI and machine learning tools, we can expect AI to serve as an increasingly common basis for government fraud charges.

AI-Enabled Fraud Detection

As with any other tool, AI can be harnessed for the common good. Government agencies and industry alike are increasingly leveraging AI and machine learning to combat fraud. Using advanced algorithms and machine learning techniques, AI systems can sift through vast amounts of data to identify subtle indicators of potential fraud before it occurs.

It is also common for banks, healthcare facilities and other firms to use AI-powered software to monitor customers' financial transactions and employees' compliance with policies and regulations. The government's AI tools for fraud detection and resource management are similar.

For example, the IRS and the U.S. Department of the Treasury's Financial Crimes Enforcement Network use AI-powered software to quickly complete tasks that are typically laborious and prone to human error.

AI and machine learning-powered risk assessment tools harness the agencies' troves of mass data to enable more accurate and economical detection of fraud.[7] Likewise, the U.S. Department of Health and Human Services uses AI to identify fraudulent billing practices in Medicare and Medicaid.[8]

The DOJ has also deployed AI and machine learning, for example, to trace and classify the sources of illegal drugs, and to triage and understand the millions of tips the FBI receives.[9]

Meanwhile, Jonathan Mayer, the DOJ's chief AI officer and chief science and technology adviser, is tasked with ensuring that the DOJ is "prepared for both the challenges and opportunities that new technologies present," according to a press release announcing his appointment.[10]

In the coming years, the government and industry alike will undoubtedly continue to develop and deploy AI and machine learning to achieve better outcomes, detect criminal or fraudulent conduct, and increase worker productivity.

As the government grapples with creating sophisticated regulatory frameworks to keep pace with industry and to prevent AI-enabled fraud and abuse, it is crucial that company leadership work closely with their innovators to ensure that the use of AI and machine learning does not run afoul of existing and emerging regulations.

The future of AI and machine learning in regulated industries promises to be a dynamic and evolving landscape, but it requires savvy legal, regulatory and compliance know-how to avoid facing liability for AI-enabled fraud.



D. Jacques Smith is a partner at ArentFox Schiff LLP, and co-leads the firm's complex litigation practice and co-chairs the investigations group.

Mattie Bowden is an associate at the firm.

Rebekkah Stoeckler is an associate at the firm.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] https://www.sec.gov/news/press-release/2024-13.

[2] https://www.justice.gov/opa/pr/two-men-charged-operating-25m-cryptocurrency-ponzi-scheme.

[3] See also https://www.justice.gov/opa/pr/empiresx-head-trader-pleads-guilty-global-cryptocurrency-investment-fraud-scheme-amassed. (In 2022, Joshua Nicholas pleaded guilty to conspiring to commit securities fraud in connection with a cryptocurrency-based scheme in which he and his co-conspirators made fraudulent guarantees to investors and misrepresentations about a purported proprietary bot that would use AI to maximize profits.)

[4] https://www.justice.gov/opa/speech/deputy-attorney-general-lisa-o-monaco-delivers-remarks-university-oxford-promise-and.

[5] https://www.sec.gov/news/speech/gensler-ai-021324.

[6] https://assets.law360news.com/1807000/1807257/brg-report-ai-and-the-future-of-healthcare.pdf.

[7] https://home.treasury.gov/news/press-releases/sm663.

[8] https://law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf; https://www.fau.edu/newsdesk/articles/medicare-fraud-big-data.

[9] https://www.justice.gov/opa/speech/deputy-attorney-general-lisa-o-monaco-delivers-remarks-university-oxford-promise-and.

[10] https://www.justice.gov/opa/pr/attorney-general-merrick-b-garland-designates-jonathan-mayer-serve-justice-departments-first.
