5th Circ. Eyes Barring AI Use, Mandating Accuracy Check

Attorneys before the Fifth Circuit may soon have to inform the federal appeals court that their documents were not written using generative artificial intelligence programs and, if they were, that they were reviewed by humans for accuracy.

The court is accepting comments on the proposed change to its certificate of compliance rules for attorneys and pro se litigants through Jan. 4. Under the proposed rule, "material misrepresentation" of whether generative AI was used in a court document may result in sanctions and the court tossing the document.

The new rule would require lawyers and those filing without representation to certify that no generative AI tools were used when drafting the document they are filing. If a program was used, they must certify that all of the text, including citations and legal analysis, was reviewed for accuracy and approved by a human.

The court is considering the rule change as the legal world continues to grapple with the use of AI tools.

In June, U.S. District Judge P. Kevin Castel of the Southern District of New York sanctioned two personal injury attorneys for submitting a brief written by artificial intelligence that cited nonexistent case law.

The judge found attorneys Peter LoDuca and Steven Schwartz of the personal injury boutique Levidow, Levidow & Oberman PC "abandoned their responsibilities" to check their work, and that their behavior rose to "bad faith" when they waited weeks to finally admit to the incident.

"Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance," Judge Castel wrote. "But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings."

Earlier this year, U.S. District Judge Brantley Starr of the Northern District of Texas began requiring attorneys before him to attest that they wrote their filings themselves without the help of platforms like ChatGPT. If lawyers do use AI tools to craft their filings, they must certify that those filings were checked for accuracy "by a human being," he said.

While AI platforms are powerful and may have many valuable uses when it comes to the law, "legal briefing is not one of them," Judge Starr said in his order. He added that the tools are prone to inventing quotes and citations.

"Another issue is reliability or bias," the judge added. "While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath."

--Editing by Alyssa Miller.

For a reprint of this article, please contact reprints@law360.com.
