In the agency's latest enforcement action targeting the accuracy of AI-related claims, the commission alleged that Workado LLC violated Section 5 of the FTC Act by promoting its AI Content Detector as "98 percent" accurate at identifying whether text was written by AI or a human, even though independent testing found the tool's accuracy rate on general-purpose content was just 53%.
The proposed order resolving these allegations requires Arizona-based Workado to take several steps to ensure it does not engage in similar "false, misleading or unsupported" advertising in the future, including refraining from making any representations about the effectiveness of its products unless it has "competent and reliable evidence" to substantiate the claim at the time it is made.
"Consumers trusted Workado's AI Content Detector to help them decipher whether AI was behind a piece of writing, but the product did no better than a coin toss," Chris Mufarrige, director of the FTC's Bureau of Consumer Protection, said in a statement Monday. "Misleading claims about AI undermine competition by making it harder for legitimate providers of AI-related products to reach consumers."
The agency, which is currently helmed by three Republican commissioners, voted unanimously to issue the administrative complaint and to accept the consent decree, which will now be subject to a 30-day public comment period.
A representative for Workado, which was formerly known as Content At Scale AI, couldn't be reached for comment Monday.
According to the agency's administrative complaint, Workado marketed and sold its AI Content Detector to consumers who were seeking to determine whether online content was developed using ChatGPT or similar generative AI products, or if it was written by a human being.
While the company claimed its AI Content Detector was trained on a wide range of material, including blog posts and Wikipedia entries, to boost its accuracy, the AI model powering the tool was fine-tuned only to classify academic content effectively, according to the FTC. The commission contended that these false and unsubstantiated performance claims constituted an unfair or deceptive practice in violation of Section 5.
Although Section 5 doesn't allow the FTC to obtain monetary penalties for first-time violations, the commission did secure an order requiring Workado to halt such allegedly unsubstantiated claims, retain any evidence it uses to support efficacy claims, notify eligible consumers about its settlement with the FTC, and submit compliance reports to the commission one year after the order is issued and annually for the following three years.
The order comes on the heels of FTC Commissioner Melissa Holyoak's confirmation last week that AI is one of the agency's top priorities under the administration that took over in January, when Republican Commissioner Andrew Ferguson was tapped to be the agency's new chairman. Since then, President Donald Trump has abruptly fired the agency's two Democratic commissioners, and the U.S. Senate has confirmed Mark Meador to fill its third Republican seat.
During her April 22 keynote address at the IAPP's Global Privacy Summit in Washington, D.C., Holyoak pledged that the commission would continue to "aggressively root out AI-powered frauds and scams and stop companies from making false or unsubstantiated representations that harm consumers," although she stressed that flexibility is needed to avoid "misguided enforcement actions or excessive regulation" that could stifle innovation and competition in the emerging field.
While Monday's action marks the latest such enforcement action brought since Ferguson took over at the FTC, the commission was also active on this issue under its prior chair, Democrat Lina Khan.
In September 2024, the FTC revealed a flurry of enforcement actions aimed at cracking down on the use of AI to "supercharge" harmful and deceptive business practices, as part of an enforcement sweep called Operation AI Comply.
The cases include one against DoNotPay, a company whose "AI Lawyer" service failed to live up to its billing as "the world's first robot lawyer," as well as matters targeting businesses that promoted an AI tool that allegedly helped customers create fake reviews and that falsely claimed the emerging technology could help consumers make money through online storefronts.
--Editing by Kristen Becker.
For a reprint of this article, please contact reprints@law360.com.