Grok Makes Child Abuse Images For xAI's Profit, Victims Say

(March 16, 2026, 8:17 PM EDT) -- Elon Musk's xAI puts profits above all else by knowingly serving pedophiles who use the Grok generative artificial intelligence platform to transform ordinary photographs of children into child sexual abuse material they can trade with other predators across the internet, according to a lawsuit filed Monday in California federal court.

X.AI Corp. knows Grok is being used to create child sexual abuse material that is then trafficked online all over the world, but unlike OpenAI, Anthropic and Meta Platforms Inc., xAI "has made explicit content part of Grok's DNA" without any safeguards against using photos of real children to create abusive images and videos, according to the proposed class action filed by three Jane Doe plaintiffs.

"Like a rag doll brought to life through the dark arts, this child can be manipulated into any pose, however sick, however fetishized, however unlawful," the complaint states. "To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse."

While other AI companies recognized the dangers of their technology being misused by child sex predators and put industry-standard guardrails in place, xAI — and its founder Musk — chose not to, the plaintiffs claim.

Instead, xAI had its eye only on the profits it could make from Grok's "spicy mode," which "would respond to prompts to create sexual content with a person's real image or video," the complaint states. Since this capability was launched, users have generated millions of sexualized images, according to the suit.

For instance, the Center for Countering Digital Hate reviewed a random sample of 200,000 of the 4.6 million images Grok produced between Dec. 29, 2025, and Jan. 8, and estimated that the AI generated 3 million sexualized images, including 23,000 that appeared to depict minors, according to the complaint.

And despite several instances of people raising the alarm about Grok creating child sexual abuse material, Musk "made light of the trend of putting women and children in bikinis," posting on X, "Poor Grok," with a laughing emoji on Jan. 2, the suit states.

XAI recently limited Grok's image- and video-generation and editing capabilities to paid subscribers, a change that limits but does not prevent the creation of child sexual abuse material, according to the suit.

"It merely ensures that xAI will profit from all such content," the plaintiffs say.

According to the complaint, Jane Doe 1 in December 2025 received a message from an anonymous Instagram account telling her someone was disseminating pictures of her and other minor girls on Discord. When she saw the AI-generated images and videos, Doe says, she was "immediately disturbed by the sexually explicit content."

"Jane Doe 1 was taken aback at the verisimilitude of the depictions: other than the fact that she knew she had never been in those situations or done those things, she could not visually distinguish these images and video as fake; they resembled real-life content in every way," the suit states.

She says she recognized some of the photographs that were used to make the images, including one taken of her at a high school dance and another taken with her family. The person who used Grok to manipulate her photos had obtained them from Doe after taking advantage of her trust, the suit contends.

Doe, who alerted girls she recognized in other altered images on the Discord platform, then contacted local law enforcement and a criminal investigation was opened, according to the complaint. Police arrested the perpetrator late last December and conducted a search of his phone, the suit states.

The suspect traded the AI-generated child sexual abuse material online for sexually explicit content of other children, according to the complaint.

The other two plaintiffs, Jane Doe 2 and Jane Doe 3, learned through the criminal investigation that the suspect had also used their photographs to create AI-generated child sexual abuse material through xAI, the suit states.

The AI-generated images of all three plaintiffs have since been entered into a national database managed by the National Center for Missing & Exploited Children, which means they will be notified every time their files are identified as part of any criminal investigation, according to the suit.

"This means that for the rest of plaintiffs' lives they will likely receive periodic NCMEC notifications alerting them that criminal defendants have possessed, received or distributed CSAM files depicting them, subjecting them to constant waves of extreme stress and anxiety," the complaint states.

"Perhaps even more distressing to plaintiffs, however, is the trafficking of their CSAM images that will remain undetected by law enforcement," it continues. "The trading of their CSAM files will now almost certainly continue as other pedophiles, in turn, use their CSAM files to barter in the dark world of online CSAM trafficking."

The plaintiffs want to represent a nationwide class of potentially thousands of children "who had real images of themselves as minors altered by xAI/Grok to produce sexualized images or videos with their faces and/or other distinguishing features reasonably identifiable," according to the complaint.

The suit asserts claims including production with the intent to distribute child sexual abuse material, distribution of child sexual abuse material and possession of child sexual abuse material, beneficiary liability under the Trafficking Victims Protection Act, violations of California's Statutory Right of Publicity and Unfair Competition Law, strict liability and negligence for design defect, intentional infliction of emotional distress, and public nuisance.

It seeks damages, restitution, disgorgement, litigation costs, attorney fees and injunctive relief, among other things.

"These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators," Annika K. Martin of Lieff Cabraser Heimann & Bernstein LLP said in a statement Monday. "Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it."

"Without xAI, this harmful, illegal content could never, and would never, have existed," she added. "The lives of these girls have been shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience."

A representative for xAI did not immediately respond to a request for comment Monday.

This is far from the first time Musk's AI chatbot has been accused of facilitating the creation of nonconsensual sexually explicit materials, including "deepfake" images used to harass people online.

California Attorney General Rob Bonta has demanded xAI stop the creation and distribution of such materials, while a trio of senators called on Apple Inc. and Google LLC to remove Grok as well as Musk-owned social media platform X from their app stores until Musk adequately addresses the issue.

Separately, a group of 35 attorneys general sent a letter to xAI demanding stronger action to stop Grok from altering pictures to be sexually explicit or revealing.

Meanwhile, the United Kingdom's Information Commissioner's Office is investigating Grok's processing of personal data and its potential to produce harmful sexualized deepfakes. The European Commission has also launched its own investigation into Grok under the European Union's Digital Services Act to assess risks related to the dissemination of illegal content.

Influencer Ashley St. Clair, the mother of one of Musk's children, launched a lawsuit against xAI, claiming she was depicted in sexually explicit imagery generated by Grok without her consent and that xAI has "chosen to willfully turn a blind eye and even celebrate" similar sexual exploitation.

Another lawsuit contends xAI not only failed to implement safeguards against users making sexually explicit deepfakes of women without their permission but has also openly advertised and monetized that capability as a feature.

The plaintiffs in the current suit are represented by Annika K. Martin, Mark P. Chalos, Betsy A. Sugar and Michelle A. Lamy of Lieff Cabraser Heimann & Bernstein LLP, and Vanessa Baehr-Jones of Baehr-Jones Law PC.

Counsel information for xAI was not immediately available.

The case is Jane Doe 1 et al. v. X.AI Corp. et al., case number 5:26-cv-02246, in the U.S. District Court for the Northern District of California.

--Additional reporting by Rae Ann Varona, Allison Grande, Matthew Santoni, Eddie Beaver, Hailey Konnath and Mike Curley. Editing by Kristen Becker.

For a reprint of this article, please contact reprints@law360.com.