White House Calls Explicit AI Photos Of Taylor Swift 'Alarming'

(January 26, 2024, 10:33 PM EST) -- The White House on Friday called the recent circulation of sexually explicit artificial intelligence-generated images of Taylor Swift "alarming," saying social media companies have a duty to prevent the spread of "nonconsensual intimate imagery of real people," while others said the fake images amount to criminal sexual abuse.

Press Secretary Karine Jean-Pierre addressed the explicit AI-generated images, also known as deepfakes, circulating online that supposedly depict the 12-time Grammy Award-winning musician, saying in a press briefing Friday that President Joe Biden has made the issue of reducing the risks of AI a priority.

"It is alarming," Jean-Pierre told reporters at the briefing. "While social media companies make their own decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and nonconsensual intimate imagery of real people."

"Sadly, too often, we know the lack of enforcement disproportionately impacts women and girls, who are the overwhelming targets of online harassment and abuse," she added.

While she said Congress should take action to address deepfake images and other risks stemming from AI, Jean-Pierre noted that Biden in October issued a sweeping executive order setting out a road map for protecting consumers and workers from privacy, discrimination and other potential harms presented by the widespread deployment of AI.

The directive established new standards for AI safety, security and innovation across a range of industries, including technology, banking, education, healthcare, housing and the workplace.

Back in May, Rep. Joe Morelle, D-N.Y., introduced the Preventing Deepfakes of Intimate Images Act, which aims to ban the nonconsensual sharing of digitally altered intimate images online. According to the congressman, a 2019 report found that 96% of deepfake videos online were pornographic and exclusively targeted women.

The bill, H.R. 3106, is still languishing in the House of Representatives, and on Thursday Morelle took to X, formerly known as Twitter, to condemn the deepfake images of Swift.

"The spread of AI-generated explicit images of Taylor Swift is appalling — and sadly, it's happening to women everywhere, every day," he wrote. "It's sexual exploitation, and I'm fighting to make it a federal crime with my legislation: the Preventing Deepfakes of Intimate Images Act."

Stefan Turkheimer, vice president of public policy for nonprofit Rape, Abuse & Incest National Network, said in a statement Friday that more than 100,000 explicit images and videos, like those depicting Swift, are spread online every day.

"It's important to understand that the proliferation of deepfake images like the ones circulating featuring pornographic, AI-generated images of Taylor Swift are, in fact, sexual abuse and criminal acts," he said.

"We are angry on behalf of Taylor Swift, and angrier still for the millions of people who do not have the resources to reclaim autonomy over their images," Turkheimer added.

He also said social media platforms "have a responsibility to protect victims and weed out perpetrators, and RAINN will be there advancing policies to bring an end to this abuse."

--Additional reporting by Allison Grande. Editing by Adam LoBelia.

For a reprint of this article, please contact reprints@law360.com.
