EU Orders Meta, TikTok To Explain Content Moderation Efforts

(October 19, 2023, 10:28 PM EDT) -- The European Union's executive body on Thursday directed Meta and TikTok to detail within a week how they're complying with new legislation that requires them to do more to take down harmful posts and misinformation, following reports that such content has spread rapidly amid the Israel-Hamas war.

In formal requests for information sent to TikTok and Meta, which is the parent company of Facebook and Instagram, the European Commission gave the social media giants until Oct. 25 to explain the measures they've taken to comply with their obligations under the bloc's Digital Services Act to prevent the spread of violent content, hate speech and disinformation on their platforms.

The commission also asked the companies to furnish information by Nov. 8 about their efforts to protect the integrity of elections, and it set the same deadline for TikTok to answer additional questions about its compliance with other elements of the DSA that address the protection of minors online.

Based on these replies, the commission "will assess next steps," which it said could include formally opening enforcement proceedings under Article 66 of the DSA. The commission could also impose fines if it finds that the companies have offered "incorrect, incomplete or misleading information" in response to its requests, or levy periodic penalty payments if the companies fail to reply by the deadlines.

The information requests come on the heels of the commission's designation in April of the first set of large digital platforms and search engines that would be subject to the heightened content moderation rules under the DSA. The list of 19 providers included Meta, TikTok, Amazon, Apple's App Store, Microsoft's LinkedIn, several services owned by Google, and X, the social media site formerly known as Twitter.

The commission said that within four months of the designations, the companies would have to start complying with the new rules imposed by the Digital Services Act, which is intended to protect users online, especially minors, by requiring risk mitigation measures and content moderation tools.

Under the DSA, which European officials passed in April 2022, designated companies are required to provide users with an easy way to report illegal content and to diligently enforce clear terms and conditions covering content moderation. They also have to implement measures to mitigate risks that are specific to the platforms, such as ways to prevent misinformation, according to the commission.

Online platforms that don't comply with the rules could face fines of up to 6% of their worldwide revenue. 

Thursday's requests for information ask Meta and TikTok to detail the steps they've taken to comply with their obligations related to risk assessments and mitigation measures concerning the dissemination and amplification of illegal content and disinformation, as well as the protection of the integrity of elections.

TikTok has additionally been directed to provide information about how it's following heightened rules to ensure the protection of minors online. 

A TikTok spokesperson confirmed that the company has received the European Commission's request for information and is currently reviewing it. 

"We'll publish our first transparency report under the DSA next week, where we'll include more information about our ongoing work to keep our European community safe," the spokesperson said. 

A Meta spokesperson said the company also plans to respond to the European Commission and is "happy to provide further details" of its work in this space, "beyond what we have already shared."

Meta has stressed that it maintains "a well-established process for identifying and mitigating risks during a crisis while also protecting expression" and has noted that, after the terrorist attacks by Hamas on Israel, it "quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation."

"Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation," the spokesperson said in the company's latest statement.

Efforts to crack down on the spread of harmful online content have accelerated in the wake of the surprise attacks that the Palestinian militant group Hamas launched on Israel earlier this month. The attacks and their aftermath in the region, according to reports, have led to a rapid increase in online threats against Jewish and Muslim people and institutions.

Last week, New York Attorney General Letitia James fired off a series of letters calling on Meta, TikTok, X, Google, Reddit and Rumble to provide detailed explanations of what they're doing to ensure that their services aren't being used to "incite violence and further terrorist activities," including by describing how they're "identifying, removing and blocking" antisemitic and Islamophobic content.

"In the wake of Hamas' unspeakable atrocities, social media has been widely used by bad actors to spread horrific material, disseminate threats, and encourage violence," James said in an Oct. 14 statement. "These platforms have a responsibility to keep their users safe and prohibit the spread of violent rhetoric that puts vulnerable groups in danger. I am calling on these companies to explain how they are addressing threats and how they will ensure that no online platform is used to further terrorist activities."

--Editing by Michael Watanabe.

Update: This article has been updated to add statements from TikTok and Meta. 
