Governments around the world are trying to tighten their scrutiny of social media recommender systems, the algorithms that determine what users see on platforms such as Instagram and TikTok. After years of debate around the risks of harmful content and the need to moderate it, regulators are now shifting to design choices, particularly engagement-driven algorithms, as the next frontier of online safety policy.
This article is part of an MLex online safety special series running this week. Other stories focus on US-specific regulation, age verification and video gaming (see here).

The UK and EU online safety regimes both place recommender systems at the center of new obligations, and similar conversations are gaining traction elsewhere in the world.
Other jurisdictions to watch include Brazil, due to its recently approved online safety law, soon to be implemented (see here), and Singapore, where a new online safety law establishes a commission empowered to investigate social media reports, issue directions and order remedial measures such as content removal or account restrictions (see here).
But nagging questions remain: Is it possible to regulate algorithms effectively? Can doing so actually change users’ behavior and experiences? Finally, how might the results of regulatory interventions on platforms’ algorithms be measured to ensure that such policies are working?
— The UK test —
The UK is an important battleground for this debate as the country moves forward with the implementation of its Online Safety Act.
Interpreting the still-developing law, Ofcom has characterized recommender systems as “children’s main pathway to harm online,” arguing that the way content is ranked and promoted can create risks in itself.
That’s why the regulator has placed algorithms at the center of its “protection of children” codes of practice, published in April (see here). These rules, enforceable since July, place duties on platforms to go beyond content flagged as harmful by content moderation systems.
They must also take a more “precautionary approach” and consider changing the algorithmic recommendation of content that displays other signs of being harmful to children, Ofcom has said.
“Platforms must design their content classifiers in a way that actually takes the wider signals into consideration to indicate that content might be harmful to children and should therefore never be recommended by the algorithm,” Almudena Lara, Ofcom’s online safety policy director, told MLex when the codes were unveiled in April.
The regulator’s transparency powers already allow it to require companies to explain “the design and operation of algorithms which affect the display, promotion, restriction or recommendation of content.”
That shift, from regulating how platforms moderate content to auditing how their systems feed that content to users, underpins a broader policy trend in Europe.
In the EU, recent Digital Services Act guidelines for big platforms, published earlier this year, mirror Ofcom’s thinking by identifying recommender systems as a core risk area and demanding more transparency in how algorithms amplify content (see here and here).
The guidelines say that platforms should implement measures preventing algorithms from recommending harmful or risky content to children, including material that promotes unrealistic beauty standards, dieting, mental-health harms, discrimination, radicalization, violence or dangerous activities.
They also state that platforms should allow children to fully and permanently reset their recommended feeds to protect their privacy, safety and security.
The most concrete illustration of this trend so far came from a Dutch court in October, when judges ruled that Meta Platforms must let users more easily select and keep an algorithm-free feed on Facebook and Instagram (see here). The court found that the design, which defaults back to a personalized feed after users opt out, breached the DSA and amounted to a prohibited “dark pattern.”
Meta will appeal, arguing that such national rulings could fragment enforcement of the EU’s single rulebook. But this case sets an important precedent and could offer some guidance to child safety campaigners fighting for safer algorithms.
At an EU level, Meta’s Facebook and Instagram, as well as TikTok, are currently being probed by the European Commission, which polices the Digital Services Act, over concerns that their algorithms may stimulate behavioral addictions in children and create “rabbit-hole” effects, where the feed draws users ever deeper into a single topic (see here and here).
— How algorithms work —
Recommender systems rank and prioritize material according to metrics such as watch time, reactions and scrolling speed. Their goal is usually to keep users engaged, regardless of the content they promote.
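To make that mechanic concrete, the following is a deliberately simplified sketch, in Python, of the kind of engagement-driven ranking described above. The signal names, weights and structure are illustrative assumptions, not any platform’s actual system; real recommenders use learned models over far richer behavioral data. The core point carries through, however: items are ordered by predicted engagement, not by what they contain.

```python
from dataclasses import dataclass

@dataclass
class EngagementSignals:
    # Illustrative signals only; real platforms use far richer features
    # and learned models rather than hand-set weights.
    predicted_watch_seconds: float
    predicted_reactions: float
    scroll_pause_seconds: float

def engagement_score(s: EngagementSignals) -> float:
    """Toy linear ranking score: higher means more likely to be shown."""
    return (0.6 * s.predicted_watch_seconds
            + 0.3 * s.predicted_reactions
            + 0.1 * s.scroll_pause_seconds)

def rank_feed(candidates: dict[str, EngagementSignals]) -> list[str]:
    """Order candidate posts purely by predicted engagement,
    irrespective of what the content actually is."""
    return sorted(candidates,
                  key=lambda post_id: engagement_score(candidates[post_id]),
                  reverse=True)

if __name__ == "__main__":
    feed = rank_feed({
        "post_a": EngagementSignals(45.0, 12.0, 3.0),
        "post_b": EngagementSignals(10.0, 2.0, 1.0),
    })
    print(feed)  # ['post_a', 'post_b']
```

Regulatory proposals in the UK and EU effectively ask platforms to build safety signals into that ordering step, for example by demoting or excluding items that show signs of being harmful to children, rather than ranking on engagement alone.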
That business model, which relies on engagement and time spent on social media, is not currently being challenged by regulators. But in the UK and the EU, watchdogs seem to agree that relying too heavily on engagement could pose a systemic risk to children.
Ofcom’s codes and the DSA’s risk assessment obligations are both intended to force companies to consider how their algorithms amplify particular categories of content.
Brazil is following a similar trajectory, though its politics differ. The government has stopped short of advancing a broader digital services bill, but has passed a children’s online safety law that introduces some design duties for platforms (see here).
Officials say the goal is to prevent commercial incentives from becoming safety liabilities. “The real concern is that there are business models whose functioning creates room for platforms to become, in some cases, partners in crime,” Lílian Cintra de Melo, head of the Justice Ministry’s digital rights secretariat, told MLex in October.
Public debate has focused on potential links between algorithms and deteriorating mental health among minors, but scientists have yet to establish that link conclusively.
A European Commission policy brief released this year noted “challenges for establishing causality” between social media use and poor mental health outcomes and called for more longitudinal studies. In the UK, Ofcom’s policy explainers describe recommender systems as exposure routes rather than direct causes of harm.
But even without scientific consensus on causation, experts have been urging governments and regulators to pay more attention to algorithms in debates around child protection.
Beatriz Kira, a Brazilian researcher based in the UK who publicly advised British lawmakers on the Online Safety Act’s implementation earlier this year and has also worked on child safety policy in Brazil, argued that many proposals around the world misread how recommendation engines actually work.
“A recommender system is a content-agnostic engagement engine,” Kira told a parliamentary committee earlier this year. Ofcom’s codes focused too much on illegal content and failed to address the “cumulatively harmful” impact of recommender systems, she argued.
Even if Ofcom’s powers to regulate algorithms are limited by the Online Safety Act’s scope, she said, the regulator could still do a better job of using its children’s protection powers to gain insight into Big Tech’s algorithms and finally tackle legal but harmful content.
— US lawsuits and the FTC’s limits —
In the United States, the debate is unfolding through litigation. Two sprawling sets of coordinated lawsuits accuse Meta, TikTok owner ByteDance, Google and Snap of designing products that are “addictive” for children and harmful to their wellbeing (see here). A series of state and federal trials is set to kick off next year.
The claims run up into the “high tens of billions of dollars,” Meta has said, warning investors that the proceedings “may ultimately result in a material loss” (see here). Many of the plaintiffs’ claims were narrowed, however, because Section 230 of the Communications Decency Act limits the liability of interactive online platforms for content posted by their users.
The first trials involving individual plaintiffs are expected to begin as early as January before Judge Carolyn B. Kuhl in California’s Los Angeles County Superior Court.
Meanwhile, US District Judge Yvonne Gonzalez Rogers will oversee parallel federal trials involving both school districts and individual plaintiffs, scheduled for June.
Without a dedicated federal framework, the US continues to address algorithmic risks through court judgments and settlements rather than the systemic audits seen in the UK and EU and expected soon in Brazil.
— What next? —
Globally, the increased scrutiny of algorithms could mark a point where data protection, child safety and corporate governance obligations begin to overlap.
In a report about data protection trends for 2026, prominent law firm Freshfields recently described children’s data and online experiences as a “global compliance frontier,” warning that companies must rethink how their systems collect and process children’s information across different jurisdictions.
The message for businesses is that algorithmic design will be a central focus for regulators; for the watchdogs themselves, the challenge will be ensuring that their interventions lead to measurable change.
Across Europe, the growing political attention to algorithms is plain to see: lawmakers increasingly point to the tools platforms use to protect their own commercial interests as evidence that stronger safeguards are technically feasible.
Child safety advocates argue that if platforms can detect copyrighted material within seconds, they could identify and demote harmful material just as easily.
“When you go home tonight, I want you to record a video of yourself and put five seconds of Taylor Swift’s latest song in the background and try uploading it to YouTube and see how fast their algorithms can identify her song and shove it straight down,” Imran Ahmed, founder and chief executive of the Center for Countering Digital Hate, told UK lawmakers in October.
“They have been doing it for a decade when it comes to copyrighted material, and yet when it comes to content that might make our kids cut themselves, or might lead to a terrorist attack, they somehow cannot do it,” he added.
The political expectation is clear. But whether regulating algorithms can genuinely transform teenagers’ experience online, as its proponents assume, will define the next phase of the debate.
Please email editors@mlex.com to contact the editorial staff regarding this story, or to submit the names of lawyers and advisers.