After more than four hours of testimony, a compromise on legislation addressing content moderation policies at Facebook, Twitter and other social media companies remained elusive, despite the committee's scheduled vote Thursday on proposed changes to the liability protections, known as Section 230, that shield online platforms.
While Democratic and Republican senators suggested there might be room to modify the so-called Big Tech immunity shield so that platforms don't wield too much power over online discourse, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey told the lawmakers they must first identify the main content moderation practices they want to address.
Zuckerberg said his platform operates on the premise that people should be allowed to share freely unless their posts would cause imminent harm to another person. He noted that even the most ardent First Amendment proponents agree people shouldn't be able to shout "Fire!" in a crowded theater because doing so could endanger bystanders.
This year, Facebook has had to assess how incorrect information posted on the platform about the coronavirus and the 2020 presidential election could cause imminent harm. Beyond clearly illegal or exploitive speech, "a lot of the debate is about, what are other forms of harm?" Zuckerberg said.
When Committee Chairman Lindsey Graham, R-S.C., asked Dorsey whether the best course would be to let the industry come up with a set of best business practices, he responded that, first, "we need alignment around the problem we're trying to solve."
In general, Democrats have called on social media companies to do more to curb misinformation, while Republicans, who have repeatedly asserted that the platforms disproportionately flag conservative posts as misinformation, have urged the companies to be more judicious in choosing which posts to moderate.
The vehicle for either move, identified by both parties, is Section 230 of the Communications Decency Act, which shields internet platforms from liability for content their users post and protects moderation decisions made in good faith. Lawmakers are considering stripping the liability shield from platforms whose moderation practices are either too lenient or are seen as politically biased.
The ongoing rift between the Democratic and Republican approaches to online content moderation was on full display Tuesday — the second time Dorsey and Zuckerberg have testified in a month — when Sen. Dianne Feinstein pressed Dorsey on whether his platform's labeling of potential election misinformation was robust enough.
The California Democrat expressed concern that the platform had left up tweets by President Donald Trump falsely claiming the election was marred by voter fraud, and that simply labeling the claims as disputed didn't go far enough to prevent the spread of misinformation.
"Do you believe this label does enough to prevent the tweet's harms when the tweet is still visible? It's a highly emotional situation, and the tweet has no factual basis," Feinstein said.
From the other side of the aisle, Sen. John Kennedy, R-La., pressed Zuckerberg on why Facebook makes overreaching content-moderation decisions when it could instead trust its users' own judgment. He argued that if web platforms overextend their moderation activities by curating and modifying content, they become publishers ineligible for protection under Section 230, which he said applies only to websites hosting user-posted content.
"I'm not saying you're wrong for doing what you just described, but that makes you a publisher and it creates problems under Section 230," Kennedy said of Facebook's moderation policies. "At some point, we have to trust people to use their own good judgment."
Instead of crafting new legislation — or even creating a new regulatory body to oversee online content moderation disputes — Sen. Ben Sasse, R-Neb., suggested it might be better to home in on the platforms' three- to five-year development plans, which currently call for more moderation transparency and more user choice in the algorithms that determine what content users see.
"I'm not really on the side of thinking there's an easy governmental fix here," Sasse said.
Zuckerberg and Dorsey laid out different yet potentially complementary visions for how platforms could become more transparent and give users more control over how their feeds are managed.
Dorsey suggested implementing a "much more straightforward appeals [process]" for censored content and a feature that would allow users to choose which algorithms sort and rank the content they see.
Zuckerberg said his platform is focused on increasing "transparency both in the process and in the results" of content moderation decisions, from taking down networks facilitating illegal activity to preventing the spread of misinformation.
As for next steps, the Judiciary Committee is slated to vote on Graham's bill, the Online Content Policy Modernization Act, on Thursday. If passed, the bill would more narrowly define what kinds of "otherwise objectionable" content can be blocked and still receive protection under Section 230. Graham has also co-sponsored the Online Freedom and Viewpoint Diversity Act, which would amend Section 230 to only protect companies whose content moderation practices meet a standard of "objective reasonableness."
--Editing by Philip Shea.