Section 230 of the Communications Decency Act is facing legal scrutiny, leading many to wonder whether the internet as we know it is over. Section 230 shields websites from liability for content their users post: if Wanda posts something inflammatory on Facebook, Facebook is not liable for what Wanda posted (though Wanda might be). These protections have come under question as internet companies have gained power in recent years. The provision has given companies the freedom to grow and expand, but perhaps at a cost to the users of their platforms. Is that cost too great? And would repealing Section 230 threaten free speech?
Plaintiffs in Gonzalez v. Google, a case brought to the Supreme Court last month, allege that companies that serve consumers content through recommendation algorithms should be liable for the content they promote. In this case, the family of Nohemi Gonzalez, a US citizen killed in a 2015 terrorist attack in Paris, France, is suing Google, which owns YouTube, for recommending videos that depicted terrorist activity or served as terrorist recruitment material. The family argues that though YouTube is not liable for hosting such content under Section 230, the company is responsible for the algorithms that serve that content to viewers who may take inspiration from it.
Additionally, a group of Democratic senators has reintroduced a bill that would limit Section 230: the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act. Sen. Mark Warner (D-VA) believes that regardless of what the Supreme Court decides, “it’s clear that we need to act to rein in these companies that have used Section 230 as a shield for far too long.” SAFE TECH would strip Section 230 protection from paid content such as advertisements, and would remove the immunity that has blocked enforcement of civil rights laws and wrongful death actions against platforms. The bill, first proposed two years ago and now introduced a second time, has not yet advanced beyond introduction, and it would not fully repeal Section 230.
Advocates for free speech worry that weakening Section 230 would chill expression online. Under Section 230, individuals have always been liable for their own speech, and companies like Twitter, TikTok and even MySpace have long engaged in content moderation that limits what users can and cannot post. New legislation would place a greater burden on companies to stop promoting dangerous content, even when it generates clicks and ad revenue, and SAFE TECH would likely push platforms toward tighter, more restrictive moderation. The implications reach further: political organizers and sex workers, who already face aggressive moderation, may have an even harder time going about their business.
SAFE TECH is not an antitrust measure and would not break up big tech companies like Alphabet or Meta, which own Google and Instagram respectively. The act is meant to target the “dissemination of material that is likely to cause irreparable harm” and would treat companies as publishers of such content, rather than as the passive hosts the law currently considers them to be.
We don’t know what the internet would look like without Section 230, but, luckily, the current legislation would amend Section 230, not replace it. There are greater battles to be fought over freedom of speech in the United States.