Almost any article you read about Section 230 reminds you that it contains the most important 26 words in tech and that it is the law that made the modern internet. This is all true, but Section 230 is also the most significant obstacle to stopping misinformation online.
Section 230 is part of the Communications Decency Act, a 1996 law passed while the internet was still embryonic and downright terrifying to some lawmakers for what it could unleash, particularly with regard to pornography.
Section 230 states that internet platforms — dubbed “interactive computer services” in the statute — cannot be treated as publishers or speakers of content provided by their users. This means that just about anything a user posts on a platform’s website will not create legal liability for the platform, even if the post is defamatory, dangerous, abhorrent or otherwise unlawful. This includes encouraging terrorism, promoting dangerous medical misinformation and engaging in revenge porn.
Platforms, including today’s social media giants Facebook, Twitter and Google, therefore have nearly unfettered control over the user content that appears on their sites — and over what information Americans see there.
How Section 230 came to be
The Communications Decency Act was the brainchild of Sen. James Exon, Democrat of Nebraska, who wanted to remove and prevent “filth” on the internet. Because of its overreaching nature, much of the law was struck down on First Amendment grounds shortly after the act’s passage. Ironically, what remains is the provision that allowed filth and other truly damaging content to metastasize on the internet.
Section 230’s inclusion in the CDA was a last-ditch effort by then-Rep. Ron Wyden, Democrat of Oregon, and Rep. Chris Cox, Republican of California, to save the nascent internet and its economic potential. They were deeply concerned by a 1995 case, Stratton Oakmont v. Prodigy, in which a court held Prodigy, an online bulletin board operator, liable for a defamatory post by one of its users because Prodigy lightly moderated user content. Wyden and Cox wrote Section 230 to preempt that reasoning. Without it, platforms would face an impossible choice: If they did anything to moderate user content, they would be held liable for that content, and if they did nothing, who knew what unchecked horrors would be released.
What lies ahead for social media reform
When Section 230 was enacted, less than 8% of Americans had access to the internet, and those who did went online for an average of just 30 minutes a month. The law’s brevity, and the fact that it was written for a far smaller internet, left it wide open to interpretation. Case by case, courts have used its words to give platforms broad rather than narrow immunity.
As a result, Section 230 is disliked on both sides of the aisle. Democrats argue that Section 230 allows platforms to get away with too much, particularly with regard to misinformation that threatens public health and democracy. Republicans, by contrast, argue that platforms censor user content to Republicans’ political disadvantage. Former President Trump even attempted to pressure Congress into repealing Section 230 completely by threatening to veto the unrelated annual defense spending bill.
As criticisms of Section 230 and technology platforms mount, Congress may reform the law in the near future. Democrats and Republicans have already proposed more than 20 reforms, ranging from piecemeal changes to complete repeal. However, free speech and innovation advocates worry that any of the proposed changes could do harm.
Facebook has suggested changes, and Google similarly advocates for some Section 230 reform. It remains to be seen how much influence the tech giants will be able to exert on the reform process. It also remains to be seen what, if any, reform can emerge from a sharply divided Congress.
Abbey Stemler is an associate professor of business law and ethics at Indiana University and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University.