If the Supreme Court undermines Section 230, marginalized people will pay the price
Section 230 is a widely misunderstood but foundational law for human rights and free expression online. Especially in the wake of the Dobbs decision, weakening it would be a disaster.
The Supreme Court of the United States has said it will hear two cases related to Section 230 of the Communications Decency Act. Any decision it makes will have profound implications for the future of online speech and human rights.
Digital rights group Fight for the Future has long warned lawmakers about the potentially disastrous effects of amending Section 230. The group issued the following statement, which can be attributed to its director, Evan Greer (she/her):
Section 230 is a foundational and widely misunderstood law that protects human rights and free expression online. At a time when civil rights and civil liberties are under unprecedented attack, weakening Section 230 would be catastrophic—disproportionately silencing and endangering marginalized communities including LGBTQ+ people, Black and brown folks, sex workers, journalists, and human rights activists around the world.
The Supreme Court’s decision to overturn Roe v. Wade and strip millions of Americans of their bodily autonomy makes the prospect of Section 230 being weakened even more nightmarish. As we explained in Wired, meddling with Section 230’s protections in the wake of the Dobbs decision will lead to the widespread removal of online speech related to abortion, including information about abortion access and organizing and fundraising efforts. Far-right organizations like the National Right to Life Committee have drafted legislation that criminalizes not only providing an abortion but hosting abortion speech online. The legal immunity provided by Section 230 is the only thing preventing far-right groups and the attorneys general of states like Texas and Mississippi from effectively writing the speech rules for the entire Internet.
Some on the left misguidedly believe that attacking Section 230 is the only way to hold Big Tech accountable for the harm caused by its surveillance capitalist business model and its algorithmic recommendation systems optimized to maximize engagement. But that’s simply not true. Weakening Section 230’s protections would make it harder, not easier, for platforms to remove harmful and hateful content: under pre-Section 230 case law, a platform that chooses to moderate becomes liable for any harmful content it fails to remove. Additionally, by increasing the risk of litigation for small- and medium-sized platforms, altering Section 230 would solidify the monopoly power of the largest companies like Facebook and Google.
Conservatives and Republicans have claimed that Section 230 has been weaponized to “censor” right-wing viewpoints on social media. There is no evidence for this. In fact, studies show that people of color and LGBTQ+ people are among the groups most regularly deplatformed and over-moderated on major tech platforms. In any event, weakening Section 230 protections wouldn’t prevent social media companies from removing posts based on political views, and it wouldn’t incentivize platforms to moderate more thoughtfully, transparently, or responsibly. It would only incentivize them to moderate in whatever manner their lawyers say will avoid lawsuits, even if that means trampling on marginalized people’s ability to express themselves online.
The two cases the Supreme Court has agreed to hear both deal with horrific crimes related to terrorism. One case deals specifically with liability around online recommendation algorithms, like those used by YouTube. We’ve written before about how attempting to regulate algorithmic recommendations by changing Section 230 is a dangerous idea. Most legislative attempts to do this will run smack into the First Amendment, which protects platforms’ ability to make editorial decisions.
The Supreme Court should leave Section 230 alone. So should Congress. Instead, lawmakers should focus their efforts on enacting privacy legislation strong enough to effectively end the surveillance-driven business model of harmful tech giants like Facebook. The best way to address the harms of algorithmic manipulation without making matters worse is to regulate surveillance, not speech. The Biden administration’s FTC should also do everything in its power to crack down on corporate data harvesting and use of personal data to power harmful and discriminatory algorithms.
We can hold Big Tech accountable while protecting free expression and human rights. The Court’s willingness to revisit Section 230 endangers marginalized people and free speech, and it deserves condemnation, especially after the calamitous overturning of Roe.