Section 230 Is Broken: Big Tech Gets To Behave Like A Publisher While Enjoying The Immunity Of A Common Carrier

Social media is one of the greatest and most terrible forces in our society.

It has enabled mass communication across the planet on a level and scale that would’ve been unimaginable mere generations ago. It allows people to connect with distant friends and family, meet and interact with human beings on the other side of the planet in a matter of seconds, and has circumvented traditional media gatekeepers to allow every citizen to speak their mind to an audience of millions.

But along with the blessings it offers come a host of horrors: it’s almost certainly responsible for an alarming spike in rates of self-harm, depression and suicide in adolescent girls, is a major vector for spread of the social contagion of gender theory, and its dark recesses are home to a network of child predators that have resisted all efforts to expunge them.

Every one of these problems is unbelievably bad, and we haven’t even touched on issues like cancel culture, the incentivization of outrage, the Orwellian invasions of privacy or the fact that the most popular app in the world is essentially Chinese spyware.

The most damning, the most grotesque perhaps of all the warts on this wondrous technology is the contempt of its creators for its cardinal virtue: the free flow of ideas and information.

Recent revelations have confirmed what many of us have known for years — social media giants have systematically squashed dissenting voices to further an ideological agenda, even in the absence of violations of their own stated rules. Worse, they’ve done so several times at the behest of high-ranking government officials in a gross violation of First Amendment principles.

Instead of ushering in an era of open expression and inquiry, and providing a check on power, most social media companies seem bent on shutting us down and propping the political establishment up.

This isn’t entirely unprecedented — most of the legacy media functions as an unpaid campaign arm of the Democratic Party — but it is more insidious. Social media platforms are the modern public square… for better or worse. They are where people congregate to discuss the issues of the day and elections are won and lost on them.

They’re also meant to be platforms — channels of communication, open to all, and not publishers who control what is said by whom. The New York Times might refuse to run your op-ed, but the phone company doesn’t cut your service because it doesn’t like who you’re talking to or what you have to say. But, as of now, social media gets to act like the Times while being legally treated as a common carrier.

The legislation that makes this so is Section 230 of the Communications Decency Act.

Section 230 is a brief yet vitally important law that acts as the foundation of our entire internet infrastructure. At its core, in subsection (c)(1), it says,

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

What does this mean in practical terms?

Suppose you’re an intrepid internet enthusiast in a world without Section 230, and you want to set up a forum where you and other like-minded people can gather to discuss why only the Original Star Wars movies are any good and why the Prequel and Sequel Trilogies qualify as crimes against humanity. 

While surveying your shiny new site, you notice that some moron has started a thread about how the Holocaust didn’t happen (but should’ve).

Not only is this untrue and grossly offensive to most people, but it also has nothing to do with the subject of the forum. Do you decide to delete the thread and suspend or even ban the offending user?

Congratulations! You are now a publisher, who is legally liable for every word on your website.

Content moderation is a full-time, soul-sucking job that can expose you to the darkest horrors found in the human heart, and paying someone to sift through every piece of user content submitted to your website can be expensive, though not quite as expensive as lawsuits or criminal prosecution — if some sicko uploads child pornography to an obscure corner of your website and you don’t notice it, that’s on you.

Managing the potential legal headaches can be a challenge even in small communities with a few hundred uploads per day. Facebook has almost 3 billion monthly users and receives 2.5 billion pieces of content per day. Even with a full-time, dedicated staff and algorithmic assistance, censoring every illegal or objectionable piece of content is a Sisyphean task — if you program a bot to auto-remove certain content, bad actors inevitably find workarounds.

Of course, you could remove nothing and wash your hands of the matter. You keep the servers running, but the users can do anything they want; even if it’s illegal, it’s not your problem.

Alternatively, you could restrict user uploads severely and force every single post through a strict screening process, but those restrictions mean that, for all intents and purposes, you ARE a publisher and are no longer running a user-dominated site. Social media as we know it could not exist under this framework.

Legislators realized in the 1990s that the internet was a different media ecosystem than what came before and adjusted the rules appropriately.

The net may be a public square, but even with an absolutist view of free speech, some things are not allowed in the public square. Holocaust denial is still legally protected speech, but fornicating in public, recruiting militants for a terrorist attack, inciting a mob to violence, assaulting someone or committing slander and libel are not.

So, if the custodians of this new public square made a good-faith effort to remove as much illegal or objectionable content as they could, the law would not penalize them for the bits of content that made it past their filters.

To be specific, subsection (c)(2) of Section 230, which governs civil liability, states:

“No provider or user of an interactive computer service shall be held liable on account of—

any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”

This is, for the most part, fine. We all want as little child abuse and as few terror plots as possible, but we all want access to a more or less open net.

But the devil is in the details. What exactly constitutes ‘otherwise objectionable’?

On a Star Wars forum, talking about Star Trek may be considered objectionable, or vice versa. On a sci-fi forum, both may be fine, but overly long political screeds may be out.

What about a general-purpose platform, like Facebook or Twitter? Videos of burning the flag, or promoting the supremacy of sharia law, or making any kind of statement on the issues of sex and gender will undoubtedly be objectionable to a great many people, even though all of these are unambiguously examples of constitutionally protected speech that could not be censored in a true public square.

But constitutional protections don’t matter, and can’t matter if subject-specific forums are to exist, because ‘off-topic’ and ‘illegal’ are radically different categories.

So, social media giants get to have their cake and eat it too. They are legally empowered to censor any user at any time for any reason, whether or not it’s in line with their published rules, because it’s their website. If you don’t like it, go pound sand.

Conversely, if anyone ever confronts them about all the illegal, horrific content that makes its way onto their site, they can rightly say “We’re doing the best we can to remove it, but we’re an open platform and can’t control everything our users post.”

Yeah, well, that hasn’t stopped them from trying.

As a consequence, a disproportionately progressive tech sector can and does stifle Right-leaning voices in any way it can — the artificial restriction of the Overton Window has gotten so bad that these platforms can even ban a sitting president of the United States for posts that, by their own estimate, did not violate their rules.

Our republic is predicated on the open exchange of ideas — our system of government cannot work if citizens are unable to assemble, air their grievances and share information. When the views of tens of millions of citizens are systematically suppressed to enable a disconnected political elite to manufacture the illusion of consensus, people lose faith in the system and at some point, it is no longer able to function.

Congress created Section 230 as a legal framework to make online interactions possible, but why should we continue to be complicit in the conditions of our own destruction?

Section 230 gives online media companies legal cover to censor — that’s what it was designed to do, but is that really a desirable outcome?

The statute could be modified in various ways, but every approach has advantages and drawbacks; it’s a rare thing to find a clean solution to a complex problem.

This law is the keystone of our online infrastructure, which is, on balance, a profound social good.

But something has to change because the current situation is unacceptable.

The views expressed in this piece are those of the author and do not necessarily represent those of The Daily Wire. 
