What is Section 230, the law that inspired Trump’s military veto threat?

Laveta Brigham

Seemingly everyone in Washington, DC wants to scrap Section 230, a snippet of the 1996 Communications Decency Act originally designed to help websites moderate online porn.

On Dec. 2, US president Donald Trump threatened to veto a normally bipartisan military funding bill unless Congress “completely terminated” Section 230. That stance is a rare point of agreement with his successor, president-elect Joe Biden, who told the New York Times that “Section 230 should be revoked, immediately.” In November, senators from both parties called for changes to the law while grilling the CEOs of Twitter and Facebook in a public hearing.

So what is everyone so riled up about?

The short answer: Public rage over social media companies’ bungling attempts at content moderation is finally boiling over, and politicians of every stripe have jumped at the chance to bash Big Tech and curry favor with their constituents. But a slightly longer answer will take us back to the 1990s to revisit Americans’ anxieties about sex, go to court with the scammy investment firm that inspired The Wolf of Wall Street, and wade through a bit of legalese.

How does Section 230 work?

Section 230 guarantees that websites like Facebook, Twitter, and YouTube can’t be sued in American courts because of the false, filthy, or downright illegal things their users post every day. Without the law’s liability protections, all of these US-based platforms would be exposed to immense legal risk that would make hosting user content extremely expensive.

The bulk of Section 230’s impact comes from a single sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That’s a big deal, because under US law, you can sue the publisher of certain kinds of content, but you can’t sue a distributor. Imagine a book contains defamatory lies about you. You can take the publishing house that printed the book to court, but you can’t sue every bookstore that sold it. Section 230 says that when a user posts illicit content on a website like Facebook, the website falls into the distributor category, like a bookstore.

Other countries generally have stricter liability laws that make it easier to sue web companies. But even if you win a lawsuit against, say, Facebook in a German court, US law makes it impossible to recover any of the company’s American assets unless your lawsuit is compatible with Section 230.

What is the purpose of the Communications Decency Act?

The point of the Communications Decency Act was, in part, to tamp down internet porn. Section 230 was a small piece of the law designed to give internet companies a legal shield that would empower them to moderate objectionable posts from their users without fear of lawsuits. Before the law went into effect, that kind of moderation could land them in court.

In the mid-90s heyday of Stratton Oakmont, the Wolf of Wall Street investment firm, an anonymous user took to an online message board to accuse the firm of fraud. Stratton Oakmont sued Prodigy Services, the company that hosted the message board, for defamation. And in 1995, the New York Supreme Court sided with the brokerage, ruling that because Prodigy employed a team of moderators to govern the message board, it was making editorial decisions about what to publish and should therefore be vulnerable to lawsuits just like any other publisher.

That logic created a perverse incentive: From a legal standpoint, it was less risky for a website to let users post any nasty thing they wanted, because once you started moderating content you became a publisher and could get sued. The following year, congressmen Ron Wyden and Chris Cox championed Section 230 to solve the web’s moderation problem. The idea, Wyden later said, was to give internet companies “a sword and shield”: The defense against lawsuits would free websites up to go on offense and aggressively moderate objectionable content.

At the time, legislators were mostly thinking about porn. A former IBM lobbyist who worked to get Section 230 passed said most members of Congress didn’t understand computers, but liked the idea of being able to say “I followed IBM’s advice to stop pornography on the internet.” But over the years, Section 230 has protected websites’ ability to host and moderate all kinds of content, from unflattering restaurant reviews to election disinformation to terrorist propaganda.

What are we going to do about Section 230?

Lawmakers from both US political parties are eager to curtail Section 230 protections, but for different reasons. Republicans want to repeal the liability shield to force websites to end what they claim (without evidence) is a campaign to censor conservative views, and Democrats want to roll back Section 230 to hold websites responsible for the disinformation that spreads on their platforms.

The biggest blow to the law so far was the bipartisan 2018 Fight Online Sex Trafficking Act (FOSTA), which says Section 230 protections don’t apply when a website gets sued for hosting content related to sex trafficking. Meanwhile, court cases have slowly but steadily exposed chinks in the 230 armor, starting with a 2008 ruling which found that Roommates.com was not exempt from a housing discrimination lawsuit after it required users to fill out a questionnaire asking their race and what race they’d like their roommate to be.

This year, members of Congress have proposed five different bills to roll 230 protections back even further—which collectively adopt two basic approaches, as Casey Newton identified in The Verge.

First, there’s the “carveout” approach, in which legislators make broad categories of content exempt from liability protections; FOSTA’s exception for sex trafficking content is the blueprint for this. Then there’s the “bargaining chip” approach, which makes 230 protections contingent on complying with a government demand. For example, under the proposed Ending Support for Internet Censorship Act, companies would only be protected from lawsuits if they could convince the Federal Trade Commission they’re politically neutral.

Curiously, some Big Tech companies have come around to support efforts to weaken Section 230. Facebook and Google, for example, were early supporters of the bill that eventually became FOSTA, and Facebook CEO Mark Zuckerberg has publicly called for reforming the law.

These small concessions could head off more onerous regulation down the road. But the more cynical read is that rolling back 230 would give the biggest Big Tech companies an advantage over smaller competitors who lack the resources to navigate the legal morass that would inevitably follow.
