Perspectives

Q&A: Social Media Regulation and the Perils of Section 230 Reform

Legal expert Daphne Keller argues that any legislative reform of platform liability could open a Pandora’s box.

In the interview below, Freedom House’s Allie Funk asks Daphne Keller—director of the Program on Platform Regulation at Stanford Law School—about free expression online, social media regulation, and the law commonly known as Section 230 of the US Communications Decency Act. While Section 230 has come under increasing political pressure in recent years, it remains the world’s strongest legal protection for free speech online and has long been synonymous with the US-backed model of internet freedom.

Funk: Set the stage for our readers. What drove US lawmakers Chris Cox and Ron Wyden to write Section 230?

Keller: Section 230 was enacted as part of a bigger piece of legislation, the Communications Decency Act, which had much more pro-censorship goals of restricting indecent speech on the internet, particularly legal pornography. Cox and Wyden proposed Section 230 in part to take a less censorious approach to online speech. They wanted to ensure that platforms don’t face liability for content posted by their users, so that platforms aren’t motivated to remove tons of content out of an abundance of caution and silence controversial speech. Cox and Wyden also wanted to reduce the prevalence of porn and other indecent or offensive content on the internet, but they wanted to give individual platforms the leeway to set their own standards for what they would accept.

So Section 230 creates two immunities: One is that the platform is not liable for most categories of speech posted by users. The second is that the platform is not liable if it creates and enforces content policies that remove speech it considers offensive or irrelevant, for example if a platform takes down cat pictures because it is a dog-only site.

It’s been a quarter-century since Section 230 went into effect in 1996. How has it worked in practice? Which categories of speech can platforms still be held liable for?

Section 230 creates immunity for a lot of the kinds of claims that are brought in civil litigation under state law—for example, defamation or content that invades somebody’s privacy. But it does not create immunity for a couple of big categories of potential claims. There is no immunity for federal crimes such as child sexual abuse material (CSAM) or material in support of terrorism. Platforms face the same legal responsibility and risk around this content as anybody else. Another big carve-out is for intellectual property; in particular, copyright is governed by the Digital Millennium Copyright Act, which creates a kind of choreographed notice-and-takedown system. And there are a couple of other carve-outs that are not as big, but probably the most important one to know about is SESTA/FOSTA, a package of legislation that Congress enacted in 2018 and that relates to sex trafficking.

Is there a certain legal threshold that the government has to reach in order to hold a content host liable for federal crimes?

It depends on the law. For CSAM, for example, to hold platforms liable the government would have to prove that they did not take action when they knew federally illegal content was on their sites. The law doesn’t create an obligation for platforms to go out and proactively monitor and police user speech in search of such illegal activity. This is important, because there is a real problem globally with laws that do incentivize platforms to proactively monitor user posts, which can lead them to remove content excessively and also invade users’ privacy.

Most platforms voluntarily do try to proactively monitor for CSAM. They have databases of hashes—basically digital fingerprints of known images or videos—and have tools to automatically filter for and remove the content. And then they are required by law to report this to the National Center for Missing and Exploited Children (NCMEC). I’m confident that most prosecutors wouldn’t consider normal mainstream platforms liable in this fairly typical situation, where the big picture is that they are trying to find illegal content, take it down, and report it. I don’t know of serious claims that major consumer platforms know about and tolerate CSAM on their sites. Instead, what you hear when you talk to people who work in content moderation, which is a fascinating and emotionally taxing job, is that they repeatedly have the experience of notifying NCMEC or law enforcement about specific instances of actual abusers posting this content, and then there is little if any follow-up or criminal prosecution. I think there is a big missing piece on the actual prosecution of these people, independent of whether you also think there are additional things platforms should be doing.
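To make the hash-matching idea concrete, here is a minimal, purely illustrative Python sketch of comparing an uploaded file’s digest against a set of known hashes. The function names, the sample hash value, and the use of SHA-256 are assumptions for illustration only; real deployments typically rely on perceptual-hashing systems (such as PhotoDNA) that can match images despite minor alterations, which an exact cryptographic hash cannot do.

```python
import hashlib

# Purely illustrative stand-in for a database of known-content hashes
# (real systems use hash lists supplied by organizations such as NCMEC).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_content(path: str) -> bool:
    """Return True if the file's digest appears in the known-hash set."""
    return sha256_of_file(path) in KNOWN_HASHES

# Hypothetical usage: flag an upload for removal and reporting on a match.
if matches_known_content("upload.jpg"):
    print("Match found: remove the file and report it.")
```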

We’ve seen several different attempts to reform Section 230 in recent years. Do you think the conversation is going to change under the Biden administration versus what we saw under the Trump administration? What’s the likelihood that actual reform will be implemented?

I think that we are very likely to see additional changes to Section 230. Last year alone, there were at least 20 bills introduced to change the provision. And I think that interest isn’t going away. Very broadly speaking, there is a conflict between what a lot of Democrats want out of Section 230 reform, such as pushing platforms to remove offensive or harmful content like COVID-19 misinformation or hate speech that is protected under the First Amendment, and what a lot of Republicans want, such as stopping what they claim are biased policies that platforms enforce against conservative speech.

But generally, I think Democrats and Republicans could find common ground for reform that tackles really awful content such as CSAM. There may also be bipartisan support for reform that focuses on procedural fairness around content moderation and requires more transparency around terms of service and avenues of appeal for people whose speech has been taken down.

That reminds me of some legal proposals in the European Union. The Digital Services Act (DSA), for example, would introduce transparency mandates for different platforms and would cement certain types of due process rights for users. What are your thoughts on the DSA? Are there any unintended consequences around transparency requirements for content moderation?

I think the DSA will pass in some form, almost definitely. Broadly, I think it is taking the right approach of establishing better processes for content removals. However, I don’t agree with some other big-picture things about it, such as exactly which speech is made illegal under European law. I also think it makes tradeoffs between competition goals and speech-regulation goals that I would not make. For example, it imposes content-moderation obligations on platforms that require a lot of investment to comply with. This would benefit the bigger players that have vast resources they can use for compliance, but would burden smaller or newer companies. There’s this tradeoff between trying to get platforms to do a better job of moderating content on the one hand and enabling competitors to come along and challenge today’s mega incumbents on the other hand. I hope that lawmakers are very conscious about that and don’t ignore one goal while they’re pursuing the other one.

I think privacy and competition laws are stronger tools we could be using. If we had 20 search engines or 20 robust social media sites, then no one of them would have gatekeeper power, and we would have a greater diversity of speech rules. A lot of these problems get alleviated if there are more platforms or if you as part of your privacy rights can say, “Hey, don’t use my data to target me with this kind of content or that kind of content.”

So, do you think Section 230 needs to be reformed?

If I could just change it myself, sure. I would make some little tweaks, such as giving the courts some role in telling platforms “this content is illegal, take it down.” But it is Congress that has the power, and when Congress gets its hands on Section 230 reform, unpredictable and not necessarily good things tend to happen. So, all told, I would rather just leave it alone.

In some cases, I think there is momentum to make changes that I’m not quite sure are actually necessary. A big one has to do with targeted advertising being done in a way that is specifically illegal, for example by excluding people from housing, credit, or employment ads based on their race, gender, age, or sexual orientation. Facebook has been sued a couple of times for this, and they have ended up settling the cases, but they’ve also claimed that Section 230 immunizes them from liability. And nobody knows whether 230 would actually immunize them for this. The problem here isn’t that the ads themselves are illegal, but that the targeting was done in a discriminatory way. And so, I think there very likely shouldn’t be 230 immunity there anyway. But one proposed bill in Congress is trying to take away 230 protections around this situation, and around some other civil rights claims. This is where I would want to just litigate the issue to first determine if it is actually a Section 230 problem before opening the Pandora’s box of introducing legislation to target it. Generally, I think this is one of many things that gets blamed on Section 230 that is probably not the law’s fault.

Since former President Trump had his accounts removed from Twitter and Facebook in January, we’ve seen new draft platform regulations or announcements of planned proposals in Brazil, Mexico, Hungary, India, Poland, and other countries. Do you think that 2021 is going to be the year of regulation, or is this something we’ve always been heading toward?

In my world, it’s been the year of platform regulation every year. But I do think there has been a shift just in the past year or two. While the shift preceded the Trump de-platforming, that event prompted a huge wave of people across the political spectrum to question whether a private company should have the power to silence a democratically elected leader, and why there are no laws around these issues.

I think we’ll see a lot more of what I generically call “must-carry” laws, which compel platforms to carry content they do not want to carry. In the US, it would be very hard to do this, partly because of the platforms’ own First Amendment rights to set editorial policy. They have successfully gone to court to argue that their speech rights under the First Amendment mean the government can’t compel them to carry speech they do not want to carry, and I think they would continue to succeed. But in most other parts of the world, the legal framing is different. First, corporations don’t necessarily have big robust speech rights. Second, even where they do, courts can override those rights for some other societal purpose, including protecting the speech rights of internet users. And third, some other countries have a much stronger concept of the horizontal application of human rights, meaning individuals have rights in relation to the government but also ones that can be enforced against powerful societal actors like private companies.

So, I think we will see much more movement in that direction. What would platforms look like if they had to carry every single nasty thing that’s technically legal? I think they’d be kind of cesspools of bullying, hatred, and barely legal threats. I don’t think they’d be something that most people want to interact with or most advertisers want to spend money to run their ads on.

This interview has been lightly edited for clarity and concision.