A public policy blog from AEI
Section 230 of the 1996 Communications Decency Act states “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” For more than 20 years, since the birth of the internet age as we know it, Section 230 has provided websites with immunity from liability for what their users post. Today, Section 230 is under fire from politicians on the left and the right who think those 26 words are insufficient in an age of Big Tech where we live much of our lives online. On this episode, cybersecurity professor Jeff Kosseff discusses his new book “The Twenty-Six Words that Created the Internet” about the past, present, and future of Section 230.
Jeff Kosseff is an assistant professor of cybersecurity law at the United States Naval Academy. Before becoming a lawyer, he was a technology and political journalist for The Oregonian and was a finalist for the Pulitzer Prize in national reporting. What follows is a lightly edited transcript of our conversation. You can download the episode by clicking the link above, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.
The Electronic Frontier Foundation describes the importance of Section 230 this way: “This legal and policy framework has allowed for YouTube and Vimeo users to upload their own videos, Amazon and Yelp to offer countless user reviews, craigslist to host classified ads, and Facebook and Twitter to offer social networking to hundreds of millions of Internet users. Given the sheer size of user-generated websites (for example, Facebook alone has more than 1 billion users, and YouTube users upload 100 hours of video every minute), it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site. … In short, Section 230 is perhaps the most influential law to protect the kind of innovation that has allowed the internet to thrive since 1996.” So, what would the internet look like without Section 230? What would change?
It’s hard to know for sure, but I think the internet would look a lot more like a newspaper or television station. It would certainly be more one-way. People would be receiving information, but they might not be sharing information with one another because websites and platforms would face tremendous liability without Section 230.
So, there would definitely be no Facebook or Twitter. News organizations like CNN could exist and post things, and you could access things like Netflix over the internet, but basically it seems like all the social media would be gone?
At least in its current form. We don’t know for sure, because we’ve had Section 230 since 1996, so we don’t quite know how the courts would interpret the existing First Amendment protections, which are much weaker. Social media would be much more limited, and not just social media but also other sites that rely on user content, such as Yelp. Under the First Amendment, the most protection a website could receive is immunity until it has knowledge of content that might be defamatory. So, let’s say you are a business. You are not happy with a one-star Yelp review, so you send a notice to Yelp saying this is defamatory. Without Section 230, Yelp would then be on notice and face the choice of either having to take down the review immediately or defend the accuracy of the review, which it wouldn’t want to do on such a large scale. So, Yelp would basically be filled with five-star reviews, and Yelp then wouldn’t be as useful for consumers anymore.
You could also have a situation where it’s heavily moderated. Might some companies just go the other direction and say “it’s a free-for-all, we’re just a neutral forum that people can post things on?” Is that a possibility?
It is. The whole reason for Section 230 is that there was this rule saying that distributors of content such as bookstores and news stands are only liable for third-party content if they knew or should have known that it was illegal. In the early 90s, we had these services like CompuServe and Prodigy as well as other companies. CompuServe’s business model was that they were not going to moderate any third-party content. Prodigy, on the other hand, wanted to distinguish itself and be a family friendly service, so it had moderators and content policies. Both CompuServe and Prodigy were sued for defamation based on third-party content. CompuServe’s case was dismissed because the court said, in effect, “CompuServe, you’re like a bookstore — you had no reason to know that it was illegal or defamatory — so we are going to dismiss the case.” But Prodigy was held to be just like a newspaper as a publisher of the content, so Prodigy did not receive this protection. So as of this ruling in 1995, the First Amendment incentives were to not do any moderation, because the online services feared that if they started to moderate, then they would face tremendous liability. And that’s really what prompted Section 230 to be passed.
What can we say with some sort of authority about the legislative intent of Section 230?
Based on the limited legislative history of the floor debates, my interviews with both Chris Cox and Ron Wyden, who were in the US House at the time, and people in industry and civil liberties groups who worked on the bill, there were really two goals. The first was to encourage this moderation, to say that we’d rather have companies and their users determine the rules of the road instead of the government. The second was to promote innovation, and that’s where you had Chris Cox, a moderate Republican, and Ron Wyden, a liberal, somewhat libertarian Democrat from Portland, who both wanted to promote the growth of the tech industry. Both of them had tech companies in their districts and recognized the potential for jobs and economic prosperity, so they didn’t want to over-regulate it. It was really these two goals that they were looking at when they wrote and proposed Section 230.
So it has been important. Is it too far to say that it was a key element in creating many tech companies and the internet writ large today?
Absolutely. The current structure of the internet and the current business model of many large and even small platforms that rely on user-generated content would look very different without Section 230. I can’t think of any other single law that has had more impact on the internet as we know it today.
Yet there are a lot of critics that think it is failing, or it is insufficient, or was only okay for a previous time when these companies were smaller than they are today. Some people think companies aren’t moderating enough content, and others think it’s too much moderation. As the historian Niall Ferguson said in the Wall Street Journal:
“Dominance of online advertising by Alphabet and Facebook, coupled with immunity from civil liability under legislation dating back to the 1990s, have created an extraordinary state of affairs. The biggest content publishers in history are regulated as if they are mere technology startups; they are a new hierarchy extracting rent from the network.”
From his other writings, I know that Ferguson would like to get rid of Section 230. He thinks that these companies are now big, they’re not operating as neutral platforms, and they get Section 230 immunity while moderating in a biased way. That’s his perspective. Then on the left, people argue that these companies aren’t doing enough: they’re not stopping Russian election interference, they’re allowing white supremacy to rise, and they’re allowing deep fakes of Democratic politicians to spread. What do you make of each side?
What’s overarching this debate is that people are frustrated with the largest platforms. I understand that, and I agree with a lot of that criticism. How much of that criticism is linked to Section 230? That’s a different question.
I’ll start with what is linked to Section 230. It is true that there are some cases where both large and small platforms have not thoughtfully moderated and have not been transparent about what they’re doing for moderation. There are several cases that I wrote about in the book — really hard cases — where people have been wronged and the platform has not had entirely clean hands, and yet Section 230, by its terms and how it’s interpreted, protects them. I think that’s the strongest criticism of Section 230. What’s not a strong criticism of Section 230 is the claim that the platforms are not neutral. I’ve spent two years looking at the history of Section 230, and platforms were never supposed to be neutral. The whole point of Section 230 is that Congress didn’t want them to have a hands-off approach.
They wanted companies to moderate harmful content, but did they anticipate a company getting big, being influential, and then suppressing one political view or another? Was the objectionable content just thought to be illegal activity, or did they assume that there might be political strife?
Well, Section 230 has a second provision, subsection (c)(2), which states that, in addition to this immunity, you’re also not going to be held liable for good-faith efforts to restrict or block access to lewd, lascivious, or other objectionable content. Now, true, at the time the big concern was children’s access to pornography, so I’m not aware of any discussion about concerns over political suppression. In fact, one of the purposes of Section 230 is to foster political discourse. The controversies that I’ve seen have been when someone violates a platform’s policy against “hate speech” with a comment that someone else might classify as legitimate political discourse. But other than one knitting site, I’m not aware of any platforms limiting political viewpoints.
Still, critics say that’s what they are de facto doing.
I think this just shows how difficult moderation is, because if they took an entirely hands-off approach, I think there would even be more criticism from people saying “why are you leaving that online?” And there are some smaller platforms that do leave everything up. Do we want the internet to look like that? I have a five-year-old daughter. If that’s what the internet looks like, she’s going to be using books and pens and papers for the rest of her life.
I want to go through some of the myths and clarify what we have been talking about, because I often see arguments claiming that Section 230 granted special immunity to internet platforms but only on the condition that they are politically neutral. Supposedly, that was the grand bargain. From what you are saying it does not sound like that was the grand bargain.
All I can say is I’ve spoken with both of them — now-Senator Wyden and former Congressman Cox, who wrote the bill — and they say that’s not true. Chris Cox recently wrote a Wall Street Journal op-ed on the topic. I also spoke with the lobbyists who were on the civil liberties side and the tech companies’ side at the time, who worked with Cox and Wyden, and they also told me that’s not true. What critics might be trying to say is that Section 230 was supposed to foster political discourse, which is true.
What about the special immunity part of that critique? Is there something special that these companies were given that other companies were not given? In the book you talk about internet exceptionalism, too. Was the immunity special?
Oh, absolutely. This is something most industries would love to have. Obviously, I think the unique part about this — and this is where I’m not an absolutist on Section 230, and I’m open to some thoughtful changes — is that of course it’s for the companies. They couldn’t exist in their current forms without it. That said, it’s also a benefit for society in the sense that it provides these extraordinary free speech protections. This can have negative effects too, depending on what that speech is, but we have to look at Section 230 much more holistically and not just say, “well, it’s a benefit for the companies.” Yeah, that’s true.
And it’s not just a benefit for Google and Facebook either, but for any company that’s going to really operate online, from traditional media to social media. Right?
It is. You see many mid-size platforms that really benefit from Section 230. And from looking at how these mid-size platforms handle user content, I’ve found they’re actually among the most thoughtful platforms in terms of how they develop their policies and how they listen to their users, which is what Section 230 is designed to encourage.
Whenever I tweet about Section 230, someone will say “look, these companies have a big choice to make. They can either be a neutral platform or a publisher. That is the binary choice, and they must decide.” Is that pulled out of the law in any way?
No, I don’t know what they’re talking about. I always hesitate to even respond to this, because I’m not quite sure what their parameters are for publisher or platform. Are they saying you don’t do any moderation at all? A free-for-all? I seriously would not want to be on the internet at all if that were the case.
Last week, The Verge ran an article by Casey Newton about the life of social media moderators and part of it was about the content that they get at this rapid pace, the worst elements of humanity possible. So, I don’t know where these people are drawing the line between platforms and publishers. All I can say is that Congress, at least at the time, did not intend that distinction. Whether they want to make that distinction in the future is going to be a policy choice for Congress.
You said you’re not an absolutist, and you’re not someone who says that these 26 words are perfect. So, what are these companies doing wrong?
As an example, last year Congress amended Section 230 to create an exemption for sex trafficking because there were a number of cases where minors were being trafficked and sold for sex on a website called Backpage. I testified before the House Judiciary Committee in favor of a limited exception based on intentionally facilitating sex trafficking. The end result was not nearly as effective as I thought it would be, because it swept in more behavior and caused a chilling effect. Nonetheless, I’m open to those sorts of changes.
In terms of what companies are doing wrong, the biggest problem is a lack of transparency among the larger platforms in particular. As part of my job, I do a lot of national security work dealing with the intelligence agencies, and the large platforms operate at the same level of secrecy that the intelligence agencies operate at. And they shouldn’t do that.
Why do they do that?
I think there’s some degree of arrogance: they operate like private startups, even when they’re massive companies with a greater market cap than the automakers. I can’t stress enough that for Section 230 to survive, the platforms need to be much better at explaining precisely how they do things. They all post policies, but you don’t really know how they’re implementing the policies and what goes on in the decision-making process. That said, they’ve gotten better. The larger ones have started to recognize this in the past year or two, but it might be too late.
There are also some bad-acting platforms, ranging from lazy to malicious. They are the minority, but they make it very difficult to justify having Section 230. They really push the limits. For example, in the Second Circuit, there was a man who had a breakup, and his ex decided to get revenge on him by going onto Grindr and posting his pictures and his work and home addresses, basically inviting strangers to come to his home and work demanding sex. Hundreds of people came to his home and work very aggressively, and he repeatedly tried to get Grindr to stop this. There were a few different apps involved, and the other apps did stop it, but based on the court record, Grindr did not do nearly enough. He could have been killed. He sued, and the Second Circuit affirmed the dismissal under Section 230, saying this was user content. Those kinds of cases make it really hard to defend Section 230.
Indeed, in the book you share some of the most interesting court cases I’ve read showing how this law has evolved throughout its history. If you want to find compelling victims, there are many of them who have suffered harms and yet this immunity has withstood those difficult edge cases. So you mention transparency in the book. Does there also need to be an appeal mechanism so that these edicts aren’t just handed down and you are gone?
I think so. That’s starting to be developed, and it really varies by platform. There are a lot of proposals out there to create different levels of appeals, both for the policies as well as the actual moderation decisions. The difficulty is that there’s just so much content out there, so making appeals effective is difficult. But absolutely there has to be more than one level of review.
I do wonder whether, if the end result is still that certain content or people get booted off, the criticism will just become “great, you’re transparent, but we lost our appeal, so you are still a biased platform.”
Yes, exactly. I mean I think that you’re never going to make moderation decisions that satisfy everyone.
Senator Hawley, for instance, has proposed that for companies to get 230 immunity, the Federal Trade Commission would have to certify that the company has not engaged in “politically biased moderation.” What do you think of that?
First, I’ll give this caveat: I’m only speaking on my own behalf, not for the Naval Academy or Department of Defense. I understand where the concern for potential abuses comes from, because we have platforms with billions of users, and if those platforms decide “we don’t like this person” and make the unilateral decision to block them, I personally think that is a scary scenario. But I’m not aware of anything like that happening for pure political affiliation. Platforms have exercised judgment about hate speech and other sorts of content that they have determined they don’t want on their services.
I get the concerns, but I have not seen any evidence showing that this is a systemic problem. My concern about any proposal to have the FTC judge a platform’s political neutrality is that I’m just not sure how it would make those determinations. You could effectively end up with two FTC commissioners blocking a website from receiving Section 230 immunity because they don’t agree that it is politically neutral. I don’t know why the FTC would be enforcing this, and I worry whenever political appointees are making these sorts of judgments about platforms.
Suppose Facebook’s Mark Zuckerberg says, “I don’t like the direction the country is going. Facebook is going to become a woke, #resistance platform. We’re not allowing any Trump material on Facebook.” What should happen to Facebook? Anything? Or is that Facebook’s decision?
There are a few different outcomes. One would be the business outcome — that would be horrific, because you’d have a large component of the country being excluded. Yes, theoretically Facebook could make that decision, but I think it would destroy their business. I think a lot of valid concerns are coming up, but many of them are consumer protection and antitrust issues about the company in general, not just user content issues. But what is the mechanism to address it? Is it Section 230? Chris Cox, for example, proposed having the platforms publicly post their moderation policies, then face regulatory action if they don’t comply with their own moderation policies.
Do you think that demonetizing a YouTube channel is an acceptable form of content moderation? It’s sort of an in-between response — is that a good idea?
It really depends on what the actual content at issue is, but it should be on the table. There needs to be more user input into these decisions, because a lot of times it’s a black box on both the policy and the implementation sides, and I don’t think we can have it be like that anymore.
Have you ever written a revised 230 — is there something you would change or amend?
Yes, there are a few different ways to go about it, and I think everyone would be angry at me for proposing some of these things. And these are geeky, technical changes. But one of the issues I talked about in the book is that a handful of platforms end up not getting Section 230 immunity because they are found to have participated in the creation of the content. Remember, 230 only covers third-party created content. The problem is that in Section 230 cases, the issue is often decided at a very early stage before discovery, so the plaintiff doesn’t have the opportunity to gather the evidence that the platform contributed to the content. So what I’m open to seeing is a thoughtful way where, once the judge receives sufficient indication that the platform may have contributed to the content, the judge can allow discovery before ruling on the Section 230 motion.
Also, in Section 230 there has always been an exception for federal criminal law enforcement, but not state criminal laws. I would be open to an exemption which says if there’s a state criminal law that falls within the contours of a federal criminal law, this could be enforced by the state’s AG. But we would have to look at how exactly we would tailor that exception.
Ultimately, do you think there will be material changes in this law? This is a great moment to have this book out, and I imagine people from Capitol Hill are talking to you — do you think change is coming?
I don’t know. I used to be a journalist in DC, and I was terrible at predicting the outcome of political issues, because usually laws end up getting added into an omnibus appropriations bill right before Christmas break. It’s hard to tell, and entropy is a very powerful force in Washington, DC. On one hand, you have Nancy Pelosi and Ted Cruz agreeing on something, which doesn’t happen all that often. They both criticized Section 230, but they’re criticizing it for very different reasons, so I doubt that repealing Section 230 would achieve both of the ends they are looking for. I will say, though, there’s never been a time when Section 230 has been under the microscope more than it is right now.
© 2019 American Enterprise Institute