Who should police content on the Internet?

The beauty, and the danger, of the world wide web is that it is open to everybody. But that openness also introduces new challenges for how to deal with the inevitable bad things that come along with the good.

Last week, this question came back into the foreground with the Charlottesville riots and the far-right sites that helped organize them. Especially in focus has been "The Daily Stormer," one of the most vocal/violent/awful neo-nazi websites online. In recent days, the infrastructure providers that served the Daily Stormer have all dropped it, and it has relocated to a Russian domain. As of this writing, it appears that Anonymous has DDoS'd dailystormer.ru and the site is offline.

One of the companies that initially resisted dropping the Stormer, but ultimately did, was (USV portfolio company) Cloudflare. Cloudflare has taken heat for quite a while over its refusal to drop the Stormer, dating back to this ProPublica post from May. In Cloudflare's response to that article, CEO Matthew Prince wrote the following:

"Cloudflare is more akin to a network than a hosting provider. I would be deeply troubled if my ISP started restricting what kinds of content I can access. As a network, we do not think it's appropriate for Cloudflare to be making those restrictions either.

That isn't to say we support all of the content that passes through Cloudflare's network. There are institutions — law enforcement, legislatures, and courts — that have the political and social legitimacy to determine what content is legal and illegal. We follow the lead of those organizations in the jurisdictions where we operate. But, as more and more of the web sits behind fewer and fewer private companies, we are worried that the political beliefs and biases of those companies will determine what can and can't be online."

This is a tricky line to walk, but it is really, really important to the underpinnings of the web. To understand why, you need to consider the full range of terrible things that happen online every day — from truly bad things like neo-nazi organizing (I write this as someone whose great-grandfather was murdered for being a Jew) and child exploitation, all the way to arguably not-so-bad things like "I don't like what this person wrote on this site and I want it taken down."

Expecting platforms to adjudicate all of this themselves is unsustainable for two reasons: 1) the sheer scale of it, particularly for larger properties handling millions or billions (or even trillions, in the case of Cloudflare) of pageviews, and 2) platforms are almost never in the best position to make a fair determination about whether a given piece of content is legal or illegal.

From the user/customer perspective, if you think about it, you really don't want your ISP, DNS provider, or hosting provider making arbitrary decisions about what speech is acceptable and what's not.

Generally speaking (and I'm not a lawyer), "safe harbor" protections mean that platforms are legally insulated from content that someone else publishes on their platform. If this were not true, then it would not be possible, from a risk perspective, to run any site that hosts the speech or content of others (think Facebook, Dropbox, GoDaddy, etc.). If you had to be 100% sure that each and every piece of content that any user published on your platform did not violate any laws anywhere, you would simply not let anyone publish anything.

Over the years, whenever a new wave of bad activity emerges on the web, there is an inevitable fight about who should be responsible for stopping it. This is exactly what the Stop Online Piracy Act (SOPA) of 2011 was about — it would have made web platforms directly liable for any user-generated content containing copyright violations (rather than the current situation, where platforms must comply with valid takedown notices in order to maintain their immunity).

The really hard thing here, whether we are talking about piracy, or child abuse, or neo-nazis, is crafting legislation that addresses those problems without having broader implications for free speech on internet platforms.

Holding that line is not easy, and it's often unpopular (depending on who's doing the talking). But ultimately, Cloudflare did decide to drop the Daily Stormer from its platform. In Matthew's words:

"This was my decision. Our terms of service reserve the right for us to terminate users of our network at our sole discretion. My rationale for making this decision was simple: the people behind the Daily Stormer are assholes and I'd had enough.

Let me be clear: this was an arbitrary decision. It was different than what I had talked about with our senior team yesterday. I called our legal team and told them exactly what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I am the CEO of a major internet infrastructure company.

Having made that decision, we now need to talk about why it is so dangerous. I'll be posting something on our blog later today. Literally, I woke up in a bad mood and decided someone shouldn't be allowed on the Internet."

This is intentionally provocative, and meant to help everyone understand why it is dangerous to encourage large internet **infrastructure** providers to exercise editorial control. While it may seem obvious that this was the right call in this case, in reality there are millions of other cases every day that are not so clear-cut, and around which we really should be looking to due process to guide decisions.

I'd encourage you to read the follow-up post on the Cloudflare blog discussing why they terminated the Daily Stormer — in it, Matthew details all the kinds of players in the internet infrastructure space, what roles they play, and how they affect free speech online.

In all of this, there is an important distinction between what platforms are **legally required** to preemptively take down, and what they are **within their rights** to remove. One tension in the industry is a hesitation to exercise those rights to remove content, for fear of sliding towards a legal regime where platforms have an affirmative duty to remove content — which is what poses the greatest dangers to free speech and due process.

Another important point, also raised in the Cloudflare post, is the different roles played by different kinds of internet providers. There is a difference between low-level providers like DNS servers, backbone transit providers, etc., and higher-level applications such as social networks, marketplaces, and other, more narrowly focused applications. Broadly speaking, the higher up the stack you go, the more competition there is at that layer, and the more specific your application or community, the more it makes sense to have community guidelines that restrict or direct what kinds of activity can happen on your platform.

Finally, none of this is to say that platforms do not and should not work with law enforcement and other authorities to remove illegal content and bad actors. That is in fact a huge part of what platforms do, every day, and it is critical to the safe operation of the web and of social applications.

But perhaps the big takeaway here is that, as we continue to debate where policing and censorship should take place, we should fall back on the underlying belief that transparency, accountability, and due process (rather than arbitrary decisions by powerful companies or outside groups) are essential components of any solution.
