FACEBOOK HAS A RIGHT TO BLOCK ‘HATE SPEECH’—BUT HERE’S WHY IT SHOULDN’T – Part Two

By CTL
February 19, 2019

By Brian Amerige

“In this fascinating two-part series, a real Facebook insider gives his view on censorship and Facebook.”


“Unassailable” truths turn brittle with age, leaving them open to popular suspicion. To maintain the strength of our values, we need to watch them sustain the weight of evidence, argument and refutation. Such a free exchange of ideas will not only create the conditions necessary for progress and individual understanding, but also cultivate the resilience that much of modern culture so sorely lacks.

But let’s now come down to ground level, and focus on how Facebook’s policies actually work.

When a post is reported as offensive on Facebook (or is flagged by Facebook’s automated systems), it goes into a queue of content requiring human moderation. That queue is processed by a team of about 8,000 (soon to be 15,000) contractors. These workers have little to no relevant experience or education, and are often staffed out of call centers around the world. Their primary training on Facebook’s Community Standards consists of roughly 1,400 pages of rules spread across dozens of PowerPoint presentations and Excel spreadsheets. Many of these workers use Google Translate to make sense of these rules. And once trained, they typically have eight to ten seconds to make a decision on each post. Clearly, they are not expected to have a deep understanding of the philosophical rationale behind Facebook’s policies.
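For readers who think in code, the workflow amounts to roughly the sketch below. To be clear, this is a purely illustrative mock-up in Python; the names, types, and queue structure are invented for this article and have nothing to do with Facebook’s actual systems.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class ReportedPost:
    post_id: int
    text: str
    report_reason: str  # e.g. "hate speech", "harassment", "firearm sale"

# Every user report or automated flag lands in a review queue.
review_queue = deque()

def flag(post: ReportedPost) -> None:
    """A user report or classifier flag enqueues the post for human review."""
    review_queue.append(post)

def moderate_next(decide) -> str:
    """A contractor pulls the next post and returns "keep" or "remove".

    Here `decide` stands in for a human judgment call made in roughly eight
    to ten seconds against some 1,400 pages of rules, which is exactly the
    step that produces the inconsistency described next.
    """
    post = review_queue.popleft()
    return decide(post)
```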

As a result, they often make wrong decisions. And that means the experience of having content moderated on a day-to-day basis will be inconsistent for users. This is why your own experience with content moderation not only probably feels chaotic, but is (in fact) barely better than random. It’s not just you. This is true for everyone.

Inevitably, some of the moderation decisions will affect prominent users, or frustrate a critical mass of ordinary users to the point that they seek media attention. When this happens, the case gets escalated inside Facebook, and a more senior employee reviews the case to consider reversing the moderation decision. Sometimes, the rules are ignored to insulate Facebook from “PR risk.” Other times, the rules are applied more stringently when governments likely to fine or regulate Facebook might get involved. Given how inconsistent and slapdash the initial moderation decisions are, it’s no surprise that reversals are frequent. Week after week, despite additional training, I’ve watched content moderators take down posts that simply contained photos of guns—even though the policy only prohibits firearm sales. It’s hard to overstate how sloppy this whole process is.

There is no path for something like this to improve. Many at Facebook, with admirable Silicon Valley ambition, think they can iterate their way out of this problem. This is the fundamental impasse I came to with Facebook’s leadership: They think they’ll be able to clarify the policies sufficiently to enforce them consistently, or use artificial intelligence (AI) to eliminate human variance. Both of these approaches are hopeless.

Iteration works when you’ve got a solid foundation to build on and optimize. But the Facebook hate speech policy has no such solid foundation because “hate speech” is not a valid concept in the first place. It lacks a principled definition—necessarily, because “hateful” speech isn’t distinguishable from subjectively offensive speech—and no amount of iteration or willpower will change that.

Consequently, hate speech enforcement doesn’t have a human variance problem that AI can solve. Machine learning (the relevant form of AI) works when the data is clear, consistent, and doesn’t require human discretion or context. For example, a machine-learning algorithm could “learn” to recognize a human face by reference to millions of other correctly identified human-face images. But the hate speech policy and Facebook’s enforcement of it are anything but clear and consistent, and everything about it requires human discretion and context.
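To make that concrete, here is a minimal, hypothetical sketch in Python using scikit-learn, with invented example data. A classifier can learn a clear-cut concept from consistently labeled examples; but when the human raters disagree about what the label even means, as they inevitably do with “hate speech,” the training data encodes that disagreement and the model can only reproduce it.

```python
# Illustrative sketch only: the data, labels, and model choice are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A clear-cut concept (say, "offers a firearm for sale") can be labeled
# consistently, so a simple text classifier has something stable to learn.
posts = [
    "selling my rifle this weekend, message me",
    "great day at the shooting range with friends",
]
labels = [1, 0]  # 1 = prohibited sale, 0 = allowed

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# "Hate speech" has no such stable ground truth: human raters disagree about
# the label itself, so any training set encodes that disagreement, and the
# model reproduces the inconsistency rather than resolving it.
```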

Case in point: When Facebook began the internal task of deciding whether to follow Apple’s lead in banning Alex Jones, even that one limited task required a team of (human) employees scouring months of Jones’ historical Facebook posts to find borderline content that might be used to justify a ban. In practice, the decision was made for political reasons, and the exercise was largely redundant. AI has no role in this sort of process.

No one likes hateful speech, and that certainly includes me. I don’t want you, your friends, or your loved ones to be attacked in any way. And I have a great deal of sympathy for anyone who does get attacked—especially for their immutable (meaning unimportant, as far as I’m concerned) characteristics. Such attacks are morally repugnant. I suspect we all agree on that.

But given all of the above, I think we’re losing the forest for the trees on this issue. “Hate speech” policies may be dangerous and impractical, but that’s not true of anti-harassment policies, which can be defined clearly and applied consistently. The same is true of laws that prohibit threats, intimidation and incitement to imminent violence. Indeed, most forms of interpersonal abuse that people expect to be covered by hate speech policies—i.e., individual, targeted attacks—are already covered by anti-harassment policies and existing laws.

So the real question is: Does it still make sense to pursue hate speech policies at all? I think the answer is a resounding “no.” Platforms would be better served by scrapping these policies altogether. But since all signs point to platforms doubling down on existing policies, what’s a user to do?

First, it’s important to recognize that much of the content that violates Facebook’s content policy never gets taken down. I’d be surprised if moral criticism of religious groups, for example, resulted in enforcement by moderators today, despite being (as I noted above) technically prohibited by Facebook’s policy. This is a short-lived point, because Facebook is actively working on closing this gap, but in the meantime, I’d encourage you to not let the policies get in your way. Say what you think is right and true, and let the platforms deal with it. One great aspect of these platforms being private (despite some clamoring for them to be considered “public squares”) is that the worst they can do is kick you off. They can’t stop you from using an alternate platform, starting an offline movement, or complaining loudly to the press. And, most importantly, they generally can’t throw you in jail—at least not in the United States.

Second, we should be mindful of the full context—that social media can be both powerfully good and powerfully bad for our lives—when deciding how to use it. For most of us, it’s both good and bad. The truth is, social media is a new phenomenon and frankly no one—including me and my former colleagues at Facebook—has figured out how to perfect it. Facebook should acknowledge this and remind everyone to be mindful about how they use the platform.

That said, you shouldn’t wait for Facebook to figure out how to properly contextualize everything you see. You can and should take on that responsibility yourself. Specifically, you should recognize that what you find immediately engaging isn’t the same thing as what’s true, let alone what’s moral, kind, or just. Simply acknowledging this goes a long way toward correctly framing an intellectually and emotionally healthy strategy for using social media. The content you are drawn to—and that Facebook’s ranking promotes—may be immoral, unkind or wrong—or not. This kind of vigilant awareness builds resilience and thoughtfulness, rather than dependence on a potentially Orwellian institution to insulate us from thinking in the first place.

Whether that helps or not, we should recognize that none of us are entitled to have Facebook (or any other social media service) work these issues out to our satisfaction. Like Twitter and YouTube, Facebook is a private company that we interact with on a wholly voluntary basis—which should mean “to mutual benefit.” As customers, we should give them feedback when we think they’re screwing up. But they have a moral and legal right to ignore that feedback. And we have a right to leave, to find or build alternate platforms, or to decide that this whole social media thing isn’t worth it.

The fact that it would be hard to live without these platforms—which have been around for barely more than a decade—shows how enormously beneficial they’ve become to our lives and the way we live. But the fact that something is beneficial and important does not entitle us to possess it. Nor do such benefits entitle us to demand that governments forcibly impose our will upon those who own and operate such services. Facebook could close up shop tomorrow, and that’d be that.

By all means, Facebook deserves much of the criticism it gets. But don’t forget: we’re asking them to improve. It’s a request, not a demand. So let’s keep the sense of entitlement in check.

Governments, likewise, should respect the fact that these are private companies and that their platforms are their property. Governments have no moral or legal right to tell them how to operate as long as they aren’t violating our rights—and they aren’t. Per the above, regardless of how much we benefit from these platforms or how important we might conclude they’ve become, we do not have a right to have access to them, or have them operate the way we’d like. So as far as the government ought to be concerned, there are no rights violations happening here, and that’s that.

Many argue that what Facebook and other platforms are doing amounts to “censorship.” I disagree. It comes down to the fundamental difference between a private platform refusing to carry your ideas on their property, and a government prohibiting you from speaking your ideas, anywhere, with the threat of prosecution. These are categorically different. The former is distasteful, unwise, and yes, perhaps even a tragic loss of opportunity; the latter infringes on our right to free speech. What’s more, a system of government oversight wouldn’t work, anyway: The entire issue with speech policies is that having anyone decide for you what speech is acceptable is a dangerous idea. Asking a government to do this rather than Facebook is trading a bad idea for a truly Orwellian idea. Such a move would be a far more serious threat to free speech than anything we’ve seen in the United States to date.

Unfortunately, executives at Facebook and Twitter have both been very clear that they think regulation is “inevitable.” They’ve even offered to help draft the rules. But such statements don’t confer upon the government a moral right to regulate these platforms. Whether a company or a person invites a violation of their rights is immaterial to the legitimacy (morally and legally) of such a rights violation. Rights of this type cannot be forfeited.

Moreover, the fact that these huge platforms are open to regulation shouldn’t come as a surprise. Facebook and Twitter are market incumbents, and further regulation will only serve to cement that status. Imposing government-mandated standards would weaken or prohibit competition, effectively making them monopolies, in the legitimate sense, for the first time. Unlike potential new platforms, Facebook and Twitter have the capital and staff to handle onerous, complicated, and expensive new regulations. They can hire thousands of people to review content, and already have top-flight legal teams to handle the challenge of compliance. The best thing governments can do here is nothing. If this is a serious enough issue—and I think it is—competition will emerge if it’s able to do so.

We are the first human beings to witness the creation and growth of a platform that has more users than any country on the planet has people. And with that comes both triumph and failure at mind-bending scale. I’ve had the privilege of witnessing much of this from the inside at Facebook, and the biggest lesson I learned is this: When incredible circumstances create nuanced problems, that is precisely when we need principled thinking the most—not hot-takes, not pragmatic, range-of-the-moment action. Principles help us think and act consistently and correctly when dealing with complex situations beyond the scope of our typical intuitive experience.

That means that platforms, users, and governments need to go back to their fundamental principles, and ask: What is a platform’s role in supporting free expression? What responsibility must we, as users, take for our own knowledge and resilience? What does it mean for our government to protect our rights and not just “ban the bad”? These are the questions that I think should guide a principled approach toward platform speech.
