FACEBOOK HAS A RIGHT TO BLOCK ‘HATE SPEECH’—BUT HERE’S WHY IT SHOULDN’T (Part One)

February 11, 2019

By Brian Amerige.

“In this fascinating two-part series, a real Facebook insider gives his view on censorship and Facebook.”

In late August, I wrote a note to my then-colleagues at Facebook about the issues I saw with political diversity inside the company. You may have read it, because someone leaked the memo to the New York Times, and it spread outward rapidly from there. Since then, a lot has happened, including my departure from Facebook. I never intended my memos to leak publicly—they were written for an internal corporate audience. But now that I’ve left the company, there’s a lot more I can say about how I got involved, how Facebook’s draconian content policy evolved, and what I think should be done to fix it.

My job at Facebook never had anything to do with politics, speech, or content policy—not officially. I was hired as a software engineer, and I eventually led a number of product teams, most of which were focused on user experience. But issues related to politics and policy were central to why I had come to Facebook in the first place.

When I joined the Facebook team in 2012, the company’s mission was to “make the world more open and connected, and give people the power to share.” I joined because I began to recognize the central importance of the second half of the mission—give people the power to share—in particular. A hundred years from now, I think we’ll look back and recognize advances in communication technologies—technologies that make it faster and easier for one person to get an idea out of their head and in front of someone else (or the whole world)—as underpinning some of the most significant advances humanity has ever witnessed. I still believe this. It’s why I joined Facebook.

And for about five years, we made headway. Both the company and I had our share of ups, downs, growth, and setbacks. But, by and large, we aspired to be a transparent carrier of people’s stories and ideas. When my team was building the “Paper” Facebook app, and later the redesigned News Feed, we aspired for our designs to be like a drinking glass: invisible. Our goal was to get out of the way and let the content shine through. Facebook’s content policy reflected this, too. For a long time, the company was a vociferous (even if sometimes unprincipled) proponent of free speech.

As of 2013, this was essentially Facebook’s content policy: “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).”

By the time the 2016 U.S. election craze began (particularly after Donald Trump secured the Republican nomination), however, things had changed. The combination of Facebook’s corporate encouragement to “bring your authentic self to work” along with the overwhelmingly left-leaning political demographics of my former colleagues meant that left-leaning politics had arrived on campus. Employees plastered up Barack Obama “HOPE” and “Black Lives Matter” posters. The official campus art program began to focus on left-leaning social issues. In Facebook’s Seattle office, there’s an entire wall that proudly features the hashtags of just about every left-wing cause you can imagine—from “#RESIST” to “#METOO.”

In our weekly Q&As with Mark Zuckerberg (known internally as “Zuck”), the questions reflected the politicization. I’m paraphrasing here, but questions such as “What are we doing about those affected by the Trump presidency?” and “Why is Peter Thiel, who supports Trump, still on our board?” became common. And to his credit, Zuck always handled these questions with grace and clarity. But while Mark supported political diversity, the constant badgering of Facebook’s leadership by indignant and often politically intolerant employees increasingly began to define the atmosphere.

As this culture developed inside the company, no one openly objected. This was perhaps because dissenting employees, having watched the broader culture embrace political correctness, anticipated what would happen if they stepped out of line on issues related to “equality,” “diversity,” or “social justice.” The question was put to rest when “Trump Supporters Welcome” posters appeared on campus—and were promptly torn down in a fit of vigilante moral outrage by other employees. Then Palmer Luckey, boy-genius Oculus VR founder, whose company we acquired for billions of dollars, was put through a witch hunt and subsequently fired because he gave $10,000 to fund anti-Hillary ads. Still feeling brave?

It’s not a coincidence that it was around this time that Facebook’s content policy evolved to more broadly define “hate speech.” The internal political monoculture and external calls from left-leaning interest groups for us to “do something” about hateful speech combined to create a sort of perfect storm.

As the content policy evolved to incorporate more expansive hate speech provisions, employees who objected privately remained silent in public. This was a grave mistake, and I wish I’d recognized the scope of the threat before these values became deeply rooted in our corporate culture. The evolution of our content policy not only risked the core of Facebook’s mission, but jeopardized my own alignment with the company. As a result, my primary intellectual focus became Facebook’s content policy.

I quickly discovered that I couldn’t even talk about these issues without being called a “hatemonger” by colleagues. To counter this, I started a political diversity effort to create a culture in which employees could talk about these issues without risking their reputations and careers. Unfortunately, while the effort was well received by the 1,000 employees who joined it, and by most senior Facebook leaders, it became clear that those leaders were committed to sacrificing free expression in the name of “protecting” people. As a result, I left the company in October.

The posters that kicked off the “FB’ers for Political Diversity” group. The quotes come from Facebook employees who’d reached out to me after a post I wrote criticizing left-leaning art turned into a moral-outrage mob that tried to make me apologize for offending colleagues.
Let’s fast-forward to the present day. This is Facebook’s summary of its current hate speech policy:

“We define hate speech as a direct attack on people based on what we call protected characteristics—race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”

The policy aims to protect people from seeing content they feel attacked by. It doesn’t just apply to direct attacks on specific individuals (unlike the 2013 policy), but also prohibits attacks on “groups of people who share one of the above-listed characteristics.”

If you think this is reasonable, then you probably haven’t looked closely at how Facebook defines “attack.” Simply saying you dislike someone with reference to a “protected characteristic” (e.g., “I dislike Muslims who believe in Sharia law”) or applying a form of moral judgment (e.g., “Islamic fundamentalists who forcibly perform genital mutilation on women are barbaric”) are both technically considered “Tier-2” hate speech attacks, and are prohibited on the platform.

This kind of social-media policy is dangerous, impractical, and unnecessary.

The trouble with hate speech policies begins with the fact that there are no principles that can be fairly and consistently applied to distinguish hateful speech from speech that is not. Hatred is a feeling, and forming a policy that hinges on whether a speaker feels hatred is impossible. As anyone who’s ever argued with a spouse or a friend has experienced, grokking someone’s intent is often very difficult.

As a result, hate speech policies rarely just leave it at that. Facebook’s policy goes on to list a series of “protected characteristics” that, if targeted, constitute supposedly hateful intent. But what makes attacking these characteristics uniquely hateful? And where do these protected characteristics even come from? In the United States, there are nine federally protected classes. California protects 12. The United Kingdom protects 10. Facebook has chosen 11 publicly, though internally it defines 17. The truth is, any list of protected characteristics is essentially arbitrary. Absent a principled basis, such lists will only expand over time, as interest and identity groups claim to be offended and institutions cater to the most sensitive and easily offended among us.

The inevitable result of this policy metastasis is that, eventually, anything that anyone finds remotely offensive will be prohibited. Mark Zuckerberg not only recently posted a note that seemed to acknowledge this, but also included a handy graphic describing how Facebook is now beginning to down-rank content that isn’t prohibited, but is merely borderline.

Graph contained in Mark Zuckerberg’s November 15, 2018 post, ‘A Blueprint for Content Governance and Enforcement.’

Almost everything you can say is offensive to somebody. Offense isn’t a clear standard like imminent lawless action. It is subjective—left up to the offended to call it when they see it.

On one occasion, a colleague declared that I had offended them by criticizing a recently installed art piece in Facebook’s newest Menlo Park office. They explained that as a transgender woman, they felt the art represented their identity, told me they “didn’t care about my reasoning,” and that the fact they felt offended was enough to warrant an apology from me. Offense (or purported offense) can be wielded as a political weapon: An interest group (or a self-appointed representative of one) claims to be offended and demands an apology—and, implicitly with it, the moral and political upper hand. When I told my colleague that I meant what I said, that I didn’t think it was reasonable for them to be offended, and, therefore, that I wouldn’t apologize, they were left speechless—and powerless over me. This can be awkward and takes social confidence to do—I don’t want to offend anyone—but the alternative is far worse.

Consider Joel Kaplan, Facebook’s VP for Global Public Policy—and a close friend of recently confirmed U.S. Supreme Court Justice Brett Kavanaugh—who unnecessarily apologized to Facebook employees after attending Kavanaugh’s congressional hearing. Predictably, after securing an apology from him, the mob didn’t back down. Instead, it doubled down. Some demanded Kaplan be fired. Others suggested Facebook donate to #MeToo causes. Still others used the episode as an excuse to berate senior executives. During an internal town hall about the controversy, employees interrupted, barked and pointed at Zuck and Sheryl Sandberg with such hostility that several long-time employees walked away, concluding that the company “needed a cultural reset.” The lesson here is that while “offense” is certainly something to be avoided interpersonally, it is too subjective and ripe for abuse to be used as a policy standard.

Perhaps even more importantly, you cannot prohibit controversy and offense without destroying the foundation needed to advance new ideas. History is full of important ideas, like heliocentrism and evolution, that, despite later being shown to be true, were seen as deeply controversial and offensive because they challenged strongly held beliefs. Risking being offended is the ante we all pay to advance our understanding of the world.

But let’s say you’re not concerned about the slippery slope of protected characteristics, and you’re also unconcerned with the controversy endemic to new ideas. How about the fact that the truths you’re already confident in—for example, that racism is abhorrent—are difficult to internalize if they are treated as holy writ in an environment where people aren’t allowed to be wrong or offend others? Members of each generation must re-learn important truths for themselves (“Really, why is racism bad?”).

Brian Amerige is a former senior engineering manager at Facebook. You can follow him on Twitter @brianamerige. First published on Quillette.com.
