Black activists who regularly use Facebook to discuss issues of racial justice are quite familiar with “Facebook jail.” That’s the nickname they’ve given to having an account locked for a period of time for violating the platform’s posting rules. The problem is that the line between what is allowed and what is deemed inappropriate and ultimately banned seems quite random, and it often ends up protecting the most privileged groups of people.
Black activists routinely talk about being banned for their posts about racism, particularly posts that directly challenge or call out white people for their racism. This is... curious. While those posts may be provocative and make folks uncomfortable, they are not violent or threatening. Meanwhile, significantly more disturbing and violent content seems to go unnoticed. It is as if, in Facebook’s efforts to deal with hate speech, certain legitimate political expression gets censored while other kinds of hateful speech go unchecked.
In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”
Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.
But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.
“All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.
This makes you wonder who and what exactly Facebook is protecting. The company says that its algorithms do not allow for attacks against larger groups of people (white people, in this instance) designated as protected categories. But specific “sub-categories” of people (radicalized Muslims) are fair game.
The reason is that Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at “protected categories”: race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. It gives users broader latitude when they write about “subsets” of protected categories. White men are considered a protected group because both of their traits (race and sex) are protected, while female drivers, black children and radicalized Muslims are subsets, because one of their characteristics (driving, age, radicalization) is not protected.
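As reported, the rule amounts to a simple check: an attack is removed only if every trait describing the targeted group is itself a protected category. Here is a minimal sketch of that logic in Python; the trait names, function names and taxonomy are hypothetical illustrations of the reported rule, not Facebook’s actual system or code.

```python
# Hypothetical sketch of the reported "protected category vs. subset" rule.
# Trait names and functions are illustrative assumptions, not Facebook's code.

PROTECTED_TRAITS = {
    "race", "sex", "gender_identity", "religious_affiliation",
    "national_origin", "ethnicity", "sexual_orientation",
    "serious_disability_or_disease",
}


def is_protected_group(target_traits):
    """A target counts as a protected group only if *every* trait describing it
    is a protected category; a single unprotected trait (an occupation, an age
    bracket, "radicalized") turns it into a 'subset' with broader latitude."""
    return all(trait in PROTECTED_TRAITS for trait in target_traits)


def should_remove(is_attack, target_traits):
    """Remove a post only when it is an attack aimed at a fully protected group."""
    return is_attack and is_protected_group(target_traits)


# Examples mirroring the cases in the article:
print(should_remove(True, {"race", "sex"}))         # white men -> True (removed)
print(should_remove(True, {"sex", "occupation"}))   # female drivers -> False (allowed)
print(should_remove(True, {"race", "age"}))         # black children -> False (allowed)
print(should_remove(True, {"religious_affiliation", "radicalization"}))  # radicalized Muslims -> False (allowed)
```

Under this sketch, the outcome the article describes falls straight out of the rule: an attack on “white men” is removed, while attacks on “black children” or “radicalized Muslims” are not, simply because one descriptor in the target is not a protected category.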
We get it. Facebook is a huge, worldwide company trying to develop objective standards for how it addresses discrimination among its users. This cannot be an easy task. But the fact remains: not all people in society are positioned equally, and not all speech is fair and equal. Like it or not, it carries a specific meaning and sends a very particular message when black female activists are repeatedly banned for their anti-racist posts while white men are free to post all manner of offensive and gross things about specific groups of people.
One document trains content reviewers on how to apply the company’s global hate speech algorithm. The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men. [...]
The company recently pledged to nearly double its army of censors to 7,500, up from 4,500, in response to criticism of a video posting of a murder. Their work amounts to what may well be the most far-reaching global censorship operation in history. It is also the least accountable: Facebook does not publish the rules it uses to determine what content to allow and what to delete.
There are no US government requirements for Facebook or other social media platforms to censor content. But in an increasingly polarized society, how do platforms protect users and determine when free speech crosses the line into indirect or direct violence? And if banning is concentrated among a specific population, especially when that population is already marginalized, at what point does the site take responsibility and hold itself accountable to do better?
“There is no path that makes people happy,” [Dave Willner, a member of Facebook’s content team, said]. “All the rules are mildly upsetting.” This is not an enviable position for Facebook to be in. At a company guided by logic, objectivity and computer science, its teams no doubt struggle with how to tackle censorship and prevent users from using hateful and degrading language. At the same time, black people and other minorities using the platform to lift up issues of racial justice are sent a powerful message by this seemingly disproportionate impact: that our voices don’t count and that our speaking truth to power via social media is not welcome. Who knows what the future of this platform will bring? But if users have to go to such incredible lengths to get their messages out, they may well gravitate toward other, less restrictive platforms in the future.