What you’re looking at above is a screenshot of a post I reported to Facebook as “hate speech.” This was Facebook’s response (bold mine):
Your report
Today at 3:13 PM
You anonymously reported XXXXXXXXX’s share for displaying hate speech.
Thanks for your feedback
Today at 4:16 PM
Thanks for your report - you did the right thing by letting us know about this.
The post was reviewed, and though it doesn't go against one of our specific Community Standards, we understand that it may still be offensive to you and others.
No one should have to see posts they consider hateful on Facebook, so we want to help you avoid things like this in the future.
From the list above, you can block XXXXXXXXX directly, or you may be able to unfriend or unfollow them. If you unfollow them, you’ll stay friends on Facebook but you won’t see their posts in your News Feed.
We know these options may not apply to every situation, so please let us know if you see something else you think we should review. You may also consider using Facebook to speak out and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.
Over the past couple of weeks, I have been testing Facebook to find out just what they consider “hate speech,” which is one of the options users are given when reporting a post. To say I have been disappointed with Facebook’s responses is a massive understatement. The above example is only one case in which they have essentially defended a vile post that promotes violence and hate. Note that I also reported the same post for violence, and that report, too, was rejected. (The green frog, for those who may be unaware, is a cartoon character known as Pepe the Frog that has been co-opted by the alt-right, white supremacist movement in recent years.)
What is obvious from those last two bolded sentences is that Facebook wants users to do the work they should be doing.
Facebook had a meeting today with civil rights groups who have been advocating (successfully) for an ad boycott of Facebook. The meeting didn’t go well, according to this story in the Washington Post:
Civil rights leaders organizing a major advertising boycott of Facebook said they remained unconvinced that the social network is taking enough action against hate speech and disinformation after meeting with Mark Zuckerberg and other Facebook executives on Tuesday.
Civil rights leaders used the session to press Chief Operating Officer Sheryl Sandberg and Zuckerberg, Facebook’s chief executive, to institute changes at Facebook, including installing a top-level executive who will ensure the global platform does not fuel racism and radicalization.
Color of Change President Rashad Robinson described the meeting as “disappointing” during a news conference later Tuesday. The organizers of the campaign, known as #StopHateForProfit, provided a list of demands to the social network days before the meeting, he said, and the company did not have clear responses to their recommendations.
“Attending alone is not enough,” said Robinson, who participated in the meeting over Zoom, which lasted over an hour. “At this point, we were expecting some very clear answers to the recommendations we put on the table. And we did not get them.”
…
“It was abundantly clear in our meeting today that Mark Zuckerberg and the Facebook team is not yet ready to address the vitriolic hate on their platform,” Greenblatt said.
To Facebook’s credit, my reports on violent threats have resulted in many more posts and comments being taken down.
But besides trying to understand just what crosses the line for Facebook in terms of hate speech, I have also wondered what Facebook’s criteria are for permanently removing someone who consistently posts provably false, violent and/or hate-filled content.
In my two-week experiment, I reported a number of individuals numerous times for posting violent threats, and I often succeeded in having those threats taken down. The response of these repeat offenders? Usually derisive laughter with their friends about having posts taken down; they faced little or no actual consequence. Some mention an occasional time-out in “Facebook jail,” but few posters seem to be removed permanently. And many of these posters appear to be using multiple names, operating two, three or four different personal Facebook pages.
Obviously, software exists that could catch and remove these repeat offenders and their multiple accounts. (We know it is in use at Daily Kos.) But Facebook appears unwilling to deploy such software. People who repeatedly post provably false and dangerous conspiracy theories, outright lies, smears, hate speech, violent imagery and violent threats continue to post.
Why?
Mark Zuckerberg and Sheryl Sandberg don’t seem interested in actually creating the environment they claim they want for Facebook. Facebook’s “Community Standards” explanation for hate speech includes the following:
We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.
We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We protect against attacks on the basis of age when age is paired with another protected characteristic, and also provide certain protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.
It goes on for a bit more after that, but given that the image at the top of this post didn’t qualify under their own definition of “hate speech,” it is hard to see just what does qualify.
The only thing Zuckerberg and Sandberg will understand is watching advertising revenue drain out of Facebook. Keep up the pressure and take money out of their pockets. Because all indications are that they do not care a whit if they are helping to foment violence and hate.