Dear Facebook: If This Isn’t Hate Speech, Then What Is?

May 21, 2015


Last month, many of my friends and I filed a complaint with Facebook
against an aggressively anti-Christian Facebook page titled “Virgin Mary Should’ve Aborted”.  Each of us received the following standardized response:

“Thank you for taking the time to report something that you feel may violate our Community Standards. Reports like yours are an important part of making Facebook a safe and welcoming environment. We reviewed the page you reported for containing hate speech or symbols and found it doesn’t violate our community standard on hate speech.”

One could argue that a page which mocks the deepest convictions of a majority of Americans—those who believe in Jesus Christ—by depicting a woman (Mary?) masturbating with a crumpled page from the scriptures, and which shows the unborn Christ in utero muttering the “f” word and Mary enjoying a cigar, is, in fact, a blatant form of hate speech.

But Facebook’s automated response generator doesn’t think so.  And after a brief review, the group’s administrators received an affirmation from Facebook, explaining that their page had been reinstated.  They published the response with a belligerent “Haha!”

THEN WHAT ABOUT THIS?

Facebook friend and pro-life speaker Rebecca Kiessling called attention to a new Facebook fan page, the title of which I won’t even post.  It’s called “F*** Jesus Christ and F*** Christianity”.  Its posts are tired atheist rants about evolution and science.  Its heading explains (with poor spelling and punctuation, and misguided theology), “The crucifix that you wear around your neck is an ancient torture device used to make the death of a criminal slow and painful.  When you put on or worship the crucifix you are in fact worshiping murder whether you admit it or not.”

Is this offensive?

Nope.  The same message comes back from Facebook: there’s nothing wrong with this, they assure us.

BUT WHAT ABOUT RAPE?  Rape is offensive, right?

Apparently, rape is not offensive.  The group RINJ (Rape Is No Joke) has opposed a number of pro-rape and rape-joke pages on Facebook, arguing that removing the pages would not violate free speech under Article 19 of the Universal Declaration of Human Rights or the principles recognized in international human rights law through the International Covenant on Civil and Political Rights.  RINJ has repeatedly challenged Facebook to remove the rape pages.  Finally, RINJ has turned to advertisers on Facebook, urging them not to let their advertising be posted on Facebook’s “rape pages”.

SO WHAT, EXACTLY, IS OFFENSIVE TO FACEBOOK’S ARBITRARY “COMMUNITY STANDARDS”?

Breastfeeding mothers are offensive.  Facebook’s “no bare breasts” policy in its decency code means that even if the baby is covering the nipple, posting a breastfeeding shot can alarm Facebook censors and get your post (or your entire Facebook account) removed.

And posts which are perceived as offensive by the Jewish, Muslim, and LGBT communities will be removed.

MY ADVICE TO CATHOLICS AND OTHER CHRISTIANS:  DON’T GIVE UP. 

Please continue to notify Facebook when you see instances of unfair discrimination, threats, and derogatory posts concerning your faith on Facebook.

*     *     *     *

Facebook’s vice president of global public policy, Marne Levine, has tried to explain the company’s confusing content policy on a page devoted to Facebook Safety.  The policy seems weighted toward gender issues:  deleting images and content that “threaten or incite gender-based violence or hate”.  Facebook has listened to complaints from the Jewish, Muslim and LGBT communities (although not to the Christians), and has established protective policies which ban offensive content regarding these groups.

In the letter, Levine explains:

Recently there has been some attention given to Facebook’s content policy. The current concern, voiced by Women, Action and The Media, The Everyday Sexism Project, and the coalition they represent, has focused on content that targets women with images and content that threatens or incites gender-based violence or hate. 

Many different groups which have historically faced discrimination in society, including representatives from the Jewish, Muslim, and LGBT communities, have reached out to us in the past to help us understand the threatening nature of content, and we are grateful for the thoughtful and constructive feedback we have received. In light of this recent attention, we want to take this opportunity to explain our philosophy and policies regarding controversial or harmful content, including hate speech, and to explain some of the steps we are taking to reduce the proliferation of content that could create an unsafe environment for users.

Facebook’s mission has always been to make the world more open and connected. We seek to provide a platform where people can share and surface content, messages and ideas freely, while still respecting the rights of others. When people can engage in meaningful conversations and exchanges with their friends, family and communities online, amazingly positive things can happen.

To facilitate this goal, we also work hard to make our platform a safe and respectful place for sharing and connection.  This requires us to make difficult decisions and balance concerns about free expression and community respect.  We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).  A list of prohibited categories of content can be found in our Community Standards at www.facebook.com/communitystandards.

In addition, our Statement of Rights and Responsibilities (www.facebook.com/legal/terms) prohibits “hate speech.” While there is no universally accepted definition of hate speech, as a platform we define the term to mean direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease. We work hard to remove hate speech quickly, however there are instances of offensive content, including distasteful humor, that are not hate speech according to our definition. In these cases, we work to apply fair, thoughtful, and scalable policies. This approach allows us to continue defending the principles of freedom of self-expression on which Facebook is founded. We’ve also found that posting  insensitive or cruel content often results in many more people denouncing it than supporting it on Facebook. That being said, we realize that our defense of freedom of expression should never be interpreted as license to bully, harass, abuse or threaten violence. We are committed to working to ensure that this does not happen within the Facebook community. We believe that the steps outlined below will help us achieve this goal.

We’ve built industry leading technical and human systems to encourage people using Facebook to report violations of our terms and developed sophisticated tools to help our teams evaluate the reports we receive and make or escalate the difficult decisions about whether reported content is controversial, harmful or constitutes hate speech. As a result, we believe we are able to remove the vast majority of content that violates our standards, even as we scale those systems to cover our more than 1 billion users, and even as we seek to protect users from those who seek to circumvent our guidelines by reposting content that has been taken down time and time again. 

In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want.  In other cases, content that should be removed has not been or has been evaluated using outdated criteria. We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better – and we will.

As part of doing better, we will be taking the following steps, that we will begin rolling out immediately:

  • We will complete our review and update the guidelines that our User Operations team uses to evaluate reports of violations of our Community Standards around hate speech.  To ensure that these guidelines reflect best practices, we will solicit feedback from legal experts and others, including representatives of the women’s coalition and other groups that have historically faced discrimination.
  • We will update the training for the teams that review and evaluate reports of hateful speech or harmful content on Facebook. To ensure that our training is robust, we will work with legal experts and others, including members of the women’s coalition to identify resources or highlight areas of particular concern for inclusion in the training. 
  • We will increase the accountability of the creators of content that does not qualify as actionable hate speech but is cruel or insensitive by insisting that the authors stand behind the content they create.  A few months ago we began testing a new requirement that the creator of any content containing cruel and insensitive humor include his or her authentic identity for the content to remain on Facebook.  As a result, if an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content. We will continue to develop this policy based on the results so far, which indicate that it is helping create a better environment for Facebook users.
  • We will establish more formal and direct lines of communications with representatives of groups working in this area, including women’s groups, to assure expedited treatment of content they believe violates our standards. We have invited representatives of the women’s coalition and Everyday Sexism to join the less formal communication channels Facebook has previously established with other groups.
  • We will encourage the Anti-Defamation League’s Anti-Cyberhate working group and other international working groups that we currently work with on these issues to include representatives of the women’s coalition to identify how to balance considerations of free expression, to undertake research on the effect of online hate speech on the online experiences of members of groups that have historically faced discrimination in society, and to evaluate progress on our collective objectives.
