In an interesting bit of news likely to get certain freedom of speech advocates in a twist — read: those who advocate for freedom of hate speech — Facebook has finally promised to block content that praises or supports white nationalism and separatism.
This policy change follows the tragic terrorist attacks on two mosques in Christchurch, New Zealand, which were live-streamed on Facebook by the attacker. Fifty people were killed and roughly 50 more were wounded in the mass shooting. Since then, anti-hate organizations have pressured Facebook to make a judgment call, demanding clarity on how the platform will handle all forms of hate speech.
In the name of protecting First Amendment rights, Facebook has previously been criticized for not monitoring white nationalist content. For example, Facebook failed to prevent the organizing of 2017's Unite the Right rally in Charlottesville, where anti-hate protester Heather Heyer was killed. The social media platform did not remove the rally's event page until the day before the event, and as some activists argued then and continue to argue now, these protections are too little, too late.
Last year, a Motherboard investigation found that, though Facebook banned content promoting "white supremacy" on its platform, it allowed "white nationalism" and "white separatism." For instance, users were able to vocally support "white-only nations" on Facebook without consequence.
Naturally, it should go without saying that this all feels inherently contradictory, seeing as how "white supremacy," "white nationalism" and "white separatism" are all sides of the same racist coin, but okay — file it all under "white nonsense."
Nevertheless, Facebook says this policy change will help improve how it identifies and blocks content from hate groups. Users searching for hate-related words and phrases will be redirected to Life After Hate, a charity founded by former white supremacists that fights extremism and helps people extricate themselves from racist groups. Instagram, which is also owned by Facebook, will follow the same policy.
In a post called "Standing Against Hate," Facebook noted that its previous policy had treated white nationalism as an acceptable ideal in the same way as "American pride and Basque separatism, which are an important part of people's identity." The platform apparently consulted "members of civil society and academics" to gain deeper insight into how and why such a stance could be problematic, and of course, drew the conclusion many of us have always known: that white nationalism can't be "meaningfully separated" from white supremacy or organized hate groups.
Twitter set an example of what this can look like last September when it launched broader rules detailing its moderation policy, including a ban on "dehumanizing speech." It now says "language that makes someone less than human can have repercussions off the service, including normalizing serious violence."
Whether we consider all speech free or not, speech intended to harm or incite hate, as it has throughout history, must be taken seriously. Otherwise, we'll all just be shaking our heads solemnly in light of the next racially motivated tragedy.