

Over the past few months, Facebook’s content moderation policies have been called into question, and fixing them has become one of the company’s top priorities. While the social networking giant has given us a peek at some initiatives it is running to curb the spread of misleading and graphic content, a lot remains under the covers. Today, those internal content moderation documents have come to light.

The Guardian hit the motherlode when it got its hands on more than 100 documents, covering everything from internal training manuals to spreadsheets and flowcharts. They provide a first-hand look at the blueprints and rules employed to moderate content related to violence, hate speech, terrorism, pornography, racism, and self-harm. These guidelines define what users can post on the website and how content moderators must act on reported content, often with as little as 10 seconds to make a decision because of the sheer volume of work.

Speaking on the matter, an anonymous source said:

Facebook cannot keep control of its content. It has grown too big, too quickly.

Several reports, however, suggest that Facebook could face backlash over its moderation policies on threats of violence and other graphic content. Running a free platform that is now focused on building communities is no simple task. Facebook uses AI-powered automated systems to weed out clear-cut violations, while doubtful cases are left for human moderators to judge and categorize as safe or not, roughly along the lines sketched below.
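To make that two-stage flow concrete, here is a minimal sketch of how such a triage might work. This is purely illustrative and not Facebook’s actual system; the function name, thresholds, and routing labels are all assumptions.

```python
# Hypothetical sketch of the two-stage triage the documents describe:
# an automated classifier removes clear violations, and anything
# uncertain is queued for a human moderator. Names and thresholds
# are illustrative, not Facebook's actual implementation.

REMOVE_THRESHOLD = 0.95   # confident violation: remove automatically
SAFE_THRESHOLD = 0.05     # confident non-violation: leave up

def triage(post_text: str, violation_score: float) -> str:
    """Route a post based on an automated classifier's score (0..1)."""
    if violation_score >= REMOVE_THRESHOLD:
        return "auto_remove"
    if violation_score <= SAFE_THRESHOLD:
        return "keep"
    # Everything in between is 'doubtful' and goes to a human,
    # who per the leak may have as little as ~10 seconds per decision.
    return "human_review_queue"

print(triage("example post", 0.5))  # -> human_review_queue
```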

The social media giant wants to build a close-knit community but is caught between protecting its users and keeping freedom of speech intact. Case in point: Facebook permits the general use of some violent statements unless they constitute a credible threat against an individual or group. It is one of the more controversial guidelines, and it means you can post statements like ‘Little girl needs to keep to herself before daddy breaks her face’ or ‘I hope someone kills you’ to vent your frustration online. The documents explain the reasoning:

people commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.

While political threats remain a firm no, Facebook believes statements like those above neither signal genuine intent to harm nor violate its guidelines, precisely because they are made in a facetious, unserious way. Its roughly 3,000 moderators have to judge the intent behind such statements and then decide whether they are safe to remain on the platform.
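As a rough illustration of that credibility judgment, the sketch below encodes the distinction the manuals reportedly draw: generic venting is tolerated, while threats against protected targets or threats that include concrete specifics get escalated. The target categories and checks here are simplified assumptions, not Facebook’s real criteria.

```python
# Illustrative sketch of the credible-threat distinction: generic
# venting stays up, but a protected target (e.g., a head of state)
# or concrete specifics (timing, method, location) make a statement
# credible. All categories here are simplified assumptions.

PROTECTED_TARGETS = {"president", "head of state"}

def is_credible_threat(text: str, target: str, has_specifics: bool) -> bool:
    """Return True if a violent statement should be treated as credible."""
    if target.lower() in PROTECTED_TARGETS:
        return True           # political threats are always escalated
    return has_specifics      # specifics turn venting into a credible threat

print(is_credible_threat("I hope someone kills you", "random user", False))  # -> False (venting)
```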

These permissive rules sit uneasily alongside the company’s attempts to curb suicide and the spread of hate speech. Online abuse is one of the notable factors that can send an individual spiraling toward self-harm. Facebook now helps shape the mindset of millions, yet it allows graphic videos of terrorism and self-harm to remain on the platform in order to shed light on these issues. Some of this content is even visible to minors, and the documents describe it as follows:

Videos of violent deaths are disturbing but can help create awareness. For videos, we think minors need protection and adults need a choice. We mark as ‘disturbing’ videos of the violent deaths of humans.

Further, one of the leaked documents sheds light on how Facebook handles content related to child abuse and pornography. This may come as a surprise: the social media giant has outlined plans to crack down on such content, including the rising problem of revenge porn, yet it does nothing until someone flags it.

Facebook automatically removes photos shared with sadism or celebration as the motive, but all other content must be flagged by users before its moderators double-check it. Facebook has confirmed that there are ‘some situations where we do allow images of non-sexual abuse of a child for the purpose of helping the child.’
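The reported rule reduces to a small decision table, sketched below with hypothetical labels: automatic removal when the sharer’s motive is sadism or celebration, human review only once a user flags the content, and no action otherwise.

```python
# Hypothetical encoding of the child-abuse image rule as reported:
# images shared with sadism or celebration are removed automatically,
# while other instances are only reviewed after a user flags them.
# Labels and the function name are assumptions for illustration.

def handle_abuse_image(shared_with: str, user_flagged: bool) -> str:
    """Decide the action for a reported non-sexual child-abuse image."""
    if shared_with in {"sadism", "celebration"}:
        return "auto_remove"
    if user_flagged:
        return "moderator_review"   # moderators double-check flagged content
    return "no_action"              # nothing happens until someone flags it

print(handle_abuse_image("unclear", user_flagged=True))  # -> moderator_review
```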

We’ve contacted Facebook for more information on these leaked documents and will update you once we hear back from them.
