When you own, control and operate the platforms that carry over three quarters of the world's social media traffic, you are inevitably exposed to the massive spread of fake information. That is exactly what happens on Facebook and the platforms it owns, and a large portion of that misinformation comes from fake accounts created for the specific purpose of spreading it.

The company has now released new figures on the actions it has taken to block the spread of fake content, and calling the numbers staggering would be an understatement.

In its content moderation report released this Wednesday, Facebook revealed that it took down a humongous 3.2 billion fake accounts from its platform between April and September this year. To put that in perspective, the figure is roughly half the number of people alive on Earth. The company also removed millions of posts pertaining to child abuse and suicide.

The social media giant removed 1.55 billion accounts in the same period last year, so with a count nearly double that figure now declared, it is clear that the menace of misinformation is only growing. What can also be agreed, however, is that Facebook is getting more serious about the authenticity and integrity of its platform.

The Menlo Park behemoth also revealed, for the very first time, the count of posts it has removed from its hugely successful image-sharing platform, Instagram. Of late, Instagram has been increasingly targeted by purveyors of fake information, and many researchers have identified the platform as a new breeding ground for fake news.

In its fourth content moderation update, the company also disclosed that detection rates for violating content were lower on Instagram than on Facebook's original app, where it has been proactively detecting such content for some time now.

The scale of fake and concerning content can be gauged from the figures the company provided. Facebook says the rate at which it detects and removes content associated with Al Qaeda, ISIS and their affiliates on Facebook has remained above 99%. Additionally, the rate at which it proactively detected content affiliated with any terrorist organization has reached 98.5% on Facebook and 92.2% on Instagram.

Facebook removed 11.6 million pieces of content related to child nudity and sexual exploitation from its core app, and 754,000 such pieces from Instagram, in the third quarter.

In addition, Facebook said it removed 11.4 million instances of hate speech during the period, up from 7.5 million in the previous six months. The company said it is beginning to remove hate speech proactively, the way it already does with some extremist content, child exploitation and other material.

Another first in the report is the set of figures on posts related to suicide and self-harm. Facebook says it took action on about 2 million pieces of self-harm content in Q2 2019, of which 96.1% were detected proactively, and it saw further progress in Q3, when it removed 2.5 million pieces of content, of which 97.3% were detected proactively.

On Instagram, similar progress was visible. Facebook removed about 835,000 pieces of content in Q2 2019, of which 77.8% were detected proactively, and about 845,000 pieces in Q3 2019, of which 79.1% were detected proactively.

The company also removed 4.4 million pieces of content related to drug sales in the same quarter.