
Major social media intermediaries have bowed to the Centre and its new IT rules (Twitter has been slammed more than once for not complying with them). Recently, Google published its maiden transparency report in accordance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Now it is Facebook’s turn: the king of social media took down no less than 30 million pieces of content across 10 violation categories between May 15 and June 15, the social media giant said in its maiden monthly compliance report, as mandated by the IT rules.

Additionally, Facebook-owned Instagram took action against about two million pieces of content across nine categories during the same period.

The new IT rules, which have been at the centre of the feud between the popular micro-blogging site Twitter and the ruling BJP government, mandate that digital platforms with over 5 million users publish monthly compliance reports detailing the complaints received and the action taken on them. The report must also include the number of specific communication links or parts of information that the intermediary has removed or disabled access to through proactive monitoring using automated tools.

The categories under which the removed pieces of content fell are as follows – content related to spam (25 million), violent and graphic content (2.5 million), adult nudity and sexual activity (1.8 million), and hate speech (3,11,000). The other categories were bullying and harassment (1,18,000), suicide and self-injury (5,89,000), dangerous organizations and individuals: terrorist propaganda (1,06,000), and dangerous organizations and individuals: organized hate (75,000).

As for Instagram, the categories under which the “actioned” content fell were – content related to suicide and self-injury (6,99,000), violent and graphic content (6,68,000), adult nudity and sexual activity (4,90,000), and bullying and harassment (1,08,000). The other categories were hate speech (53,000), dangerous organizations and individuals: terrorist propaganda (5,800), and dangerous organizations and individuals: organized hate (6,200).

What does “actioned” content mean? It refers to the number of pieces of content, such as posts, photos, videos, or comments, on which action was taken for violating the platforms’ standards. Taking action could include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some audiences with a warning.

According to a Facebook spokesperson, the company has consistently invested in technology, people, and processes over the years to keep users safe and secure online and to enable them to express themselves freely on its platform. “We use a combination of artificial intelligence, reports from our community, and review by our teams to identify and review content against our policies. We’ll continue to add more information and build on these efforts towards transparency as we evolve this report,” the spokesperson said.

Facebook will publish its next report on July 15, which will include data from WhatsApp as well as details of user complaints received and the action taken on them.

The company said it expected to publish subsequent editions of the report with a lag of 30-45 days after the reporting period, to allow sufficient time for data collection and validation, and that it would continue to bring more transparency to its work and include more information about its efforts in future reports.