Facebook published a series of posts on Thursday detailing, among other things, the platform's progress in identifying hate speech and misinformation through advances in its AI technology. The posts accompany Facebook's Community Standards Enforcement Report, November 2020.

Facebook publishes its Community Standards Enforcement Report every quarter, detailing statistics on, and actions taken against, violations of the platform's community standards and guidelines. The report also covers Instagram, which Facebook owns, so the same guidelines and policies apply to both platforms.

For the first time, the social media giant has included data on the global prevalence of hate speech on Facebook in its report. According to Facebook, hate speech prevalence in Q3 2020 was 0.10% – 0.11%, or roughly 1 view of hate speech for every 1,000 views of Facebook content. The company also says it has made "real" progress in detecting hate speech with the help of its AI technology.
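
To make the prevalence metric concrete, here is a minimal sketch of how such an estimate maps onto views. The sample size and counts below are made up for illustration; Facebook's published figure is the 0.10% – 0.11% range itself, not this methodology.

```python
# Hypothetical sample illustrating the prevalence metric; the 0.10%-0.11%
# range is Facebook's published figure, but these counts are invented.
views_sampled = 1_000_000   # content views in a hypothetical sample
hate_views = 1_050          # of those, views of content labeled hate speech

prevalence = hate_views / views_sampled
print(f"Prevalence: {prevalence:.2%}")                     # ~0.10%
print(f"About {prevalence * 1_000:.2f} hate-speech views per 1,000 views")
```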

Enforcing standards and guidelines starts with detecting violations. Facebook's approach is twofold: AI and user reports. Users can manually report a post that they think violates Facebook's or Instagram's rules, while AI scans both new posts and reported posts and removes or flags content automatically. Depending on the complexity of the case, content can also be routed to human moderators, whom Facebook employs through third-party contractors around the globe.
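
A minimal sketch of how such a two-track pipeline might be structured is below. The class names, thresholds, and toy classifier are hypothetical illustrations, not Facebook's actual systems.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REMOVE = "remove"      # taken down automatically
    ESCALATE = "escalate"  # routed to a human moderator

@dataclass
class Post:
    post_id: str
    text: str
    user_reported: bool = False

TOY_BLOCKLIST = {"example-slur"}  # placeholder; real systems use ML, not word lists

def classify(post: Post) -> tuple[float, float]:
    """Toy stand-in for an ML classifier; returns (violation_score, confidence)."""
    flagged = any(term in post.text.lower() for term in TOY_BLOCKLIST)
    return (0.97, 0.95) if flagged else (0.05, 0.90)

def moderate(post: Post, remove_at: float = 0.95, review_at: float = 0.60) -> Verdict:
    score, confidence = classify(post)
    # High-confidence violations are removed automatically.
    if score >= remove_at and confidence >= 0.9:
        return Verdict.REMOVE
    # Ambiguous cases, and anything a user reported, go to human review.
    if score >= review_at or post.user_reported:
        return Verdict.ESCALATE
    return Verdict.ALLOW

print(moderate(Post("1", "a friendly post")))                      # Verdict.ALLOW
print(moderate(Post("2", "a friendly post", user_reported=True)))  # Verdict.ESCALATE
```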

User reports and human moderators can only do so much on a platform with over 2.7 billion monthly active users, so AI has to do most of the work to provide enforcement at scale. According to Facebook's quarterly report, its AI technology now proactively detects 94.7 percent of the hate speech the company removes, up from 80.5 percent a year ago and from just 24 percent in 2017.
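
The proactive rate is simply the share of removed content that AI flagged before any user reported it. The split below is back-calculated from the reported percentage and is illustrative only:

```python
# Share of removed hate speech that AI flagged before any user report.
# The total comes from Facebook's Q3 2020 report; the proactive count is
# back-calculated from the reported ~94.7% rate, not independently published.
removed_total = 22_100_000
removed_proactively = 20_930_000

proactive_rate = removed_proactively / removed_total
print(f"Proactive detection rate: {proactive_rate:.1%}")  # ~94.7%
```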

The numbers are indeed impressive, especially considering how tricky it is to determine what actually constitutes hate speech, but they also raise concerns about false positives. Sometimes defining hate speech is even harder than detecting it. Facebook says it relies on global experts and stakeholders to create the policies and rules for its platform, and loosely defines hate speech as "a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability."

Even with a definition in hand, developing AI that can detect hate speech is a much bigger challenge. Facebook said the progress was made possible by advances in the automated tools it uses to train AI language systems, along with newly introduced systems such as the Reinforced Integrity Optimizer (RIO) and the Linformer AI architecture. Together, these help Facebook develop AI with greater language-understanding capabilities.
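
Linformer's core idea is public: instead of computing a full n × n attention map over a sequence of length n, it projects keys and values down to a fixed length k, making attention linear in sequence length. A minimal NumPy sketch of that idea, with random weights standing in for learned projections:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention: project K and V from sequence length n down
    to a fixed k, so the attention map is (n, k) instead of (n, n)."""
    K_proj = E @ K                      # (k, d)
    V_proj = F @ V                      # (k, d)
    d = Q.shape[-1]
    scores = Q @ K_proj.T / np.sqrt(d)  # (n, k) -- linear in n, not quadratic
    return softmax(scores, axis=-1) @ V_proj  # (n, d)

# Toy usage; real models learn the projections E and F during training.
rng = np.random.default_rng(0)
n, d, k = 1024, 64, 128
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
print(linformer_attention(Q, K, V, E, F).shape)  # (1024, 64)
```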

The same AI developments, along with improvements to SimSearchNet, an image-matching tool, and a new deepfake detection tool, were also applied to misinformation detection, which Facebook detailed in a separate post.

Facebook introduced SimSearchNet++, which played a major role in detecting misinformation. "Facebook has deployed SimSearchNet++, an improved image matching model that is trained using self-supervised learning to match variations of an image with a very high degree of precision and improved recall," Facebook wrote in the post.
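
Facebook has not published SimSearchNet++'s internals, but embedding-based image matching generally works by encoding images into vectors and comparing them against a similarity threshold. A deliberately simplified sketch of that pattern follows; the flatten-and-normalize "encoder" is a toy stand-in for a learned, self-supervised model.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Toy 'encoder': flatten and L2-normalize. A SimSearchNet++-style model
    would instead be a network trained with self-supervision."""
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def is_match(a: np.ndarray, b: np.ndarray, threshold: float = 0.9) -> bool:
    """Call two images near-duplicates when their embeddings' cosine
    similarity clears a threshold tuned for high precision."""
    return float(embed(a) @ embed(b)) >= threshold

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
# A lightly edited variant (small pixel noise) should still match.
variant = np.clip(original.astype(int) + rng.integers(-5, 6, size=original.shape), 0, 255)
print(is_match(original, variant))  # True
```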

The company says it works with more than 60 fact-checking organizations around the world, combining that human effort with its AI technology to combat misinformation at scale.

According to Facebook’s Q3 enforcement report, the company has taken action on 22.1 million pieces of hate speech content on Facebook, about 95% of which was proactively identified; and 6.5 million pieces of hate speech content (up from 3.2 million in Q2) on Instagram, about 95% of which was proactively identified (up from about 85% in Q2).

The company also took down 19.2 million pieces of violent and graphic content and 3.5 million pieces of bullying and harassment content on Facebook, with lower numbers on Instagram: 4.1 million and 2.6 million pieces, respectively.

Although the improvements and detection rates are impressive, the prospect of AI doing the majority of the detection and enforcement work on the platform raises concerns about posts and accounts being taken down due to false positives, as AI remains susceptible to error.

The company has acknowledged that its AI technology is still not perfect and requires further development. As a last resort, users whose content has been removed and who have exhausted Facebook's internal appeals can appeal to the Oversight Board to contest the platform's decision and request a re-evaluation.