Facebook on Monday released a report detailing its use of AI, human fact-checkers, and moderators to enforce its community standards. This time around, the report, known as the Community Standards Enforcement Report, focused predominantly on Facebook’s AI upgrades that will help the platform curb COVID-19-related misinformation and hateful speech disguised as memes.

The social media giant claims to have put warning labels on 50 million coronavirus-related posts on the platform. While text posts and article links containing false information are doing the rounds online, Facebook has observed that a substantial amount of misinformation also takes the form of photos and videos.

In the case of images, varying just a few pixels can flip the message entirely, and identical images may be accompanied by text with completely contrasting implications. A computer algorithm can either detect the minute differences between such images or pass them over as identical. Over the last two and a half years, Facebook has been working to counter instances of the latter on its platform. These multiyear efforts across many divisions of the company have produced SimSearchNet, a piece of analysis infrastructure that supports detailed inspection of near-duplicate images.
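To make the idea concrete, below is a minimal sketch of near-duplicate detection using a simple perceptual hash. This is not SimSearchNet, whose internals Facebook has not published, and the file names are hypothetical; it only illustrates why a copy edited by a few pixels should still match the original while a genuinely different image should not.

```python
# Illustrative sketch only: SimSearchNet is a learned model, not a
# perceptual hash, and "original.jpg" / "retouched.jpg" are
# hypothetical file names.
from PIL import Image
import numpy as np

def average_hash(path: str, hash_size: int = 8) -> np.ndarray:
    """Downscale to grayscale, threshold at the mean: a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of fingerprint bits on which two images disagree."""
    return int(np.count_nonzero(a != b))

# A lightly re-touched copy lands within a small Hamming radius of the
# original, while an unrelated image does not; matching such pairs is
# what lets a warning label follow an image through minor edits.
if hamming(average_hash("original.jpg"), average_hash("retouched.jpg")) <= 5:
    print("near-duplicate: apply the same treatment as the original")
```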

In the case of COVID-19, the system helps the company locate similar images that may be delivering the same piece of false information. “Once independent fact-checkers have determined that an image contains misleading or false claims about coronavirus, SimSearchNet, as part of our end-to-end image indexing and matching system, is able to recognize near-duplicate images so we can apply warning labels,” said the company.

Capable of detecting the slightest variants, SimSearchNet also helps the company police Facebook Marketplace, where people trying to get around the rules have been uploading re-touched images of items on sale.
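The matching flow the quote describes might look roughly like the toy sketch below: once fact-checkers flag an image, its fingerprint goes into an index, and later uploads that fall within a small distance inherit the label. The class name, threshold, and brute-force lookup are illustrative assumptions, not Facebook’s system, which would use a learned representation and approximate nearest-neighbour search over billions of images.

```python
# Toy in-memory index for label propagation; everything here is an
# assumption for illustration, not Facebook's API.
import numpy as np

class FlaggedImageIndex:
    def __init__(self, max_distance: int = 5):
        self.fingerprints = []   # fingerprints of fact-checked images
        self.labels = []         # e.g. "false COVID-19 claim"
        self.max_distance = max_distance

    def add(self, fingerprint: np.ndarray, label: str) -> None:
        # Called once independent fact-checkers rule on an image.
        self.fingerprints.append(fingerprint)
        self.labels.append(label)

    def match(self, fingerprint: np.ndarray):
        # Every new upload is checked against the flagged set; a hit
        # means the near-duplicate gets the same warning label.
        for stored, label in zip(self.fingerprints, self.labels):
            if np.count_nonzero(stored != fingerprint) <= self.max_distance:
                return label
        return None

# Demo with synthetic 64-bit fingerprints standing in for real images.
rng = np.random.default_rng(0)
flagged = rng.integers(0, 2, 64).astype(bool)   # a debunked image
index = FlaggedImageIndex()
index.add(flagged, "false COVID-19 claim")

upload = flagged.copy()
upload[:3] = ~upload[:3]                        # a lightly re-touched copy
print(index.match(upload))                      # -> "false COVID-19 claim"
```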

The next issue the platform is looking to counter is hateful speech disguised as memes. As with misinformation images, memes that imply hateful messages are tricky to identify: such posts usually make sense only through the interplay of text and imagery, while each element on its own may be unproblematic. This relationship, which builds the context for hate speech, is not easily detected by AI. Facebook, however, has arrived at a two-step technique. First, human reviewers label a large set of memes as hateful or not. Next, a machine learning system is trained on this data, but with a twist: instead of analysing the image and the text separately and then relating the two, Facebook’s system combines them from the beginning. The mechanics follow human perception, consuming all components of the piece of media at once. Facebook says the accuracy of the resulting classifier currently sits at about 65-70%.
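For readers curious what “combining them from the beginning” can look like in code, here is a minimal early-fusion sketch in PyTorch. Facebook has not published the exact architecture, so the encoder dimensions, layer sizes, and names below are assumptions; the point is only the structural contrast with a late-fusion approach, where image and text would be scored separately and the scores reconciled afterwards.

```python
import torch
import torch.nn as nn

class EarlyFusionMemeClassifier(nn.Module):
    """Toy early-fusion model: image and text features are joined
    *before* classification, so the network can learn interactions
    between the picture and the caption, which is where the hateful
    meaning of a meme usually lives. Dimensions are illustrative."""

    def __init__(self, image_dim: int = 512, text_dim: int = 300, hidden: int = 256):
        super().__init__()
        # The feature extractors (a CNN for the image, an embedding
        # model for the text) are assumed to run upstream of this module.
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),   # hateful vs. not hateful
        )

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([image_feats, text_feats], dim=-1)  # fuse up front
        return self.fusion(fused)

model = EarlyFusionMemeClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 300))  # a batch of 4 memes
print(logits.shape)                                       # torch.Size([4, 2])
```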

Facebook had announced its plans to rely on AI for moderation in the early days of the COVID-19 crisis. The platform had voiced concerns about “false positives”, or content being flagged when it shouldn’t be. While increased moderation may be creating problems, against the backdrop of a pandemic, hate speech, violent threats, and misinformation continue to do the rounds on the platform. The company recently faced scrutiny over a video on the platform that discouraged people from wearing masks or from seeking a vaccine once one becomes available.