
Technology giants like Facebook, Microsoft, and Google have often come under intense scrutiny for aiding the spread of ‘terrorist propaganda’ on their platforms. They have even been sued by aggrieved individuals who allege that the ‘material’ involvement of social platforms in promoting terrorism violates the law.

Now, some of the largest of these behemoths — Facebook, Twitter, Microsoft, and YouTube — are banding together to combat the propagation of ‘terrorist content’ on their consumer-facing social services. The companies announced today in an official blog post that they’ll create a shared industry database of ‘hashes’, or digital fingerprints, to aid the removal of terrorist content and imagery from their platforms. The primary aim of the move, in their own words:

By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms.

This collaboration is the result of the companies’ repeated meetings with European Union officials over drafting policies to curb hate speech and to address the continued spread of terrorist propaganda. EU officials have also been putting pressure on these technology giants to stop extremists from using their services to spread their messages, reports WSJ.

But how will this shared database actually operate? The blog post states that each company will continue to identify and remove terrorist content using its own systems. Each will contribute hashes of the most extreme and egregious terrorist images and videos removed from its service, though exactly which content qualifies may differ based on each company’s content policies.

Each piece of content is assigned a unique identifier — its hash. Identical content uploaded to another platform produces the same hash value, so matches can be detected without the content itself ever being shared. This is similar to the copyright-detection systems YouTube and other giants use to weed out infringing content.
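The companies haven’t disclosed which hashing scheme the database will use (perceptual hashes such as Microsoft’s PhotoDNA can tolerate minor alterations, while a plain cryptographic hash matches only exact copies), but the core idea — the same bytes always yield the same fingerprint — can be sketched with a standard cryptographic hash. The function name here is illustrative, not the consortium’s actual API:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Derive a stable identifier from raw bytes: identical
    content always yields an identical hash."""
    return hashlib.sha256(content).hexdigest()

# The same image bytes produce the same fingerprint on any platform,
# so a match can be found by comparing hashes alone.
image_on_platform_a = b"\x89PNG...same image bytes..."
image_on_platform_b = b"\x89PNG...same image bytes..."
assert fingerprint(image_on_platform_a) == fingerprint(image_on_platform_b)
```

Because only the fixed-length hash is exchanged, a platform can check an upload against the shared pool without transmitting or storing the offending content itself.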

Each participating company will contribute its hashes to the database, but removal won’t be automatic. If uploaded content is found to match something in the database, the individual participant will decide what to remove, and when, based on how it defines terrorist content.
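That two-step flow — a shared pool of contributed hashes, with removal decisions left to each company’s own policy — can be sketched as follows. The class and function names are hypothetical; the consortium hasn’t published its actual design:

```python
class SharedHashDatabase:
    """Illustrative sketch of the shared industry database:
    members contribute hashes of content they've removed, and
    each member checks new uploads against the pool."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def contribute(self, content_hash: str) -> None:
        # A participant adds the hash of content it removed.
        self._hashes.add(content_hash)

    def matches(self, content_hash: str) -> bool:
        # A match only flags the content; it never removes it.
        return content_hash in self._hashes

def handle_upload(db: SharedHashDatabase, content_hash: str,
                  violates_our_policy) -> str:
    """Each company applies its own content policy to a flagged
    match — a hit in the database alone never triggers removal."""
    if db.matches(content_hash) and violates_our_policy(content_hash):
        return "remove"
    return "keep"
```

The design choice worth noting is that `matches` is read-only: the database answers “have any of us seen this before?”, and everything after that — review, appeal, takedown — stays inside each company’s existing processes, consistent with the privacy assurances quoted below.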

As for those worried about privacy, the blog post states:

No personally identifiable information will be shared, and matching content will not be automatically removed. Each company will continue to apply its practice of transparency and review for any government requests, as well as retain its own appeal process for removal decisions and grievances.

The shared database will initially be populated with terrorist content already removed from Facebook’s family of platforms, and it will be updated as new terrorist images and videos are identified. These can then be hashed and added to the pool, where the other companies can access them. The post concludes:

We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.
