
Some of the biggest video content-hosting websites, like Facebook and Alphabet-owned YouTube, have quietly started automating the filtering of extremist content on their platforms.

Social media giants and content-hosting websites are often criticized for failing to act properly on content produced by or within terrorist groups. They are at a loss when someone accuses them of intentionally helping extremist groups spread their political agenda. Governments also sometimes try to impose their censorship rules and pressure the companies to remove content (which doesn't always help) as attacks by extremists proliferate.

And so, this automation counts as a major step for the Internet companies, who are eager to curb the spread of extremist propaganda on their websites. In the attacks in Syria, Belgium and, most recently, Paris, social media and online content played a key role in planning and propaganda: the terrorist organizations reportedly used Twitter to communicate and YouTube to convey their messages.

On similar grounds, leading social media networks including Facebook, Twitter and YouTube are currently being sued by the father of a victim of the Paris attacks. He has accused the Internet giants of aiding and abetting the terrorists in their agenda, which isn't entirely fair: they cannot keep a lookout for each and every violent post on their websites, as they receive millions of posts each day.

The Internet companies might not be completely transparent, but they are now more actively trying to remove violent content from their websites. They are repurposing technology originally developed to identify and remove copyrighted content. For example, YouTube automatically assigns a unique digital 'hash' fingerprint to each video uploaded to its platform, then checks new uploads against the existing fingerprints and rapidly removes any match.
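The article doesn't reveal the companies' actual implementations, but the basic idea of hash-based matching can be sketched in a few lines. The example below is a simplified illustration using a cryptographic SHA-256 digest as the fingerprint; the banned-hash value and function names are hypothetical, not anything YouTube or Facebook has published.

```python
import hashlib

# Hypothetical database of known-bad fingerprints.
# (This value is the SHA-256 digest of the bytes b"test".)
banned_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute a SHA-256 digest as a stand-in for a video 'hash' fingerprint."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches a known-banned one."""
    return fingerprint(upload) in banned_hashes

print(should_block(b"test"))   # True: fingerprint is in the banned set
print(should_block(b"other"))  # False: no match, upload goes through
```

A set lookup like this is O(1) per upload, which is why fingerprint matching scales to millions of uploads a day in a way that human review cannot.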

As Reuters reported,

YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material.

This technology will now help catch attempts to re-upload content already flagged in the database as unacceptable, and automatically block those videos.

A push for censorship

Amid growing violence and pressure from governments all over the world, companies are increasingly concerned about online radicalization. The private non-profit Counter Extremism Project (CEP) wants content-hosting websites to adopt a content-blocking system regulated by an outside authority.

Facebook, YouTube, Twitter and CloudFlare were among those who attended a meeting to discuss the proposal. But all of them expressed wariness about letting an outsider decide what is or isn't acceptable on their platforms.

No Internet giant has so far confirmed involvement with the CEP. Facebook, however, said in a statement that it is,

[They are] exploring with others in industry, ways we can collaboratively work to remove content that violates our policies against terrorism.

A Twitter spokesman also said that the company was still evaluating the CEP’s proposal and had not yet taken a position.

So, the companies are quietly pushing their own agendas to stop the spread of violence and extremist ideas on their platforms, and they are not planning to discuss the methods or techniques they use to ban uploaded content. But people close to the development believe it is similar to the 'hash' technology discussed above: uploads are checked against a database of banned videos, such as beheadings or lectures inciting violence.
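One likely reason the companies stay vague on the details: an exact cryptographic hash only matches byte-identical files, so a repost that has been re-encoded or trimmed would slip past it. Production systems are generally understood to use perceptual fingerprints instead, which tolerate small changes. The toy sketch below illustrates that idea with a simple "average hash" over brightness values and a Hamming-distance comparison; the specific values and threshold are made up for illustration and are not any company's actual method.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean brightness."""
    mean = sum(pixels) / len(pixels)
    return sum((1 << i) for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Count differing bits between two hashes; small distance = similar content."""
    return bin(a ^ b).count("1")

# A banned frame, and a slightly re-encoded repost of the same frame.
banned = average_hash([10, 200, 30, 220, 40, 210, 20, 230])
upload = average_hash([12, 198, 33, 215, 41, 205, 22, 228])

print(hamming(banned, upload) <= 2)  # True: matches despite the re-encoding
```

Because each pixel is only compared to the frame's own mean, uniform brightness or compression noise barely changes the fingerprint, which is exactly the property an exact hash lacks.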

There is also no clarity on how much human review goes into deciding which videos to ban, or how the existing database of banned videos was assembled in the first place. But the technology will probably evolve as the companies continue to discuss the matter internally and internationally.

Why so secretive!?

Most Internet-based content websites have relied on user flagging: a flagged video is reviewed and removed if deemed violent or otherwise inappropriate for the platform. Many still do.

But the companies are now moving towards a harsher automated process that removes videos immediately after upload. There is no lag for flagging or review, and that's part of the reason they don't want to discuss it out loud: they are engaging in censorship and, in effect, taking away part of your freedom to upload content.

They also fear that if they talk about the automation technology being used to block content, terrorist groups may probe their systems and find a way around the censorship.

This is still an evolving industry practice that will likely be adopted by most Internet-based companies. The Counter Extremism Project also publicly described its content-blocking system for the first time last week and is now urging the big Internet companies to adopt it.

