
The very foundations of the UK are being shaken by the threat of terrorism, which has intensified in recent months: first the Westminster attack in March, and just a few days ago the suicide bombing in Manchester. The online realm has played a part in these attacks too, with the perpetrator of the Westminster attack allegedly having sent a WhatsApp message minutes before he struck.

Understandably, therefore, world governments want social media platforms to take on more accountability and to work with them to prevent further attacks that might be facilitated through social media.

To that end, UK Prime Minister Theresa May convened a session on counter-terrorism at the G7 summit in Sicily on Friday, meeting with fellow world leaders to decide the best way forward in an everyday life over which the threat of online extremism now hangs. May secured a joint statement from the group, calling on social media firms to “do more” to combat online extremism.

In fact, the debate between governments and social media companies is a long-standing one: do you protect the rights and freedoms of individuals by not impeding their online activity, or do you protect the State and its people as a whole by requiring social media companies to disclose private information to the government?

As far as one Conservative minister is concerned, the answer clearly lies in the latter. He has suggested introducing financial penalties, or even changing the law, to push tech companies into taking more action on content that raises red flags online, promising such changes if the Conservatives win the UK general election on June 8.

These concerns go beyond bullying and child safety issues, which one might consider the more general, everyday perils of the online realm. What has stirred up anxiety right now is a newer phenomenon altogether: social media platforms being used as tools to spread hate speech and extremist propaganda in Europe.

Earlier this year, Germany took prompt action against this rising threat, with the cabinet backing proposals to fine social media platforms up to €50 million should they fail to remove illegal hate speech within 24 hours of a complaint about “obviously criminal content”, and within seven days for other illegal content.

If similar comments and promises from UK leaders are to be believed, a Conservative-majority government imposing financial penalties to enforce content moderation standards on social media could become a serious possibility.

These developments come after social media giants Facebook, YouTube and Twitter came under fire in a UK parliamentary committee report, published earlier this month, for their “laissez-faire approach” to moderating hate speech. The committee also suggested that fines for content moderation failures, as well as a review of existing legislation, should be seriously considered.

After the G7 session, May said in a statement:

We agreed a range of steps the G7 could take to strengthen its work with tech companies on this vital agenda. We want companies to develop tools to identify and remove harmful materials automatically.

Although the exact steps remain unspecified, it is clear that financial penalties are being considered a viable option by many G7 nations. Tech firms, meanwhile, are already developing and deploying tools, including AI-based ones, to automatically flag problematic content.
