Facebook is once again the subject of an investigation by a British publication, one that accuses the social media giant of failing to remove offensive content promoting terrorism and child pornography even after repeated complaints. The development is of critical importance for the platform, which could now be at risk of prosecution in the U.K.
According to an independent investigation by The Times, Facebook is still struggling to stem the spread of illegal and extremist content on its platform. A reporter at the British newspaper created an anonymous bogus profile, pretending to be an IT professional in his thirties. He then began adding scores of ISIS supporters as friends, while also joining groups that promoted child pornography. Describing the experience, he wrote,
It did not take long to come across dozens of objectionable images posted by a mix of jihadists and those with a sexual interest in children.
He discovered multiple pieces of content which should have been taken down and prevented from spreading across the network. The illegal content found on the social network included:
an Islamic State beheading video made by an ISIS supporter, several violent and paedophilic cartoons, a video of a child apparently being sexually assaulted, and propaganda posters celebrating the recent terrorist attacks against Christians in London and Egypt (in which 91 people were killed).
In addition, the reporter found instances where Facebook's algorithmic timeline had begun promoting some of this extremist material, inviting users who viewed or interacted with such content to join the groups and profiles that had published it. While the algorithms help improve your Feed experience, they come with a drawback: there is no control over the content being propagated, so the AI can inadvertently encourage the spread of illegal material.
The publication further mentions that repeated attempts to report the aforementioned content failed, as the moderators did not acknowledge its nature. The moderators apparently told the reporter, who was still using the fake profile, that the imagery and videos did not violate Facebook's community standards.
However, the moderators took immediate action once the reporter identified himself, removing some of the paedophilic cartoons, though the videos and imagery with pro-jihadist messages were initially left untouched. All of the content in question has since been removed from the platform. Commenting on the situation, Facebook Vice President of Operations Justin Osofsky said (via Reuters),
We are grateful to The Times for bringing this content to our attention. We have removed all of these images, which violate our policies and have no place on Facebook.
We are sorry that this occurred. It is clear that we can do better, and we’ll continue to work hard to live up to the high standards people rightly expect of Facebook.
Though Facebook has acknowledged the platform's fault and vowed to do better next time, evidence of its delayed response has already reached a Queen's Counsel in the U.K. Julian Knowles, QC, has viewed all the material and believes the content crosses legal boundaries, potentially breaching the country's Terrorism Act 2006, which prohibits publications and speech that encourage terrorism.
This is, however, not the first time Facebook has faced flak for failing to remove child exploitation imagery, and it builds upon a previous investigation of this kind, also conducted by a British publication. The BBC used Facebook's report button to flag as many as 100 lewd images that violated the company's policy on what is and isn't permitted on the platform. However, the images reappeared on the platform, wreaking havoc and inviting possible lawsuits against the website.
Further, we recently reported that the U.K. government is urging internet giants to do more, such as developing new tools, to tackle the spread of extremist content online. The companies attending the meeting included Google, Microsoft, Twitter, and Facebook, along with some smaller internet companies. In the wake of recent terrorist attacks, the government is pressuring these giants to buckle up and automate the reporting process, and has asked them to crack down hard on the spread of terrorist and extremist content. Facebook, for its part, debuted new tools earlier last week, including AI-powered photo-matching technology, to tackle revenge porn on the platform.
The government is also considering introducing a new law to prosecute such internet companies in the event that terrorist content is not taken down immediately after it has been reported. The proposal has already been presented to ministers and is expected to be adopted in this legislative period. However, it raised the question among many ministers of how such a law could be enforced against companies headquartered overseas, as most of the internet companies in question are.
We’ve contacted Facebook for additional information and will update you once we hear back. Do you think Facebook still has a long way to go in refining its News Feed standards? Share your thoughts in the comments below.