Image: Israeli Defence Forces / Flickr

Amid warnings from government officials and the general public alike, Meta is finally taking steps to combat misinformation about the devastating Israel-Hamas war. The parent company of Facebook, Instagram, WhatsApp and Threads has adopted a proactive stance to stem the proliferation of disinformation across its vast social media landscape.

A pivotal element of Meta’s response is its “special operations centre,” a task force staffed with experts fluent in both Hebrew and Arabic that monitors, assesses and responds to the evolving situation in real time. This specialised team serves as a dynamic line of defence, quickly removing content that violates the company’s Community Standards and Community Guidelines and thereby reducing the risk of misinformation spreading further.

In its official announcement, Meta disclosed that it removed or flagged more than 795,000 pieces of content in Hebrew and Arabic within three days of the Hamas terrorist attack on Israel. The company has also temporarily expanded its violence and incitement policy, in particular to prioritise the safety of individuals taken hostage by Hamas. This move underscores Meta’s commitment to protecting the privacy of these victims, even when information about them is shared with the best of intentions.

Meta’s strategy also involves reducing the potential spread of borderline or violating content by adjusting its content recommendation algorithms. The platform is lowering the threshold at which its automated systems take action, minimising the inadvertent amplification of potentially harmful narratives across its suite of platforms.
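In practice, “lowering the threshold” typically means acting on lower classifier confidence scores than usual. Meta has not published implementation details, so the sketch below is purely illustrative: the function name, threshold values and scoring scale are assumptions, not anything drawn from Meta’s systems.

```python
# Hypothetical sketch of threshold-based moderation, illustrating what
# "lowering the threshold" for automated action could look like.
# All names and values here are assumptions for illustration only.

NORMAL_THRESHOLD = 0.90   # confidence required to act in normal operation (assumed)
CRISIS_THRESHOLD = 0.70   # lowered threshold during a crisis period (assumed)

def should_limit_distribution(violation_score: float, crisis_mode: bool) -> bool:
    """Return True if content should be withheld from recommendations.

    violation_score: a classifier's estimated probability (0-1) that the
    content violates policy. Lowering the threshold means acting on lower
    confidence scores, trading some precision for broader coverage.
    """
    threshold = CRISIS_THRESHOLD if crisis_mode else NORMAL_THRESHOLD
    return violation_score >= threshold

# Example: a post scored at 0.75 would be demoted in crisis mode but not otherwise.
print(should_limit_distribution(0.75, crisis_mode=False))  # False
print(should_limit_distribution(0.75, crisis_mode=True))   # True
```

The trade-off such a change implies is that more borderline content gets caught, at the cost of more false positives on legitimate posts.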

While Meta is sparing no effort to counter misinformation, the European Union (EU) has been actively pressing social media companies to combat false narratives and illegal content. This comes after Commissioner Thierry Breton issued a warning to Mark Zuckerberg, CEO of Meta, and Elon Musk, owner of X (previously Twitter), regarding the spread of misleading and harmful content related to the Israel-Hamas conflict. Breton emphasised the necessity for platforms to comply with the EU’s Digital Services Act, which mandates the removal of illegal and harmful content and can result in substantial fines for non-compliance.