
If you logged into Instagram recently, you may have noticed your feed populated with explicit and violent material. Some of the content included graphic injuries, depictions of dismemberment, and violent acts such as shootings and assaults. This was particularly concerning because many of the affected users had their “Sensitive Content Control” set to its most restrictive level, which is designed to filter out exactly this kind of material. Meta has since acknowledged and resolved the issue.

A Meta spokesperson addressed the problem, which saw disturbing content recommended to users despite their content moderation settings: “We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.” The company did not disclose what caused the error.

For its part, the social media giant has long maintained policies that prohibit content depicting extreme violence, gore, and other disturbing imagery. According to the company’s guidelines, material featuring dismemberment, charred bodies, or explicit depictions of suffering is typically removed from the platform. However, there are exceptions for content that raises awareness about significant issues such as human rights violations, conflict zones, and acts of terrorism. In these cases, warning labels are often applied to alert users before they view the material.

In this particular instance, however, Meta’s filtering systems appeared to have failed. Reports indicate that users who had never previously engaged with violent or explicit content were still shown disturbing videos in their Reels feed. Some attempted to manually adjust their settings or mark videos as “Not Interested,” but the problematic recommendations persisted. CNBC and other media outlets reported witnessing the graphic videos in Instagram Reels firsthand. Among the content that appeared were videos of people being run over by vehicles, fatal shootings, and other forms of violence, some of it labeled as “Sensitive Content” rather than removed entirely.

Unsurprisingly, the widespread appearance of violent content triggered a strong backlash on social media. Many users took to platforms such as Reddit and X (formerly Twitter) to express their concerns, with some reporting that their feeds had been overwhelmed by disturbing material. One Reddit user claimed their Reels feed was “inundated with school shootings, stabbings, beheadings, and other horrifying footage,” while others said they saw uncensored adult content in addition to violent videos. The concern is understandable: although Instagram imposes certain restrictions on underage accounts, the unexpected influx of graphic content may have inadvertently exposed minors to highly inappropriate material. Meta did not say how many users were affected by the glitch, nor did it clarify whether any specific demographics were hit harder than others.

The incident comes on the heels of Meta announcing the end of its third-party fact-checking program in the US, which will be replaced with a Community Notes system. The feature is modeled after a similar one on X, where users add context to potentially misleading posts rather than relying on professional fact-checkers. Critics argue that this move, combined with Meta’s broader push toward looser content moderation, may have contributed to the failure of Instagram’s content recommendation algorithms.