With continued global outrage over the adverse impact social media is having on the younger generation, Meta is now introducing additional safety measures on Instagram, specifically for its teenage users. With the latest update, all accounts belonging to users under 18 (Teen Accounts) will automatically be governed by a PG-13-style content framework. This means that posts containing strong language, sexual content, references to drugs, or risky stunts will generally be hidden or made less likely to appear in their feeds. Additionally, accounts that frequently share material considered inappropriate for teens will be blocked from interacting with these users.
Along with these content changes, Instagram is rolling out new parental control features. Parents can now activate a ‘Limited Content’ mode, which further restricts the posts their teens can see and limits certain interactions on posts, including commenting and viewing comments. The mode also restricts the use of AI features within the app, aiming to keep teens’ online interactions age-appropriate. According to the social media giant, these updates were shaped by feedback from parents worldwide, who helped evaluate millions of posts to determine what content should be considered age-appropriate for teenagers.
The new measures are being introduced first in the United States (US), the United Kingdom (UK), Australia, and Canada, with plans for a broader global rollout in 2026. Meta has also indicated that similar protections will eventually be extended to Facebook accounts for teenage users.
The latest development comes as the Mark Zuckerberg-led company continues to face growing concern from mental health advocates and parents over the potential negative effects of its social media platforms on teenagers. In 2023, for example, a group of 33 US states filed a federal lawsuit against Meta, accusing the company of building addictive features into its platforms that exploit the vulnerabilities of children and teenagers. In response to these challenges, the company has introduced a series of teen safety measures, of which the enhanced parental controls and PG-13-style settings for Teen Accounts are the latest.
However, despite these efforts, the social media giant continues to face legal scrutiny. In October 2025, New York City filed a 327-page federal lawsuit against Meta and other major social media companies, claiming that their platforms encourage addictive use among children and contribute to a youth mental health crisis. The company’s AI initiatives have faced similar challenges: in April 2025, Meta’s AI chatbot came under heavy criticism after reports emerged that it had engaged in sexually explicit conversations with users, including minors. More recently, the US Federal Trade Commission (FTC) launched an investigation into the company over a similar issue.