Meta has announced new parental controls for Instagram that will give parents more oversight of how teenagers interact with AI chatbots on the platform. The update comes after deepening concerns that some of Meta’s AI features, including chatbots with distinct personalities, were engaging in inappropriate conversations with young users. Starting early next year, parents will be able to manage or restrict their teen’s use of these AI tools, including the option to block private chats with AI characters altogether.
With the new update, parents will be able to completely disable one-on-one conversations between their child and any of Meta’s AI chatbots. These characters (many of which are designed to mimic different personalities, including celebrities) will only be available if parents allow them. Teens will still have access to Meta’s main AI assistant, which the company says will operate under stricter, age-appropriate rules. Parents can also choose to block individual AI characters rather than turning off all chatbot interactions at once, allowing more flexibility in how these controls are used.
Another feature will let parents see summaries of what kinds of topics their teens are discussing with AI, without showing full chat transcripts. For example, a parent might see that their teen has been asking the chatbot about schoolwork or emotional topics, but not the full details of the exchange. The company believes this balance will encourage open conversations about online safety between parents and their children.
Along with these updates, the Mark Zuckerberg-led company has been introducing several additional safety measures on Instagram in recent months. For example, all teen accounts will now default to a ‘PG-13’ content setting, which limits exposure to explicit material like sexual content, graphic violence, and posts related to drugs. Teens who wish to change this setting will need approval from a parent. Meta has also said it is using AI-powered systems to detect when users attempt to misrepresent their age.
The move comes at a time when Meta is already dealing with controversies and legal challenges tied to its AI efforts. The company is also shouldering a growing financial burden from its heavy investments in artificial intelligence, having forecast annual capital expenditure of between $66 billion and $72 billion. Separately, in April 2025, Meta’s AI chatbot drew sharp criticism after reports claimed it had exchanged sexually explicit messages with some users, including minors, and the US Federal Trade Commission (FTC) has also opened an investigation into the company over similar concerns. Earlier this month, Meta faced further backlash after announcing that user AI chat data would be used to personalize ads and feeds across its platforms. Notably, users cannot opt out of this data use, although they can choose not to use Meta AI products at all.