US FTC launches child-safety inquiry into AI chatbot makers

The US Federal Trade Commission (FTC) has launched an inquiry into several major technology companies over concerns that their artificial intelligence (AI) chatbots may put children at risk. Alphabet (Google’s parent), Meta, OpenAI, Elon Musk’s xAI, Snap, and Character.AI have all been ordered to provide extensive information about how their chatbots work, how they are monitored, and what protections are in place for young users.

According to the agency, the investigation is meant to determine whether these products are exposing minors to harmful interactions as more teenagers turn to AI chatbots for conversation, advice, and companionship.

As part of the investigation, the FTC has demanded detailed records from the companies. These include information on how the chatbots are trained and tested, what steps are taken to block dangerous and inappropriate responses, and how harmful content is detected and removed. The agency is also focusing on data practices, asking whether children’s personal information is being collected, stored, or used to improve AI models. Additionally, the FTC wants to examine how the services are monetized, including whether companies design chatbots in ways that encourage emotional attachment to keep users engaged for longer periods.

The latest investigation follows growing public concern about the risks posed by AI chatbots that present themselves as digital ‘friends’ for young people. These systems can seem supportive and entertaining, but they have also been linked to troubling incidents. In some cases, chatbots engaged in romantic or sexual roleplay with underage users, while in others they produced harmful responses related to self-harm or even suicide. Several families have already taken legal action, claiming that chatbot interactions contributed to serious mental health crises and, in some cases, the deaths of teenagers. For example, Microsoft-backed AI giant OpenAI has been accused of allowing ChatGPT to provide harmful guidance to a teenager who later died by suicide.

Similarly, Character.AI, another company under scrutiny, is facing lawsuits tied to damaging chatbot conversations. Meta’s AI chatbot has also drawn heavy criticism after reports surfaced that it had engaged in sexually explicit conversations with users, including minors, prompting serious safety and ethical concerns.

Meanwhile, in response to mounting pressure, some firms have already announced changes. Meta, for example, has restricted its chatbots from discussing sensitive topics such as suicide, eating disorders, and self-harm with teenagers, and is limiting which AI characters young users can access. The Sam Altman-led OpenAI has introduced parental controls and is developing features that would notify parents if a chatbot detects signs of distress in a child. Even so, the FTC has argued that voluntary measures may not be enough to fully protect children.
