OpenAI has announced the formation of a new Expert Council on Well-Being and AI to strengthen its safety practices and guide how its artificial intelligence (AI) systems interact with people in emotionally sensitive situations. The eight-member council includes psychologists, psychiatrists, and researchers specializing in digital mental health and human-computer interaction. Their job is to advise the ChatGPT maker on designing safer responses, especially when users express distress, discuss mental health issues, or form emotional connections with AI systems.
Notably, the creation of this council comes amid increasing regulatory and public pressure. In recent months, the AI trendsetter has faced intense scrutiny following lawsuits and an ongoing FTC inquiry into how AI chatbots affect children and teens. Regulators have demanded details about how companies like OpenAI monitor harmful content, protect minors, and manage emotional risks.
According to the AI firm, the council will meet regularly, providing structured feedback and evaluation of the company’s design decisions, particularly in sensitive areas like mental health, relationships, and user distress.
The timing of this move is all the more notable because the ChatGPT maker is also preparing a major policy shift. Beginning in December 2025, OpenAI will allow verified adult users to access and create erotic content through ChatGPT. The company’s CEO, Sam Altman, described the change as part of a new philosophy to ‘treat adult users like adults’.
According to Altman, the company now believes its systems (and the human oversight around them) are mature enough to support erotic expression responsibly. The AI firm also claims to have developed new tools to recognize emotional signals, detect distress, and distinguish between healthy adult interactions and harmful or nonconsensual content.
“Almost all users can use ChatGPT however they’d like without negative effects; for a very small percentage of users in mentally fragile states there can be serious problems. 0.1% of a billion users is still a million people. We needed (and will continue to need) to learn how to protect those users, and then with enhanced tools for that, adults that are not at risk of serious harm (mental health breakdowns, suicide, etc) should have a great deal of freedom in how they use ChatGPT,” Altman said in a post.
These announcements from the AI giant come at a time when the company is already facing intense scrutiny over its handling of sensitive content and user safety. In recent months, OpenAI has faced several high-profile controversies, including an August 2025 lawsuit filed by the parents of a 16-year-old who alleged that ChatGPT contributed to their son’s suicide by providing instructions on self-harm and helping him draft a suicide note. The case has intensified questions about the effectiveness of the company’s safety standards. At the same time, Elon Musk’s xAI has filed a lawsuit alleging trade secret theft, and Musk has also previously challenged OpenAI over its business practices.