Barely days after its co-founder announced his departure, OpenAI has made headlines again with the dissolution of its Superalignment team. According to media reports, the AI research firm has opted to fold the team's responsibilities into its broader research efforts, with its members absorbed into other teams and groups.
Established in July 2023, the Superalignment team was tasked with addressing a hypothetical but potentially existential threat: the emergence of AI surpassing human intelligence. With the ability to learn and act independently at a superhuman level, such “superintelligent” AI systems could pose a grave risk to humanity if not carefully controlled. The Superalignment team’s mission was to develop methods for aligning the goals of such powerful AI with human values (essentially ensuring their safety and usefulness to humanity). At the time, the firm said the team would focus on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
For its part, OpenAI had committed a sizable share of its resources to this endeavor, pledging 20% of its computing power to the initiative over a period of four years. Nonetheless, throughout its tenure, the Superalignment team encountered various obstacles, including resource constraints and logistical hurdles, and these practical challenges persisted despite that compute pledge.
The disbandment of the Superalignment team coincides with the departure of its founders, Ilya Sutskever and Jan Leike, both esteemed figures within OpenAI. Sutskever, the company’s chief scientist, and Leike, a prominent researcher, played instrumental roles in shaping the organization’s AI safety initiatives. Leike’s resignation message, posted on X, paints a concerning picture of internal conflict at OpenAI. He expressed frustration over a perceived shift in priorities, alleging that “safety culture and processes have taken a backseat to shiny products.” This raises questions about whether OpenAI is adequately prioritizing safety research in its race to achieve artificial general intelligence (AGI).
I joined because I thought OpenAI would be the best place in the world to do this research.
However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.
— Jan Leike (@janleike) May 17, 2024
While the Superalignment team has been disbanded, OpenAI maintains that research on AI safety risks will continue. Responsibility for this area will now fall to John Schulman, co-leader of the team focused on fine-tuning AI models after training. It remains unclear, however, whether this integration will effectively address the concerns raised by Leike and others.