OpenAI disbands Superalignment team responsible for long-term AI safety
OpenAI has disbanded its Superalignment team, which focused on long-term AI risks, just a year after its formation. According to media reports, some team members are being reassigned to other teams within the company.
This development follows the recent departures of team leaders Ilya Sutskever and Jan Leike from the Microsoft-backed startup. Leike criticised OpenAI’s shifting priorities, stating that “safety culture and processes have taken a backseat to shiny products.”
The Superalignment team, announced last year, aimed to develop ways to steer and control highly advanced AI systems, and OpenAI had pledged to dedicate 20% of its computing power to the effort over four years. OpenAI did not comment directly on the disbanding, but CEO Sam Altman expressed regret over Leike’s departure on social media.
Both Sutskever and Leike announced their exits on social media, with Leike later posting a longer thread elaborating on his disagreements with OpenAI’s leadership over the company’s core priorities.
Leike emphasised the need for a safety-first approach to AGI development, saying his team had increasingly struggled to secure the computing resources needed for crucial research. He expressed concern about the company’s trajectory and called for greater focus on security, safety, and societal impact.
These departures come after a leadership crisis involving Altman, who was temporarily ousted by the board last November but reinstated following significant internal and external backlash. Amid these changes, OpenAI has continued to ship new AI models and ChatGPT updates, most recently GPT-4o.