Safety people who were fired in 2024
- Leopold Aschenbrenner and Pavel Izmailov (by Apr 11)
Safety people who left in 2024, largely for safety or integrity reasons or because OpenAI prevented them from doing valuable work
- William Saunders (Feb 15): departure announcement (LessWrong), Senate testimony, interview (Business Insider), interview (TIME)
- Daniel Kokotajlo (by Apr 17): departure announcement (LessWrong), Fortune, Vox, NYT
- Jan Leike (Superalignment co-lead) (May 16): departure announcement (X)
- Gretchen Krueger (May 14): departure announcement (X)
- Carroll Wainwright (May 31): departure announcement (X), claim that OpenAI is untrustworthy (X)
- Todor Markov (Jun 5): claim that OpenAI is untrustworthy (reported, NYT); claim that OpenAI is untrustworthy (X)
- Richard Ngo (Nov 15): departure announcement (X); claim that OpenAI is untrustworthy (LessWrong)
- Rosie Campbell (Nov 29): departure announcement (X), departure announcement (Substack)
Safety people who left for ambiguous reasons in 2024
- Ilya Sutskever (cofounder + board member + chief scientist + Superalignment co-lead) (May 14): departure announcement (X), board crisis reporting
- Ashyana-Jasmine Kachra (Jun)
- Collin Burns (Jul)
- Jeff Wu (Jul): Vox
- Steven Bills (Jul)
- Yuri Burda (Jul)
- Jonathan Uesato (by Aug)
- Girish Sastry (Aug)
- Jan Hendrik Kirchner (Aug)
- Harri Edwards (Aug)
- Neil Chowdhury (by Oct 23): departure announcement (X)
- Miles Brundage (Policy lead + AGI Readiness lead) (Oct 25): departure announcement (X), departure announcement (Substack)
- Lilian Weng (Safety Systems lead) (Nov 15): departure announcement (X)
- Steven Adler (Nov): departure announcement (X)
The above lists do not include safety people who left during the 2024 exodus who I believe left for normal reasons or who unequivocally say they did: Ryan Lowe in March, Cullen O’Keefe in April, and cofounder John Schulman in August. (But Schulman left to work on safety at a rival company.)
They also do not include the non-safety-focused executives who left during the 2024 exodus: Sherry Lachman by May; Diane Yoon and Chris Clark in May; Peter Deng in July; Mira Murati, Bob McGrew, and Barret Zoph in September; and Alec Radford in December.
They also do not include the safety people who left before 2024, notably including the Anthropic cofounders (who left in 2020–2021 due to safety concerns) and Geoffrey Irving (2019).
Note also that in July 2024, Aleksander Madry was removed from his role as head of the Preparedness team.
Almost all of the people working on existential safety at OpenAI left in 2024, including all of the senior people.
People leaving OpenAI often said they would do more valuable work outside OpenAI. This is surprising, since (1) OpenAI had powerful models and lots of compute and (2) improving safety at OpenAI was valuable because it was perhaps the leading AI company. This suggests those people were heavily sidelined internally.
The departures include many former members of the Superalignment team. OpenAI announced the Superalignment team in July 2023. OpenAI said "We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment." This commitment was widely misinterpreted as 20% of OpenAI's compute for four years; in fact it meant that, within four years, OpenAI would put a total of 20% of the compute it had secured as of mid-2023 (an unspecified amount, likely very little in 2027 terms) toward aligning superintelligence. In May 2024, the leaders of Superalignment, Ilya Sutskever and Jan Leike, resigned and Superalignment was dissolved. Leike said "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done." OpenAI had reportedly denied Superalignment's requests for compute and failed to give it a schedule for receiving compute. In July and August, several of the remaining former Superalignment members left OpenAI.
Other relevant OpenAI history includes:
- Board crisis: the OpenAI board fired CEO Sam Altman because it did not trust him, reportedly due to his frequent lying and his illegitimate attempt to force out board member Helen Toner. After pressure from employees, the board resigned and Altman returned.
- Altman had previously forced out board members including Reid Hoffman and Shivon Zilis in early 2023.
- NDAs & equity: departing employees were required to sign a nondisparagement agreement or lose their vested equity. When this became public, OpenAI committed to changing the policy but lied about its past practices.