On Friday, Sam Altman was fired as CEO of OpenAI, arguably the most important artificial intelligence company in the world. The announcement caught almost everyone off guard, including its biggest investor, Microsoft, and carried a loaded insinuation of misconduct by stating that Altman had not been “consistently candid in his communications.”
After an OpenAI executive stated the removal was “not made in response to malfeasance,” the original statement began to look like a false pretext, lending credence to swirling rumors of a boardroom coup. An attempt was made to bring Altman back after an employee revolt, but negotiations fell through Sunday night. Altman would seemingly not return, and his initial replacement, CTO Mira Murati, would also leave. Emmett Shear, co-founder of Twitch, would step in as her replacement. The coup d'état was complete.
The apparent conclusion to this drama left many questions unanswered. If there was no misconduct, what justified ousting Sam Altman? Why the flimsy pretext? And what explained the seemingly random choice of Emmett Shear?
On Monday, OpenAI’s chief scientist Ilya Sutskever—widely considered to be the leader of the coup—began to express regrets, then signed a letter calling for the board to resign for, among other things, suggesting that destroying the company would be “consistent with its mission.” This threw into question whether he led or followed.
New York Magazine tech journalist Kara Swisher posted a thread on Twitter stating that the central tension seemed to arise from the “profit versus nonprofit” factions of OpenAI. “[The] profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds,” she wrote.
While Altman also harbors concerns about AI risk, he has not been shy about accelerating AI development; it was under his stewardship that the current AI boom took hold.
None of it makes sense until you understand the motivating force behind the actions: a reactionary ideology seemingly shared, to varying degrees, by each member of the coup. They all subscribe to the idea that superintelligent AI risks wiping out humanity, a key tenet of an ideology called “effective altruism” (EA), a movement whose advocates say it is about doing the greatest possible good and thinking about what’s best for humanity in the long run.
Critics, however, liken it to a “dangerous cult” fixated on AI wiping out humanity. The remaining board members all have explicit or tacit ties to the movement.
When asked about these connections last week by VentureBeat, an OpenAI spokesperson said, “None of our board members are effective altruists,” adding that “non-employee board members are not effective altruists.” This claim was quickly debunked.
OpenAI board member Tasha McCauley sits on the UK board of the Centre for Effective Altruism, alongside its founder and leader William MacAskill, author of the EA bible What We Owe the Future. McCauley is also a board member of the Centre for the Governance of AI (GovAI), along with fellow OpenAI board member Helen Toner. GovAI was spun out of the Future of Humanity Institute, founded by Nick Bostrom, the father of the modern AI existential-risk debate.
Toner has given multiple talks at EA conferences over the years and worked at Open Philanthropy, the charitable organization of noted EA Facebook billionaire Dustin Moskovitz. Open Philanthropy has injected $300 million into EA-aligned organizations focused on AI existential risk. Toner likely directed a large part of that funding while there, since her role was to “scale-up from making $10 million in grants per year to over $200 million,” according to a profile on the Future of Humanity Institute website.
(Toner would later join OpenAI’s board of directors, replacing Open Philanthropy co-founder Holden Karnofsky, who had secured the board seat after giving $30 million to OpenAI.)
As for the other board members, Sutskever has long expressed the belief that sentient AI is possible and poses an existential threat. A recent profile of the OpenAI drama in The Atlantic reported that Sutskever acted like a “spiritual leader” and had even “commissioned a wooden effigy from a local artist that was intended to represent an ‘unaligned’ AI,” then “set it on fire to symbolize OpenAI’s commitment to its founding principles.”
The newly installed interim CEO, Emmett Shear, said in a recent interview that AI existential risk should make you “shit your pants.” In September, he stated he would slow the current pace of AI development from a “10 to a two.” Then there is his June 2023 tweet in which he said he’d “rather the actual literal Nazis take over the world” than “flip a coin” and risk existential doom from AI.
So, in the end, Altman’s ouster was not about misconduct; it was about ideology and the exiling of perceived heretics who had pushed the development of AI too fast. While OpenAI was founded in the spirit of developing safe Artificial General Intelligence (AGI), it was not created to destroy, prohibit, or constrain that development. A tiny group of doomsayers hijacked OpenAI and possibly changed the course of AI for the U.S. and the world. Their influence doesn’t stop here, and it deserves more scrutiny going forward.
Toner currently works at Georgetown University's Center for Security and Emerging Technology, an organization founded with nearly $100 million in donations from Open Philanthropy. The center is playing a key role in shaping the conversation around AI and risk throughout the world.
Then there are Georgetown University’s financial entanglements with the Chinese Communist Party, including the Initiative for U.S.-China Dialogue on Global Issues, and the chilling effect they have had on campus. That’s a significant detail considering China’s stated mission of leading the world in AI and its predilection for using academia as a means of industrial espionage.
Toner likely isn’t worried, though. In a June 2023 Foreign Affairs article, she pushed back on Altman’s warning that over-regulation could put China ahead, retorting that regulation wouldn’t undermine U.S. competitiveness with China, whose AI capabilities she claimed were overstated. Persecuted Uyghur Muslims may beg to differ. It’s an issue effective altruists are strangely quiet about, likely because China is key to the goal of fostering a global AI regulatory body.
Perhaps turning a blind eye is better than flipping a coin on AI doomsday.