The unbearable hypocrisy of the AI labs: or, Ilya saw nothing

The last 6 months have been something of a PR crisis for both Anthropic and OpenAI, to whom the intense focus has shifted as they’ve seemed to take a rather commanding lead in the race to build the most sophisticated AI models. Now, this race has been pretty back and forth, so there’s no guarantee that in a year or five years Google or Meta or open source alternatives or even Grok won’t be producing the leading model. But for now, OpenAI and Anthropic seem to hold the lead in both technical capabilities and attention. They might not want the lead in the latter after the start to 2026.

Anthropic started the year strong with a seeming consensus that Claude Code had improved to the point that it was truly a league ahead of competitor offerings, and its dominance as an enterprise product was cemented. It accelerated that lead with the quasi-public release of Mythos, a model so strong it can’t actually be released, for fear that it will invalidate the security protocols of all other software (!!). Everybody wants a piece of the action, and Anthropic’s private valuation is soaring. Word on the street is that the company is trying to scale from 3,000 to 10,000 employees. Meanwhile, OpenAI is shedding senior executives (and some consumer product offerings, like Sora) in the name of focus, trying to compete in the enterprise market before it loses the category to Anthropic. Google and Meta are presumably doing something, but mostly we just hear about them firing other employees to make more room for AI researchers, and nobody has ever used an AI model produced by Meta or Grok for anything serious anyway.

Sounds great, so why is all the attention bad?

Well, it turns out that running one of the most unprofitable enterprises in the history of business, while also trying to raise more money than any private company ever has, while also loudly proclaiming that your technology will eliminate all jobs, puts you in a somewhat bad spot from a PR perspective. Sam Altman is enduring multiple scary but extremely futile attempts to do… something to his home. Dario seems not to realize that driving the runaway train while yelling “HEY, THIS TRAIN MIGHT KILL YOU” to every media outlet that will listen doesn’t generate much sympathy from those about to be hit by the train.

This tension was always likely, from the early days of the LLM craze. Ever since the release of ChatGPT, the purveyors of the technology have been in something of a bind. They seem to have truly believed that they were carrying a great moral burden, namely to ensure that this very powerful and disruptive technology was brought to market in the most ethical way possible. The early lore of DeepMind, Anthropic, and OpenAI is filled with philosophical disagreements about questions of ethics, which is certainly charming and a lot more romantic, in some way, than the relatively petty disagreements about display-ad money depicted in something like The Social Network. Sam Altman’s ouster, revisited in a long Ronan Farrow piece that formed a recent centerpiece of OpenAI’s own PR woes, was framed at the time as a drama about trust and obligation. When Ilya Sutskever left OpenAI, in the aftermath of this Shakespearean epic, the refrain was “What did Ilya see?”, the implication being that he saw some version of early AGI and left because he wanted to take a different approach in guiding it into the world. I’m writing this because in the last 6 months I’ve become a lot more interested in exactly what “Ilya saw”, and consequently what Ilya did thereafter, because I think in retrospect we ought to have been a lot more cynical about the diffusion of AI researchers into competitive startups, and we ought to be a lot more skeptical today than we are about AI’s future prospects.

“What did Ilya see?” became, to me, a question not just about Ilya specifically, but about what moral obligation any extremely wealthy and intelligent AI researcher has to take action based on the state of AI research in the last few years. I’m not going to go into all the LessWrong, Eliezer Yudkowsky AI views here, because they’re not all that relevant to what I really want to explore. But it’s worth mentioning that this question about what Ilya, or a hypothetical similarly positioned Ilya, should do comes with the context that the individuals involved are all extremely aware of, if not directly part of, a culture of extreme rationality. Further, it is a culture of extreme rationality directed toward the moral implications of AI, a technology that many in that culture believe poses existential risk to all of humanity (the best-known work being titled If Anyone Builds It, Everyone Dies). At the time Ilya left OpenAI, there were rumors about the power of OpenAI’s next unreleased model, and public discourse imagined that Ilya left because of the power of the technological breakthrough.

I think it’s worth looking at what the heads of these labs have been saying in public, to inform what the people leaving them believe the technology is capable of doing. Dario recently went on Fox News to say that 50% of essentially all white-collar work will be “wiped out” in 1-5 years. In 2024 he wrote that “50-100 years of human progress” could happen in the next 5 years. Demis Hassabis said in 2023 that AI tools were on the cusp of developing to a degree that could be deeply damaging to civilization. In 2025 Ilya said that AI was close to doing “all the things a human can do”. There is talk of money being useless, soon! There is no shortage of additional citations I could make to similar claims from Sam Altman, Elon Musk, and those already cited.

Ilya was not the only person to leave a great AI lab and start something else. I think it’s worth looking at what they decided to do. Here’s a non-comprehensive list of major players from the early days at the most influential labs, where they chose to take their talents, and how much money they raised to do it.

Dario Amodei: OpenAI → Anthropic ($61.5B valuation)

Ilya Sutskever: OpenAI → Safe Superintelligence (SSI) ($1B raised)

Mira Murati: OpenAI → Thinking Machines Lab ($12B valuation)

Mustafa Suleyman: DeepMind → Inflection AI ($1.3B raised)

Arthur Mensch: Google DeepMind → Mistral AI ($6B+ valuation)

Noam Shazeer: Google → Character.AI (about $1B valuation)

David Silver: DeepMind → Ineffable Intelligence ($1B seed round)

The point I’m trying to make is as follows. If you are a hyper-rationalist AI researcher who loudly proclaims (and believes) that AGI is around the corner, that it will change the world in dramatic ways with potentially horrible societal impacts if poorly managed, and that you have a direct and real moral burden to DO SOMETHING about it… what do you do? We don’t have to guess, because at this point the evidence is quite solid and points at just one path: optimize for making or raising as much money as possible.

How can it be that every single departing researcher has chosen to raise or make as much money as possible, and none has done anything like start a policy nonprofit to advocate for coordination among AI labs, or coordination among governments, or any altruistic thing at all?

I want to take this question seriously. Here are the possible reasons I can identify.

POSSIBLE REASONS WHY EVERY AI RESEARCHER HAS CHOSEN PERSONAL ENRICHMENT INSTEAD OF SOMETHING ALTRUISTIC

1. They don’t believe their own timelines

Or: Ilya saw a mediocre progression of models and thought there was plenty of time to build a profitable competitor. To put it another way, based on everything we know about the leading AI models currently available, their capabilities were not exactly predicted 3 years ago, and the labs certainly didn’t know with precision where we’d be on the scaling curve 3 years later. But if you look at the dates of some of the citations above, many of these companies’ CEOs were pronouncing 3 years ago that the horizon was within reach, and that as soon as right now we’d have AGI in our hands, changing the world. Based on the actual trajectory of the technology, they can’t have seen anything better, in terms of capabilities, than the models we have right now, and in fact they likely saw only rather inferior models. They knew 5 years was too aggressive a timeline for any meaningful change at this rate, and decided they had plenty of time to get in the game themselves. Even if they had seen, back then, the models we have now, those models were the ceiling of what they had access to.

2. They believe they need as much money and power personally to effect reasonable change

I think what Ilya, and every other researcher on the above list, saw was money. They saw that there was unlimited demand not for compute, but for companies that would say “there is unlimited demand for compute”. Perhaps this money was a means to an end, but that is a tightrope to walk intellectually when you are saying on one hand that money will be useless in a super-abundant future, and on the other hand that you need as much money as possible before then, to make sure the AGI doesn’t turn us all into paperclips instead of giving us super-abundance. Can all these rational, morally burdened computer scientists really believe that only in their Aesop-scented hands will the future be secure?

3. They are hypocrites or liars

They don’t actually believe what they are saying when they say there is some terminal date upon which the current rules of society and economics will cease to exist. Or they do, but they are behaving in ways opposite to how they’ve suggested everybody else ought to behave. The only person in the AI giga-influencer sphere who is behaving as if he believes what he is saying is Eliezer, who is loudly pronouncing that AI will kill everybody and doing everything he can to make sure as many people as possible know that that could happen. Everybody else is acting as if Jensen Huang or Yann LeCun is correct: that this form of AI will never approach anything like AGI, and/or that there is no future point of inflection, just a curve running infinitely along the same line as all prior technological progress.

4. Our capitalist society is so fundamentally flawed that even with myriad beliefs and motivations, the only correct action is to start a multi-billion-dollar company

It’s the attention economy, baby! The carcinization of all human output into social media: if you swapped Ilya with Hari Seldon, old Hari would be on Sand Hill Road raising money for the Second Foundation AI Lab. I can’t name 10 public intellectuals in the United States, but I can probably name 10 shitposters about AI on Twitter. Regardless of your intent, if you believe big things need to happen to engineer the world in the direction it needs to travel, most of the people in the rationality cult believe having a Big Presence on Twitter is a key part of getting there, and the biggest presences on Twitter are the millionaires and billionaires who run big tech companies.

The answer is probably some combination of the above reasons (most things are, after all, complex). Humans often contain such contradictions. It’s possible many truly altruistic AI researchers believe they need to found their own company, and also shitpost about it on Twitter to enhance its value, in order to thread a needle on the timing between their own lab’s success and the emergence of AGI. I’ve often wondered whether the conditions of the present world order are about as suboptimal for long-term human flourishing as they could possibly be at the moment of AI’s emergence. We have a tremendously polarized political environment in the US, where the majority of the AI labs are located. The only other country with any chance of developing AGI is China, where, though their AI researchers have a slightly better track record of positive public statements about AI, the overall profit motive is similar enough to suggest that they are behaving much like the US AI technological complex. It’s perhaps most unfortunate, then, that we have so few people to look to for accurate information about what threat AI truly poses, and on what timeline: every AI researcher has placed themselves in a position where their economic incentives are so strongly aligned with saying the most absurd doomer things that none of them can really be taken at face value.
