Welcome Back,
Everyone from OpenAI to DeepSeek claims to be an AGI startup, and the way these AI startups are proliferating is starting to get out of control in 2025. I asked Futuristic Lawyer to look into this trend.
On 14 April 2023, High-Flyer announced the start of an artificial general intelligence lab dedicated to researching and developing AI tools separate from High-Flyer's financial business. Incorporated on 17 July 2023, with High-Flyer as the investor and backer, the lab became its own company, DeepSeek.
But while calling yourself an AGI research lab has become fashionable marketing in recent years, does anyone actually believe AGI is a real thing, or that today’s architectures are even capable of attaining it?
Both the definition of AGI and the date by which it might be achieved are hotly debated. Meanwhile, the machine learning engineers and researchers actually building these systems largely do not believe the current LLM architecture can reach this supposed goal.
How AI Researchers View AGI
A recent survey of 475 AI researchers, conducted by the Association for the Advancement of Artificial Intelligence (AAAI) as part of its panel on the future of AI research, found that “[t]he majority of respondents (76%) assert that ‘scaling up current AI approaches’ to yield AGI is ‘unlikely’ or ‘very unlikely’ to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.”
The “Creating Einstein in a Datacenter” Problem
Maxwell Zeff of TechCrunch recently wrote a great analysis of this. In a piece this month, Thomas Wolf, Hugging Face’s co-founder and chief science officer, called some parts of Anthropic CEO Dario Amodei’s vision “wishful thinking at best.”
In Wolf’s opinion, shared by Yann LeCun (and too many others to name), today’s LLMs simply aren’t up to the task of AGI.
“I am not interested anymore in LLMs. They are just token generators and those are limited because tokens are in discrete space. I am more interested in next-gen model architectures, that should be able to do 4 things: understand physical world, have persistent memory and ultimately be more capable to plan and reason.” - Yann LeCun, Nvidia GTC, 2025
However, neither comparing models to PhDs (absurd on the face of it) nor promises of AGI are stopping AI startups from raising huge funding rounds. Silicon Valley is in the business of sales, and Sam Altman hands out false promises like a drunken sailor. This grift on venture funds and the public hasn’t just been a kind of Silicon Valley hallucination or hoax; it has led BigTech to spend tens of billions on AI infrastructure.
It’s shaping up to be a very expensive narrative.
“To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask.” - Thomas Wolf, Hugging Face.
The AGI Startups
The spin-offs from OpenAI seem especially keen to wear the AGI mantle.
The recent statements of Anthropic’s CEO have been especially concerning. And while OpenAI itself might be the ultimate pretender, pretending that you are building something essential to the arrival of AGI has become fairly popular in 2024 and 2025 as well:
We now have a legion of so-called AGI startups
Generative AI investment reached over $56 billion in venture capital funding alone in 2024. How much will it reach in 2025? How much will Capex for datacenters and AI Infrastructure increase in the frantic years ahead?
My Proposed List of AGI Startups
Sébastien Bubeck, the Microsoft researcher who led the “Sparks of AGI” paper, even ended up joining OpenAI. Microsoft’s decision to pour vast sums of money into OpenAI (a reported $10 billion in January 2023 alone) signaled the start of a new kind of grifting culture in Silicon Valley, one that has even spread to China in 2025.
Anthropic by seven co-founders (now billionaires)
xAI by Elon Musk ⭐
Safe Superintelligence (SSI) by Ilya Sutskever ⭐
Thinking Machines Lab by Mira Murati ⭐
Ndea by François Chollet ⭐
DeepSeek by Liang Wenfeng
Reflection AI by Misha Laskin
Moonshot AI by Yang Zhilin and Zhang Yutao
Zhipu AI by Tang Jie and Li Juanzi
Care to estimate the total amount in U.S. dollars the above startups will raise over their lifetimes? It’s going to be a staggering sum. These are the research labs I consider most aligned with the label “AGI startup” as of March 2025. They are likely to keep multiplying, with some even taking robotic form, e.g. Generalist AI.
These companies aren’t building machines of loving grace, and their efforts are very likely going to waste a lot of valuable capital, human talent, and time. Dario Amodei, the CEO and co-founder of Anthropic, recently claimed AI will write 90% of code within 3-6 months and nearly all code within a year, a claim that, if true, would transform software development and the wider industry.
This is clearly serious business! The founders of Reflection AI didn’t think AGI was ambitious enough: they call theirs a superintelligence startup, a label previously only Ilya Sutskever himself had dared to use. Thinking Machines Lab by Mira Murati is in fact so serious that it needed dozens of former OpenAI employees just to get off the ground. After the commercial success of Anthropic (the first spin-off of the OpenAI Mafia), anything goes! 🚀
Futuristic Lawyer
On Futuristic Lawyer Newsletter, Tobias Jensen writes about the knowledge gap between big tech companies and democratic institutions.
"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russel, a computer scientist at UC Berkeley who helped organize the report, told New Scientist.
BigTech capex, now joined by European and Chinese capex, in AI infrastructure and the funding of these “AGI startups” is starting to get out of control in 2025, even as a consensus emerges that scaling up LLMs won’t lead to anything approaching AGI.
François Chollet, himself a founder on our list with Ndea, has argued that while AI might be capable of memorizing reasoning patterns, it’s unlikely to generate “new reasoning” for novel situations. Just don’t tell that to Japan-based Sakana AI. These young prodigies out of OpenAI or Google usually have commercial interests in pretending otherwise. Ndea itself plans to use a technique called program synthesis, in tandem with other technical approaches, to unlock AGI.
Articles by Tobias Jensen
AI Could be Heading Towards the Trough of Disillusionment
How to Deal with Data Harvesting AI Girlfriends?
China's Position in AI & BigTech, an Interview with
In this article, Tobias Jensen does a deep dive on some of the AGI startups worth mentioning in 2025:
For less than $2 a week, unlock our full coverage.
The New Generation of AGI Startups
By Tobias Jensen, March, 2025.
We are seeing a new generation of AGI startups emerge into the market. They are not racing to compete with the capital expenditures of OpenAI, Microsoft, Meta, Google, Anthropic, xAI, and others. Instead, they are reimagining whole new approaches to AI development and exploring alternative pathways to AGI.
See other pieces I have written about AGI on AI Supremacy.
Three AGI Startups
In today’s guest post, we will take a look at three notable AGI startups: Safe Superintelligence (SSI), Thinking Machines Lab, and Ndea.
Each of these was co-founded by a former top researcher at a major US lab: Ilya Sutskever, OpenAI’s former Chief Scientist; Mira Murati, OpenAI’s former Chief Technology Officer; and François Chollet, former software engineer and deep learning researcher at Google, respectively.
At least on paper, Sutskever, Murati, and Chollet held very attractive positions at the world’s leading AI labs. So why did they decide to leave their well-paying day jobs and pursue ventures of their own?
For different reasons of course, but there seems to be a common thread.
Ilya Sutskever told Reuters in November 2024:
"The 2010s were the age of scaling. Now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever."
Mira Murati has not shared much about her reasons for leaving OpenAI but she did say:
“I want to create the time and space to do my own exploration.”
During an epic episode of Dwarkesh Patel’s podcast, François Chollet made several statements about the current state of AI which also reveal his motivation for founding Ndea:
“if you scale up the size of your database and you cram into it more knowledge, more patterns and so on, you are going to be increasing its performance as measured by a memorization benchmark. That's kind of obvious. But as you're doing it, you are not increasing the intelligence of the system one bit. You are increasing the skill of the system. You are increasing its usefulness, its scope of applicability, but not its intelligence because skill is not intelligence. And that's the fundamental confusion that people run into is that they're confusing skill and intelligence.”
Later in the episode, Chollet says:
“I think OpenAI basically set back progress towards AGI by quite a few years, probably like five to ten years, for two reasons. And one is that, well, they caused this complete closing down of research, frontier research publishing. But also, they triggered this initial burst of hype around LLMs.
And now LLMs have sucked the oxygen out of the room. Like everything, everyone is just doing LLMs. And I see LLMs as more of an off-ramp on the path to AGI, actually. And all these new resources, they’re actually going to LLMs instead of everything else they could be going to.
If you look further into the past to like 2015, 2016, there were like a thousand times fewer people doing AI back then. And yet I feel like the rate of progress was higher because people were exploring more directions. The world felt more open-ended.”
Sutskever, Murati, and Chollet are all expressing a desire to explore new approaches to AI development. Although Chollet is the only one who dares to speak out on the record, my guess is that they have all identified the same limitation. The pioneering AI labs display a tunnel-vision focus on increasing data, compute, and parameters to improve the capabilities of AI models. We call it the “scaling paradigm” or the “Bigger-is-Better Paradigm”. Sam Altman describes the idea in a recent blog post on his website about the economics of AI:
“The intelligence of an AI model roughly equals the log of the resources used to train and run it”.
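Taken at face value, Altman’s claim can be written as a rough formalization (my notation, not OpenAI’s or Altman’s):

$$I(R) \approx k \log R$$

where $I$ is a model’s “intelligence”, $R$ is the resources used to train and run it, and $k$ is some constant. Read this way, each additional unit of intelligence requires multiplying resources by a constant factor, which is exactly what makes the paradigm so capital-hungry.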
The approach is conveniently simple for a non-technical person like Sam Altman to explain to investors. More money, more intelligence. But is OpenAI climbing the wrong ladder by focusing so much on scale, while neglecting entirely new directions of research?
A paper from September 2024 by Gaël Varoquaux, Research Director at the Inria Saclay Centre in Paris; Alexandra Sasha Luccioni, AI and Climate Lead at Hugging Face; and Meredith Whittaker, Signal’s president, convincingly refutes the Bigger-is-Better Paradigm with three main arguments:
The Bigger-is-Better Paradigm is not sustainable since “compute demands increase faster than model performance”. Concretely, performance on benchmarks tends to saturate on many tasks after a certain point.
The focus on scaling general-purpose AI steers attention away from studying how the models really work, which in turn hampers efforts to audit and evaluate them. Additionally, for many specialized applications (e.g. in health, education, or the climate) “utility does not require scale”, meaning that smaller models can suffice.
The Bigger-is-Better Paradigm exacerbates a concentration of power in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its application throughout society.
DeepSeek ⭐
Most readers will recall how a certain Chinese startup took the internet by storm a few weeks ago by publishing astounding test results with their open-source reasoning model DeepSeek-R1, delivered on a very modest training budget. Should DeepSeek be included in the name pool of new AGI startups? Let’s briefly address that, before we move on to talking about SSI, Thinking Machines Lab, and Ndea.
The company’s CEO Liang Wenfeng has explicitly said in interviews that DeepSeek is focused on “achieving AGI”, not on monetization strategies. However, this could change, and change fast. The South China Morning Post (SCMP) reported on February 18 that DeepSeek has updated the scope of its business registry information to include “internet information services” - a move which could indicate plans to monetize its technology.
And why not? DeepSeek’s commercial potential is enormous. The company’s open-source AI models now provide AI assistance in devices by Chinese smartphone makers such as Huawei Technologies, Oppo, Vivo, Lenovo Group and Honor. Tencent Holdings and Baidu have integrated DeepSeek’s models into their search products. Chinese carmakers including BYD, Geely, Great Wall Motor, Chery Automobile, and SAIC Motor have announced plans to use DeepSeek’s models in their vehicles. Even local government agencies are incorporating DeepSeek into their digital systems.
Much has been written about how DeepSeek outperformed America’s tech darlings on a shoestring budget. Nonetheless, semiconductor research and consulting firm SemiAnalysis estimates that DeepSeek has spent well over $500 million on computing hardware alone over the company’s history, excluding steep R&D costs and other preliminary costs that are not accounted for in DeepSeek-R1’s research paper.
All this evidence considered, I think of DeepSeek as a competitor to the major AI labs in the US, more so than a part of the new generation of AGI startups.
Safe Superintelligence (SSI) ⭐
Ilya Sutskever is among the most prominent names in the AI industry. He co-founded OpenAI and worked as the company’s Research Director/Chief Scientist for more than eight years. Sutskever was the main contributor to the architecture of OpenAI’s early GPT models, so I don’t think it’s an exaggeration to say that he laid the foundation for OpenAI’s success.
On a personal level, Sutskever is well-known for possessing an esoteric belief in the potential of AI and the coming of a new “superintelligence”. According to a popular article by The Atlantic, Sutskever unironically led employees in a chant at an OpenAI company party: “Feel the AGI! Feel the AGI!”
During the chaos at OpenAI in November 2023, when Altman was fired and rehired as CEO within the same week, Sutskever voted to remove Sam Altman as CEO, but then apologized on X afterward and signed an employee letter calling for the entire board to resign and for Altman to return. The course of events, the full details of which we will probably never know, was in all likelihood a decisive factor in Sutskever’s departure from the company. Additionally, Sutskever had been heading OpenAI’s general AI safety research in the “superalignment team”, which was officially disbanded soon after he left in May 2024.
Safe Superintelligence Inc. (SSI) was founded in June 2024 by Sutskever together with Daniel Levy, a former OpenAI researcher, and Daniel Gross, former AI lead at Apple. Breaking with OpenAI’s commercially focused approach, SSI is not concerned with selling everyday AI products to consumers but with rapidly advancing “superintelligence” while emphasizing safety:
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
Even before presenting a product, a road map, or a business plan, SSI has raised $1 billion in capital from investors including Sequoia, DST Global and SV Angel. As reported by Bloomberg, SSI is, at the time of writing, close to raising another funding round of more than $1 billion at a $30 billion valuation.
Quite impressive for a company that isn’t generating any revenue and has no plans to do so in the near future.

Thinking Machines Lab ⭐
Mira Murati played a central role in OpenAI’s growth journey from a small non-profit organization on a bizarre mission to a key player in America’s energy infrastructure, security, and economic development.
Murati joined OpenAI in 2018 as “VP of Applied AI & Partnerships” with prior experience from Tesla as Senior Product Manager. She took the role of Chief Technology Officer (CTO) at OpenAI in 2020 and had the chance to serve as interim CEO for a few days during Sam Altman’s brief ouster. Murati announced that she would leave OpenAI in September 2024, shortly after OpenAI released its new series of “reasoning models” with the launch of o1-preview and o1-mini.
On February 18, 2025, Thinking Machines Lab came out of stealth. Murati is CEO and co-founder, but the team consists of more than a dozen former OpenAI employees, including chief scientist John Schulman, who co-founded OpenAI and worked a brief stint as an AI safety researcher at Anthropic; CTO Barret Zoph, who served as OpenAI’s VP of Research; and Lilian Weng, who led safety research at OpenAI.
Thinking Machines Lab is structured as a Public Benefit Corporation (PBC) - a company structure used by for-profit companies with a public-good mission. As a PBC, Thinking Machines Lab not only has a fiduciary responsibility to maximize returns for shareholders but must also publish an annual “benefit report” outlining its progress towards its public-benefit goals.
Thinking Machines Lab is on a mission to improve the scientific community’s understanding of frontier AI systems to make them “more widely understood, customizable and generally capable”. The company also emphasizes that “scientific progress is a collective effort” and that it wants to collaborate with the wider community - a clear deviation from OpenAI’s famous closedness, which has become an industry norm.
“Knowledge of how these systems are trained is concentrated within the top research labs, limiting both the public discourse on AI and people's abilities to use AI effectively. And, despite their potential, these systems remain difficult for people to customize to their specific needs and values.”
Thinking Machines Lab is working to build “more flexible, adaptable, and personalized AI systems” that can “work with people collaboratively”. The company avoids terms such as “superintelligence” and “AGI” in its mission statement, but this is clearly what the founders believe in and are working towards. The mission is to create “advanced multimodal capabilities” and “ultimately [...] unlock the most transformative applications and benefits, such as enabling novel scientific discoveries and engineering breakthroughs.”
Overall, it sounds like Thinking Machines Lab is on the same mission as OpenAI but with an approach rooted in scientific collaboration and open research, rather than scaling.
“We'll focus on understanding how our systems create genuine value in the real world. The most important breakthroughs often come from rethinking our objectives, not just optimizing existing metrics.”

Ndea ⭐
François Chollet launched Ndea in January 2025 with co-founder Mike Knoop; both have backgrounds in software engineering. Chollet brings nearly ten years of work experience from Google, while Knoop is the co-founder and former product and engineering lead of the software automation platform and industry giant Zapier.
Chollet is well-known in the AI community for two major contributions. First, prior to joining Google in 2015, Chollet developed the popular deep learning framework Keras, which provides a high-level Python interface for building and training neural networks. Second, Chollet published the paper On the Measure of Intelligence in November 2019, which critiques classical benchmark tests for failing to measure AI’s general intelligence. The paper was accompanied by the ARC-AGI benchmark (Abstraction and Reasoning Corpus for Artificial General Intelligence), intended as a more truthful proxy measure of AI intelligence (or AGI). The test consists of many puzzles deliberately designed to be easy for sharp humans and difficult for AIs to solve.
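To give a sense of why Keras became so popular, here is a minimal sketch of the high-level interface it pioneered: defining and compiling a small classifier in a handful of lines. The layer sizes and the implied 28x28-image task are my own illustrative assumptions, not anything from Ndea or this article.

```python
# A minimal Keras sketch: define, compile, and inspect a small
# feed-forward classifier. Layer sizes and the implied image task
# are arbitrary illustrative choices.
import keras
from keras import layers

model = keras.Sequential([
    layers.Input(shape=(784,)),              # e.g. a flattened 28x28 image
    layers.Dense(128, activation="relu"),    # one hidden layer
    layers.Dense(10, activation="softmax"),  # 10-way class probabilities
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.summary()  # prints the architecture and parameter counts
```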
Chollet and Knoop launched the ARC Prize Foundation in June 2024, which offered a prize pool of $1 million in 2024 for the teams that could deliver the best results on the ARC-AGI benchmark with an open-source solution. The winning team scored 53.5% on the benchmark and won $25,000, while the best paper was awarded a prize of $50,000. A new ARC Prize competition is expected to launch as soon as Q1 2025, based on a new version of the benchmark called ARC-AGI-2.
According to Ndea’s website, the name - like 'idea' with an 'n' - is inspired by the Greek concepts ennoia (intuitive understanding) and dianoia (logical reasoning). In the same spirit as the ARC Prize Foundation, Ndea is dedicated to exploring how AI can develop genuine intelligence rather than skillful maneuvering of benchmark tests. Specifically, Ndea wants to combine deep learning with another promising but much less mature research field known as “program synthesis”:
“Instead of interpolating between data points in a continuous embedding space, program synthesis searches for discrete programs, or models, that perfectly explain observed data. This allows it to achieve much greater generalization power with extreme data-efficiency, requiring only a few examples to learn.”
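To make that idea concrete, here is a minimal, hypothetical sketch of enumerative program synthesis over a toy DSL of my own invention. Real systems (and whatever Ndea is actually building) are far more sophisticated, but the core loop of searching for a discrete program that exactly explains the observed examples looks like this:

```python
# A toy enumerative program-synthesis sketch (illustrative only):
# search a tiny DSL for the shortest program that exactly explains
# a few input-output examples.
from itertools import product

# Tiny DSL: each primitive is a named unary function on integers.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitives left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_depth=3):
    """Return the first (shortest) program consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None  # no program in the DSL explains the data

# Two examples are enough to pin down the target program y = (x + 1)^2:
examples = [(2, 9), (3, 16)]
print(synthesize(examples))  # ('inc', 'square')
```

Note how two examples suffice to identify the program: the search rejects every candidate that fails even one observation, which is the extreme data-efficiency the quote above refers to.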
The goal is to supercharge scientific advancement with AI. Chollet and Knoop’s bet is that current deep-learning AI cannot suffice for this purpose because it “crumbles when faced with open-ended problems” and is “constrained by what humans teach it”. A more general form of intelligence is needed to truly take AI development to the next level, and the scaling paradigm is not enough.
“Today, the acceleration of scientific progress hinges on one factor: AI capable of independent invention and discovery. This capacity is the gateway to advancements beyond our wildest imagination.”

Wrapping Up
The new generation of AGI startups are branch-offs from the major AI labs. They are not fixated on quickly bringing new consumer products to market or on generating fast returns to justify enormous compute spending. Instead, the AGI startups are focused on exploring new research directions so that “AGI can benefit all of humanity”, in line with OpenAI’s mission statement, which the company has arguably abandoned in practice.
Ilya Sutskever’s SSI is focused first on AI safety and second on building advanced capabilities. Mira Murati’s Thinking Machines Lab is focused on collaborative research and increasing the scientific understanding of advanced models. François Chollet’s Ndea is focused on AI autonomy and the research direction of program synthesis. If the AGI startups can attract the right people and succeed in new research directions, they could raise existential and uncomfortable questions for the established AI industry, as DeepSeek did.
We wish them the best of luck.
Editor’s Notes
Wolf thinks that AI labs are building what are essentially “very obedient students” — not scientific revolutionaries in any sense of the phrase.
“We're currently building very obedient students, not revolutionaries. This is perfect for today’s main goal in the field of creating great assistants and overly compliant helpers. But until we find a way to incentivize them to question their knowledge and propose ideas that potentially go against past training data, they won't give us scientific revolutions yet.”
Google DeepMind CEO Demis Hassabis said he thinks artificial general intelligence, or AGI, will emerge in the next five to ten years. Of course, if you had asked him five years ago, you would likely have heard him say the exact same thing!
Google DeepMind, Anthropic, and OpenAI all have “talking points” on how to discuss AGI: how vague to be, how much to promise, and how much to awe (in the case of early OpenAI and Sam Altman, this included fear-mongering).
Nvidia CEO Jensen Huang recently said that artificial general intelligence could - by some definitions - arrive in as little as five years. But which definitions are those, and how many top people at his company would actually agree behind closed doors?
AGI is Unlikely in the Near Term
If most machine learning researchers don’t think AGI is near, why does Silicon Valley culture perpetuate false narratives?
“While large pre-trained systems (such as LLMs) have made impressive advancements in their reasoning capabilities, more research is needed to guarantee correctness and depth of the reasoning performed by them; such guarantees are particularly important for autonomously operating AI agents.” - AAAI Future of AI Research survey
It’s highly plausible, even probable, that large pre-trained models such as LLMs do not have the necessary adaptability, creativity, memory or complexity to realize any quantum singularity of sentience or meta-cognitive capacity. Comparing artificial systems to human capabilities isn’t just deceptive; it should probably be illegal. (OpenAI’s classic claim that its models perform at or above PhD level, for example, is highly fraudulent.)
On the topic of AGI, Chollet has expressed skepticism about Large Language Models (LLMs) being a pathway to AGI. He argues that current models are fundamentally limited in their ability to understand and generalize, particularly on out-of-distribution (OOD) tasks. Chollet suggests that the singular focus on LLMs has led to a stagnation in the frontier research needed to develop AGI, as this approach risks overlooking critical advancements in other areas of AI research.
In a recent paper, “Stop treating ‘AGI’ as the north-star goal of AI research,” it’s clear that serious scientists are getting tired of the marketing campaign run by OpenAI and BigTech.
Yann LeCun on the Big Technology Podcast recently: “Inventing new things requires a type of skill and ability that you’re not going to get from LLMs.”
https://open.spotify.com/show/4ln6H9peIXhq19yv3CdOvE
Wacky valuations much? Perplexity was valued at just $500 million at the start of the year and will soon be valued at $18 billion.
It only has $100 million in annual recurring revenue.
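For perspective, here is the implied revenue multiple, a back-of-the-envelope calculation using the figures above:

$$\text{revenue multiple} = \frac{\$18{,}000\text{M valuation}}{\$100\text{M ARR}} = 180\times$$

Even by frothy software standards, where low double-digit ARR multiples are considered rich, 180x is an extraordinary number.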
The insanity cannot go on much longer.