AI Papers on the Spectrum of RSI and the Illusion of Thinking
"To be or not to be" is also the question for AI.

Lately, academic papers appear to be serving as a form of public relations for various ideological positions along the AI hype spectrum. This isn't really anything new, but it is striking.
Recursive Self-improving AI (RSI)
A couple of papers caught my attention around this trend of RSI after Google DeepMind announced AlphaEvolve in mid-May. The same theme shows up in Anthropic's claims about how much Claude Code is used to improve Claude Code itself, e.g. the claim that Claude Code wrote 80% of its own code.
Google says it has "deployed algorithms discovered by AlphaEvolve across Google's computing ecosystem, including our data centers, hardware and software." Other research labs are claiming their own flavors of RSI, or frameworks for achieving it. Far-fetched as DeepMind's usual audacious claims are, Sam Altman's "Gentle Singularity" blog post goes even further. It reads as if the prophecies of venture capitalists could justify the enormous capital being spent on a pattern-matching system like the transformer architecture, where token prediction is coached or imbued with names like "reasoning models" and "superhuman intelligence."