AI Supremacy

State of AI, Datacenters and AI Capex

AI 2027, OpenAI's AGI marketing shenanigans, the Stanford AI Index 2025 conclusion, and a datacenter report. Plus Nvidia's stock: are tariffs hurting the great datacenter boom?

Michael Spencer
Apr 08, 2025
[Image: Meta is building a giant $800 million data center in Idaho, via Data Center Frontier.]
The great datacenter boom vs. tariffs. Will AI capex be hit?

Welcome Back,

Today I want to cover some reports I've been reading and talk to you about the state of AI datacenters. Also, in case you missed it, view my 2025 articles so far. I want to provide the most value to my readers that I possibly can.

In Case You Missed It (ICYMI)

My short wrap-up of recent AI news, bundled here for quick listening (4 minutes 33 seconds):


In 2021, a researcher named Daniel Kokotajlo published a blog post called "What 2026 Looks Like," in which he laid out what he thought would happen in AI over the next five years. Daniel worked as a governance researcher at OpenAI on scenario planning. More recently he has teamed up with Scott Alexander (read their introduction) and others to release the strange, almost sci-fi AI 2027 project.

Watch the Podcast.

The Super-Intelligence Threat of 2027

The result of their collaboration is "AI 2027," a report and website released this week that describes, in a detailed fictional scenario, what could happen if AI systems surpass human-level intelligence — which the authors apparently expect to happen in the next two to three years.

AI 2027 is a "comprehensive and detailed" (and hopelessly accelerationist) scenario forecast of the future of AI. It comes from a tradition of LessWrong thinkers and former OpenAI employees who appear to have AGI talking points, and for whom Dwarkesh (the viral YouTuber) appears to act as a PR booster. Podcasts here and here.

Read the Report

Animation source: Jacqui VanLiew; Getty Images, via Wired.

“We’re exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community.” — Google DeepMind

  • Google also recently released a paper on AGI risks and how to mitigate them, trying to frame itself as taking a responsible approach to AGI — remarkably, and seemingly in spite of Google's decision to reverse its ban on AI weapons!

Read the DeepMind Blog

  • Meanwhile, Microsoft quickly fired employee activists who interrupted Mustafa Suleyman (formerly known for bullying subordinates) at its 50th anniversary celebration. Feel the AGI! Ilya must be feeling it — his SSI somehow has a $30 billion valuation.

Setting the Record Straight

© 2025 Michael Spencer