Citadel's 2026 Global Intelligence Crisis Response to Citrini Research
I can't stop thinking about the Citrini Research piece, and not in a good way.
Good Morning,
I’m fascinated by how the way we see AI and its capabilities shapes how we see the future and its spectrum of possibilities. I must say, I err on the side of bearish realism, with data-based positions as my baseline for anything remotely Macro.
A lot of the Generative AI movement has been exaggerated for profit motives. I’m critical of Venture Capital positions that don’t critically examine the impact of these technologies on human society and neglect to offer a balanced view. A lot of the “technological optimism” we see online is actually manufactured by those who have the most to gain, driven by financial motives and incentives.
It’s been more than a week since the viral Citrini report, The 2028 Global Intelligence Crisis, was released. I’ve been immersed in the dozens of rebuttals and look-alike articles it spawned. I was horrified to see that a speculative piece with such conclusions could move markets and be shared so widely.
My favorite of these rebuttals by serious folk so far has to be Citadel’s more realistic report, The 2026 Global Intelligence Crisis. These two essays present starkly different Macro visions of AI’s future impact, and I think it’s worth my readers’ time to think a little about this.
Here is a chart that might be helpful:
What do you believe will be the impact of AI?
The Impact of AI is just one small force in a complex world
The labor market is a serious matter for America, and its vulnerability has mostly to do with factors outside the scope of AI or the nascent capabilities of AI agents. The major points of Citrini Research align with neither the data nor the historical trends of technological shifts, if Generative AI can even be called one of those shifts. I’m fairly skeptical. Remember, a full 3.5 years later, it has not delivered ROI or penetrated Enterprise companies or small businesses in any significant way.
Any reasonable person knows AI adoption has been slow, not fast. The Citadel article, to me, is much closer to a realistic position: a data-driven rebuttal to “viral AI doomsday” narratives (like the “2028 Global Intelligence Crisis” scenario from Citrini Research), arguing that AI adoption remains slow and stable based on real-time data, with limited near-term evidence of massive labor displacement. It suggests AI could instead help offset other economic headwinds, like aging populations and deglobalization, through productivity gains. Macro analysis, after all, needs to consider the real world.
I have no affiliation with Citadel, obviously. The author is Frank Flight, a Macro strategist with a Masters in Economic and Social History from Oxford. For the record, Citadel is the hedge fund founded in 1990 by Ken Griffin; the firm is known for its "multi-strategy" approach, meaning it invests across many different asset classes simultaneously to manage risk and generate consistent returns.
I don’t know Frank, but from what I can gather he’s a Global Macro Strategist at Citadel Securities, where he has recently gained prominence for his research on the intersection of artificial intelligence and the global economy.
“His analysis was covered by outlets like Bloomberg, Fortune, Business Insider, and Yahoo Finance, often framed as Citadel “demolishing” or “pushing back hard” against the viral doomsday narrative.”
I’m not in the habit of reading what Hedge Funds think about AI’s impact either, but given the virality of the Citrini Research piece, I think it deserves another look.
Frank Flight emphasizes that AI is more likely to complement human labor (boosting productivity, purchasing power, and consumption) than cause mass obsolescence.
I’ve been noticing that a lot of academics, politicians, thought-leaders, mainstream Newsletters, and obviously tech executives too seem to be misrepresenting the real impact of AI, and they are sure to continue exaggerating it in the years ahead. Even Wall Street is exaggerating the impact of AI for profit and to create more volatility. I believe the actual scenario called reality is likely to be fairly different and to lead to some unexpected outcomes.
So without further ado, I’m going to share the piece written by Frank Flight of Citadel Securities directly with you all, so you can make up your own mind.

I will say that, to me, it’s deeply embarrassing that Substack is known more for meme-clickbait than for real or serious analysis.
The 2026 Global Intelligence Crisis
Written on February 22nd, 2026. Read the original link.
By Frank Flight, London, United Kingdom.
TL;DR
Key Arguments from His Research:
Substitution Elasticity: He argues that AI will only replace human labor if the “marginal cost of compute” is lower than the cost of human labor. If compute costs spike due to energy and infrastructure constraints, human workers remain the more viable option.
S-Curve Adoption: He suggests that technological adoption follows a slow, non-linear S-curve rather than an immediate exponential explosion, meaning mass white-collar displacement is unlikely in the near term.
Productivity as a Positive: Flight views AI as a “productivity shock” that functions as a positive supply shock—lowering costs and increasing real income, which he believes is ultimately growth-enhancing.
Productivity gains from AI are likely disinflationary and supportive of growth, helping to counter secular headwinds like aging populations and deglobalization, while new business formation surges to absorb potential economic shifts.
Extreme doomsday scenarios (like rapid job obsolescence) require unrealistic assumptions of instant full adoption, total labor substitution, and no policy/institutional responses, which current real-world evidence strongly contradicts.
Resource Bottlenecks: Physical limits like energy costs and computing capacity will prevent AI from fully replacing human labor.
Economic Growth: Technological shifts historically lower costs and increase real income, changing job roles rather than eliminating them.
Human-Centric Roles: Regulatory protections and the need for human supervision and social interaction create a natural floor for employment.
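The substitution-cost boundary in the arguments above can be made concrete with a toy model. This is my own illustrative sketch, not from Flight's report: the function name, the numbers, and the assumption that marginal compute cost rises linearly with the automated share are all invented for illustration.

```python
# Toy model (illustrative only): substitution stops when rising compute
# demand pushes its marginal cost above the marginal cost of labor.
def automated_share(labor_cost, base_compute_cost, cost_slope, steps=100):
    """Increase automation while compute stays cheaper than labor.

    Marginal compute cost is assumed to rise linearly with the
    automated share (a stand-in for energy/data-center constraints).
    """
    share = 0.0
    for _ in range(steps):
        marginal_compute = base_compute_cost + cost_slope * share
        if marginal_compute >= labor_cost:
            break  # natural economic boundary: substitution halts
        share += 0.01
    return round(share, 2)

# Even with compute starting at a third of the cost of labor, steeply
# rising marginal costs stop substitution well short of 100%:
print(automated_share(labor_cost=30.0, base_compute_cost=10.0, cost_slope=48.0))
```

The point of the sketch is only directional: under these (made-up) parameters, substitution halts partway, which is the "natural economic boundary" Flight describes.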
I believe Frank Flight’s summary is a decent baseline for Wall Street’s current consensus on AI’s real impact. AI bears and bulls should understand basic macro analysis to strengthen their positions and storytelling. I don’t even think the Fed (The Federal Reserve) has a good handle on AI’s actual impact on the labor market and markets, or real AI adoption or real ROI (or the lack of it).
That being said, from science fiction to Macro, no one position has all the data or will be 100% right. The exercise of thinking about AI’s impact is what’s important, not who is right.
The 2026 Global Intelligence Crisis
The year is 2026. The unemployment rate just printed 4.28%, AI capex is 2% of GDP (650bn), AI adjacent commodities are up 65% since Jan-23 and approximately 2,800 data centers are planned for construction in the US*. In spite of the current displacement narrative – job postings for software engineers are rising rapidly, up 11% YoY.
Despite the macroeconomic community struggling to forecast 2-month-forward payroll growth with any reliable accuracy, the forward path of labor destruction can apparently be inferred with significant certainty from a hypothetical scenario posted on Substack: The 2028 Global Intelligence Crisis.
We wrote last week that we see the near-term dynamics around the AI capex story as inflationary, but given markets are focused on the forward narrative, we outline a more constructive take on the end state below. Before that, however, it’s worth reflecting that the imminent disintermediation narrative rests on the speed of diffusion.
Job Postings For Software Engineers Are Rapidly Rising
What Does the Data Actually Say on AI Diffusion Speed?
The St Louis Fed has data on AI adoption from the Real Time Population Survey. The first order presentation of AI adoption is generally a binary question: Do you use AI? The more important question insofar as it relates to the AI displacement narrative is: how intensely is AI being used for work? We can tease out the answer from a subset of the St Louis Fed data that buckets by frequency of AI use. We would posit that if AI represents imminent displacement risk, the real time population data would show an inflection upwards in the daily use of AI for work. The data seems unexpectedly stable and presents little evidence of any imminent displacement risk (solid lines at the bottom of the chart).
AI Adoption Trends Do Not Look Non-Linear
Recursive Technology ≠ Recursive Adoption
The current debate around artificial intelligence conflates the recursive potential of the technology with expectations of recursive economic deployment. In other words, because AI systems can improve themselves or accelerate their own capabilities, commentators are extrapolating a future in which automation and productivity compound indefinitely at exponential rates. Technological diffusion has historically followed an S-curve. Early adoption is slow and expensive. Growth accelerates as costs fall, and complementary infrastructure develops. Eventually, saturation sets in, and the marginal adopter is less productive or less profitable which causes growth to decelerate.
Despite this – markets often extrapolate the acceleration phase linearly, but history implies the pace of adoption plateaus as organizational integration proves costly, regulation emerges, and diminishing marginal returns set in on economic deployment. The risk of displacement declines with a slower pace of adoption.
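The S-curve point can be illustrated with a back-of-the-envelope sketch. This is my own toy example, not Flight's: the logistic function is the textbook S-curve, and the ceiling, rate, and midpoint parameters are arbitrary numbers chosen purely for illustration.

```python
import math

# Logistic S-curve: slow start, acceleration, then saturation.
# All parameter values are arbitrary, for illustration only.
def adoption(t, ceiling=0.8, rate=0.6, midpoint=8.0):
    """Share of firms using the technology at time t (years)."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Naive linear extrapolation from the acceleration phase (years 6-8):
slope = (adoption(8) - adoption(6)) / 2.0
linear_forecast = adoption(8) + slope * (16 - 8)

print(f"S-curve at year 16:  {adoption(16):.2f}")
print(f"Linear forecast:     {linear_forecast:.2f}")
```

Extending the steep middle of the curve in a straight line implies more than 100% adoption by year 16, while the S-curve itself plateaus below its ceiling: exactly the extrapolation error the paragraph above describes.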
Adoption Rate of Generative AI at Work and Home versus the Rate for Other Technologies
Furthermore, it is well acknowledged that training and inference require significant semiconductor capacity, data centers, and energy. Displacing white collar work would require orders of magnitude more compute intensity than the current level of utilization. If automation expands rapidly, demand for compute definitionally rises, pushing up its marginal cost. If the marginal cost of compute rises above the marginal cost of human labor for certain tasks, substitution will not occur, creating a natural economic boundary. This dynamic contrasts sharply with narratives assuming frictionless replication of intelligence. Even if algorithms improve recursively, economic deployment remains bounded by physical capital, energy availability, regulatory approvals, and organizational change. Recursive capability does not imply recursive adoption.
Productivity Shocks Are Supply Shocks
At its core, AI-driven automation is a productivity shock. Productivity shocks are positive supply shocks: they lower marginal costs, expand potential output, and increase real income. In isolation, they are disinflationary and growth-enhancing in the medium term. Historically, every major technological advance (steam power, electrification, the internal combustion engine, computing) has followed this pattern.
The counterargument suggests that AI differs because it displaces labor income directly, thereby suppressing aggregate demand. This does not survive contact with the accounting. If firms produce more at lower cost, prices fall or margins expand (or both). Lower prices increase real purchasing power, which generally increases consumption. Higher margins increase retained earnings and investment capacity. If output rises and real GDP increases, then by national income accounting identity something must be rising on the demand side: consumption, investment, government spending, or net exports must be increasing (more here). A scenario in which productivity surges but aggregate demand collapses while measured output rises violates accounting identities. For AI to generate a sustained macro contraction, one must assume that labor income falls and no compensating rise occurs in investment, fiscal transfers, or external demand. The surge in new business formation is an interesting point of reference here.
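The accounting-identity argument can be sanity-checked with toy numbers. These figures are mine and purely illustrative; the identity itself (Y = C + I + G + NX) is the standard GDP expenditure identity.

```python
# GDP expenditure identity: Y = C + I + G + NX.
# Toy numbers, purely illustrative.
def gdp(components):
    return sum(components.values())

before = {"C": 68.0, "I": 18.0, "G": 17.0, "NX": -3.0}  # Y = 100
# A productivity surge lifts measured output to 105. By the identity,
# demand components must also sum to 105 -- here via higher investment
# and cheaper goods lifting real consumption, despite flat wages.
after = {"C": 70.0, "I": 21.0, "G": 17.0, "NX": -3.0}   # Y = 105

assert gdp(before) == 100.0 and gdp(after) == 105.0
# The change in output equals the sum of changes on the demand side,
# so "output up, every demand component down" is arithmetically impossible:
assert gdp(after) - gdp(before) == sum(after[k] - before[k] for k in after)
```

Trivial as it is, this is the whole point of the identity argument: a scenario of rising measured output with collapsing aggregate demand cannot be written down without the arithmetic breaking.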
New Business Formation is Rapidly Expanding
Substitution Elasticity Constraint
The critical variable in AI displacement is the elasticity of substitution between AI capital and labor. If that elasticity is extremely high – i.e. firms can substitute nearly all human labor with automated systems at relatively stable cost – then labor’s share of income could collapse. In such a world, capital income rises dramatically while wage income contracts. But even here, aggregate demand does not automatically implode. Capital income has a lower marginal propensity to consume than wage income, but it does not have zero spending velocity. Profits can be reinvested, distributed, taxed, or spent. For demand to fall structurally, redistribution mechanisms would need to fail persistently, and investment opportunities would need to dry up simultaneously. Democratic nations facing such displacement risk would generally be expected to err towards regulatory and fiscal policy shifts that offset the worst-case outcomes, further limiting substitution elasticity. Moreover, there is little evidence of AI disruption in labor market data as of today. In fact, the forward-looking components of our labor market tracking have improved, and AI data center construction appears to be driving a pick-up in construction hiring.
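How much the elasticity of substitution matters can be shown with a standard CES production function. The formula is the textbook one; the parameter values (and the framing of K as "AI capital") are my own illustrative assumptions, not from the report.

```python
# CES production: labor's income share as AI capital K grows, for
# different elasticities of substitution sigma between K and labor L.
# Textbook formula; parameter values are arbitrary, for illustration.
def labor_share(K, L, sigma, a=0.5):
    rho = (sigma - 1.0) / sigma           # CES exponent
    return (1 - a) * L**rho / (a * K**rho + (1 - a) * L**rho)

L = 1.0
for sigma in (0.7, 3.0):                  # complements vs near-substitutes
    shares = [round(labor_share(K, L, sigma), 2) for K in (1, 10, 100)]
    print(f"sigma={sigma}: labor share as AI capital grows -> {shares}")
```

Under these assumptions, labor's share collapses as AI capital accumulates only when sigma is well above 1 (strong substitutes); when AI capital and labor are complements (sigma below 1), a growing AI capital stock actually raises labor's share. The doomsday case quietly assumes the high-sigma world.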
US Labor Market Tracking Continues To Point to Improvement
The economy contains a vast array of tasks – physical, relational, regulatory, supervisory – that are costly to automate. Even cognitive automation faces coordination frictions, liability constraints, and trust barriers. It seems more likely that AI will be a complement rather than a substitute for labor in many areas. Historically, technological revolutions have altered task composition rather than eliminated labor as an input. To produce a negative demand shock large enough to overwhelm output expansion, one must assume near-total automation of economically relevant labor combined with extremely weak redistributive responses. To frame this debate correctly, one can simply ask: was the advent of Microsoft Office a complement or substitute for office workers? Ex-ante the concern skewed towards substitution; ex-post it appears a clear complement.
Data Centre Construction is Boosting Construction Hiring
The 15 Hour Work Week
In 1930, John Maynard Keynes wrote “Economic Possibilities for our Grandchildren,” predicting that productivity growth would be so powerful that by the early twenty-first century the workweek would fall to fifteen hours. He was directionally correct about productivity growth, but profoundly wrong about labor market implications. Rather than working dramatically less, societies consumed dramatically more. Why? Because rising productivity lowered costs and expanded the consumption frontier. Preferences shifted toward higher quality goods, new services, and previously unimaginable forms of expenditure. Leisure increased modestly, but material aspiration expanded far more. History suggests productivity gains do not automatically translate into labor withdrawal or demand collapse as they alter the composition of demand, expand real incomes and generate new industries. Keynes underestimated the elasticity of human wants.
Conclusion
For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, near-total labor substitution, no fiscal response, negligible investment absorption, and unconstrained scaling of compute. It is also worth recalling that over the past century, successive waves of technological change have not produced runaway exponential growth, nor have they rendered labor obsolete. Instead, they have been just sufficient to keep long-term trend growth in advanced economies near 2%. Today’s secular forces of ageing populations, climate change and deglobalization exert downward pressure on potential growth and productivity; perhaps AI is just enough to offset these headwinds. The macroeconomy remains governed by substitution elasticities, institutional response, and the persistent elasticity of human wants.
End of quoted piece.
Addendum and Editor’s Notes
As I have mentioned many times in the AI Supremacy Newsletter, real-world adoption of AI is extremely slow. This is especially true in Enterprises, where many early-stage Gen AI pilots failed. Disruption is far more likely to come from younger companies that embody agentic AI principles, if it comes at all. Block firing 40% of its employees is a sign of a bad business with too many employees, not evidence that Citrini Research is right.
“This world is full of inertia, bureaucracy, real world budgets, and old ways of doing things. Generative AI is an extremely clumsy and imperfect technology, at least in 2026.”
By the time Generative AI diffuses into society (think years to decades), we’ll likely have new computing architectures and more holistic paradigms of AI that replace traditional LLMs. The path forward is not scaling LLMs with more compute, but building complementary AI systems that work together in unity.
Generative AI can save time but is unlikely to automate knowledge workers in the near future. Rapid changing of roles within Tech companies does not necessarily translate or reflect broad labor patterns in the economy as a whole.
Enterprise AI can improve greatly and slow demand for entry-level positions in some fields, while hopefully creating new jobs and new kinds of positions; that said, there’s very limited evidence Gen AI is creating new jobs at all.
How the labor market evolves has many variables, and AI is just one of them in a world of demographics, tariffs, aging populations, unemployed youth, increasing wealth inequality, talent mismatch, early retirements, supply-demand constraints and a multitude of other factors like changes to immigration policy.
Let’s not use AI as an excuse or exaggerate its importance in a way that distorts the actual data or our view of the future. This world is full of inertia, bureaucracy, real world budgets, and old ways of doing things. Generative AI is an extremely clumsy and imperfect technology, only good at specific things. Refer to Yann LeCun’s ideas around the limitation of LLMs if you are confused.
In the real world, there are many bottlenecks that greatly slow how Generative AI impacts the economy, the labor market, markets, and the future of jobs. Any serious Macro analysis obviously needs an economic model that accounts for all these variables.
The 2026 Global Intelligence Crisis: Key Pillars
It’s all interesting to think about. We need more macro-economic analysis of the future of AI, not more AI 2027 or 2028 scenarios that fundamentally exaggerate the capabilities of the current phase in the real world.
The Citrini Research report and Citadel positions display vastly different views of how AI plays out in the future.
The fear of automation has always been stoked by those with a financial incentive to do so. The pressure on entry-level jobs and some professions is real, but we shouldn’t generalize that to all of society or to vastly different kinds of jobs. The long-held assumption that repetitive physical labor would be automated first is likely to be wrong, however. Of course, neither the Citrini nor the Citadel view is a comprehensive view of the future of AI’s impact, but both have been trending of late.
We cannot afford to be too bullish or too doomsday-negative, as either extreme will skew how we interpret the data in the coming years. I believe cautious skepticism is a more reasonable baseline.
Towards a New Definition of Work and Labor in Human Meaning
For the most part, we don’t know yet how the future of AI will impact jobs, but the majority of reports, economists hired to write about the topic, and even serious academics universally exaggerate its impact. If AI is impacting coding and SWEs the most, then starting a new company becomes a more appealing opportunity for the next generation, given the reduced engineering costs for AI natives (which one has to presume the Alpha cohort, born between 2010 and 2024, will be), especially as youth unemployment becomes a bigger issue.
I Expect Rising Youth Unemployment due to Many Factors
As of March 2026, the youth unemployment rate in the United States for the 18 to 24 cohort is approximately 8.1% (blended), though the Bureau of Labor Statistics (BLS) typically breaks this down into two sub-categories: 16–19 and 20–24.
If AI is impacting career ladders and entry level jobs the most (i.e. lower hiring of the 2020s), as some analysts seem to believe - young people in the U.S. could be the biggest losers of the AI revolution. For now in 2026, the U.S. is fairly resilient.
Anyways I hope you found this interesting and accessible.