BigTech's Cloud computing dominance consolidates AI capex bets
Cloud computing is growing in a different way with Generative AI adoption.
Good Morning,
Today we will be thinking about how Generative AI is fostering Cloud growth and how the Cloud computing industry works.
Value Added
Our guest today is a prolific writer and thinker on the political economy of innovation. If this interests you, consider subscribing:
Microsoft’s recent earnings signaled that Generative AI is accelerating its Azure cloud growth. This is a huge part of the ROI from AI capex that a lot of folks are missing. Azure revenue grew 33%, with 16 points of that growth attributed to AI.
This is pushing Cloud leaders in China like Alibaba, Huawei and Baidu to double down on LLMs, AI chips and Generative AI. Outside of digital advertising, Cloud computing is the biggest business model in all of technology, with the biggest total addressable market.
Cloud Computing Growth and Google Search Uncertainty
Microsoft’s cloud is growing significantly faster than Amazon Web Services (AWS), due to its early investment in Generative AI. The global cloud computing market is worth around $912.77 billion in 2025. Precedence Research predicts that the market will keep growing and could reach $5.15 trillion by 2034, a CAGR of 21.2% over the next ten years.
Google Cloud, which is smaller than AWS and Azure, is also growing at a healthy clip: its revenue increased 28% year over year. Google’s efforts in AI products are also giving Google Cloud more traction. Its Search advertising prospects in the years ahead are a little more uncertain.
Recently, an Apple executive testifying in federal court in Washington as part of the Justice Department’s lawsuit against Alphabet said AI was on its way to replacing Search, leading to a steep drop in Google’s stock price yesterday of about 7.5%. That was the third-largest share price decline since the company went public in 2004. As you can see, big things are happening in mid-2025.
The Gen AI Repositioning
The competition in Generative AI among BigTech players is a repositioning of existing markets like Cloud computing, search advertising, social media, digital advertising as a whole, and the entire SaaS industry and B2B space.
Alphabet said Waymo is providing more than 250,000 fully autonomous paid rides per week across the San Francisco, Los Angeles, Phoenix and Austin regions. And from Uber data it appears consumers prefer autonomous vehicle rides to human drivers. The average Waymo vehicle in Austin is “busier than 99%” of human-driven vehicles there, Khosrowshahi said. If a recession takes place in and around the end of 2025 and 2026, some analysts see Agentic AI products as having a unique window to stimulate some adoption and automation.
Rise of the Huawei Hydra
Meanwhile in China, Alibaba and Huawei are making incredible progress in LLMs, open-source models and AI chips, and are fortifying their lead in the cloud in their region. I’ve asked the author
of AI Proem Newsletter to take a closer look again at Huawei. Huawei is obviously pivotal to China’s Cloud and AI strategy. Huawei is like China’s Apple, Cisco, Nvidia and Google all spun into one, in a sense. Huawei is by default becoming the Nvidia of China, and Huawei Cloud is already the second-largest player in the Chinese cloud service market, with a 19% share in Q3 2024.
Of course, globally its cloud share only amounts to 2%. The West doesn’t have a Huawei equivalent. China is expected to catch up in some ways to the West in the semiconductor and Cloud computing industries by 2035. Due to export restrictions by the West on AI chips, how China’s BigTech collaborates with AI startups like DeepSeek has already gotten a lot more sophisticated.
Huawei’s breakthrough in May is likely to be its new chip Ascend 910D, its most advanced AI chip yet.
Nvidia CEO on China
On April 30th Jensen Huang made some stunning remarks relevant to the rise of Huawei as a potential competitor.
Who and What to Watch?
Amazon Web Services (AWS) still holds a significant lead in the Cloud, however. In Q4 2024, AWS held approximately 30% of the cloud infrastructure services market, a slight decrease from its 31% share in Q4 2023. While it’s losing ground relative to Azure and Google Cloud, Amazon has in recent years significantly improved its digital advertising business. If Google begins to lose its Search Ads dominance, Amazon is well positioned.
Generative AI is thus not “bigger than mobile”, but it is a significant layer in the Cloud for these B2B Behemoths.
Generative AI and its intersection with Cloud computing mean these are the key Cloud companies to watch:
Huawei (and SMIC)
Google Cloud
Azure
Alibaba
Nvidia
AWS
Alibaba is more dominant in China than AWS is in the U.S. in the Cloud
Let’s note that Alibaba is the biggest Cloud computing provider in China and also among the most advanced, with its Alibaba Cloud Qwen division. Alibaba has a bigger share of China’s market than AWS has in the U.S. now.
Alibaba, the leader with a 36 per cent market share in the fourth quarter, has pledged to invest 380 billion yuan (US$52.4 billion) in computing resources and AI infrastructure over the next three years. Note also that Google and Amazon have significant equity in Anthropic, the leading Enterprise AI firm of the Generative AI movement.
Let’s go to the deep dive of the day:
The evolution of cloud computing
Value Added is a newsletter about the political economy of innovation.
Value Added
It’s rare to find a writer so keen on the history of technology and how it actually works. JS Tan
of Value Added Newsletter has been thinking about all of this quite deeply.
Scholar in Residence 🎓
JS Tan is a PhD Candidate at MIT’s international development program, researching the political economy of innovation in China and the US. He previously worked in the cloud computing industry as a software engineer.
I consider him an expert on innovation. JS Tan is recognized for his contributions to data transparency, having recently won a 2023 MIT Prize for Open Data, which highlights his commitment to promoting accessible information in research and policy discussions. See Collective Action in Tech. He is among the most promising young scholars on Substack I’ve ever come across.
The evolution of cloud computing
Image: The ever-cultured JS chose the artwork Scramble: Green Double/Left N, Right 8 (1977) by Frank Stella.
Editor’s Addition
I’m not going to lie, 🕊️ I am also a fan of JS Tan’s interest in Tech activism. In the past he’s also written on topics such as bullshit tech work, Big Tech’s collaboration with Big Oil, and industrial relations in China’s tech sector. He’s incredibly prolific.
The untold story of Twitter’s union drive and how Elon Musk busted it
Why Trump's tariff won't revive American industry
How we forgot about production—and why it’s back on the agenda
America's Era of Hidden Industrial Policy
The Political Foundations of Green Industrial Policy
The cloud-as-utility business model
To rent or buy
tl;dr 🎧 Audio Version: 22 minutes 50 seconds:
Tan’s Works
☁️ Behind the AI Arms Race: U.S. vs. China Cloud Computing Comparison
🌍 Pluralism vs corporatism: why countries innovate differently
🐋 DeepSeek part 1: How new labor practices propelled an unknown AI firm to the top
🚀 DeepSeek part 2: An Outlier in China’s AI Innovation Ecosystem
For less than $2 a week, get access to our best work.
The evolution of cloud computing: from a basic utility to a platform for innovation
The cloud is often likened to a basic, fungible utility such as electricity or water. Matt Wood, then-chief data scientist of Amazon’s cloud business, even explicitly said that it was their goal to deliver “computing power as if it was a utility.” And like a utility, the cloud quietly powers much of our digital lives—largely invisible yet completely essential. Just as a power outage can paralyze a city, a cloud outage can bring the digital world to a standstill.
But there’s something odd about comparing the cloud to a utility. The firms that dominate it—Amazon, Microsoft, and Google—are nothing like water, gas, or electricity providers, which operate in the background with little fanfare. (How many utility companies can you name off the top of your head?) Even telecom giants like AT&T or Verizon, which offer a more sophisticated kind of utility, market themselves around reliability and coverage rather than cutting-edge innovation.
By contrast, Amazon, Microsoft, and Google feel like an entirely different breed. Rather than acting like sleepy utility companies, quietly collecting revenue from every customer that touches their cloud, they are tech companies in the most classic sense—defined not by stability but by relentless disruption.
To be clear, these firms do offer the most basic, utility-like servers and likely always will. But where they diverge from traditional utility providers is in their relentless push to create value beyond these foundational services, from AI-powered computing clusters and custom-designed silicon chips to sophisticated software platforms. In this way, the cloud isn’t just about providing basic infrastructure—it’s about building differentiated, high-value services.
In this post, we’ll explore the evolution of the cloud business—how it began as a traditional utility-like service, offering basic, commoditized storage and computing at low costs. We’ll then examine its transformation into a business driven by differentiated, value-added services that command a premium.
Before the cloud, businesses had to purchase their own physical servers, networking equipment, and disk drives for storage. This physical equipment was installed in dedicated rooms within office buildings and wired up so that the business could run its IT services, such as email, productivity software, shared storage, etc.
The problem with this setup was that while buying generic servers was relatively affordable, maintaining them and keeping up with rapid technological advancements proved to be costly. As computer chips advanced according to Moore’s Law—which predicts that chip capacity doubles roughly every two years—companies faced a tough dilemma: upgrade their IT infrastructure to stay competitive or continue using outdated equipment and risk falling behind.
Upgrading wasn’t a simple task. It required IT professionals to physically remove old servers, install new ones, and migrate software and data—all while avoiding any disruptions to business operations. In fact, the cost and complexity of these upgrades were sometimes so high that CTOs would often choose to delay them, sacrificing the benefits of cutting-edge IT infrastructure to save on expenses.
On top of that, managing servers required paying for the physical space they occupied—which, for large corporations, could mean entire buildings. In effect, these companies were operating their own private data centers, requiring not just hardware upkeep but also the management of entire facilities. This included ensuring reliable power and cooling for the servers, as well as staffing security personnel, maintenance crews, and other support teams to keep everything running smoothly.
Cloud computing initially entered the corporate mainstream with the promise of alleviating these pains by providing IT infrastructure as a rentable service. Similar to other subscription-based models, it allowed users to rent servers instead of purchasing them outright—like renting a movie online instead of buying a DVD. By renting servers virtually, businesses no longer had to manage physical hardware. And with the constant pressure to upgrade servers to keep pace with Moore’s Law, companies using the cloud could stay up-to-date without the usual costs and complexities of handling upgrades themselves.
Indeed, the core of the cloud business model was the simple message that using the cloud would be more cost-efficient than buying and managing your own hardware. The reason was twofold. First, with the cloud, companies would only need to pay for the IT resources in use. Unlike the traditional IT setup where businesses had to pay large upfront costs to purchase servers—often buying more capacity than needed to account for future growth—the cloud would shift the (previously CapEx) costs to an operating expense (OpEx). This allowed companies to go the asset-light route, scaling their infrastructure costs up or down based on their day-to-day needs.
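The CapEx-to-OpEx shift described above can be sketched with a toy break-even calculation. Every figure here (server price, upkeep, refresh cycle, hourly rate, utilization) is an illustrative assumption, not real vendor pricing:

```python
# Hypothetical sketch: buying servers (CapEx) vs. renting cloud
# capacity (OpEx). All prices are invented for illustration.

def on_prem_cost(years, server_price=10_000, upkeep_per_year=2_500,
                 refresh_years=3):
    """Total cost of owning: upfront purchase, yearly upkeep, and a
    full hardware refresh every `refresh_years` (Moore's Law pressure)."""
    refreshes = max(0, (years - 1) // refresh_years)
    return server_price * (1 + refreshes) + upkeep_per_year * years

def cloud_cost(years, hourly_rate=0.40, avg_utilization=0.5):
    """Pay-as-you-go: only the hours actually used are billed."""
    billed_hours = years * 365 * 24 * avg_utilization
    return hourly_rate * billed_hours

for y in (1, 3, 5):
    print(f"{y} yr: own ${on_prem_cost(y):,.0f} vs rent ${cloud_cost(y):,.0f}")
```

Under these made-up numbers, renting stays cheaper at 50% utilization; the point of the sketch is only that ownership costs arrive in lumps (purchase plus periodic refreshes) while cloud costs scale with actual use.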
Second, cloud providers benefit from economies of scale. They have poured hundreds of billions of dollars into building data centers, achieving a scale that few businesses could ever match. These cost savings on hardware, energy, and maintenance then get passed on to customers, making cloud services more affordable to businesses.
Price wars
With cost savings as the cloud’s primary selling point, providers competed aggressively on price, focusing on offering the lowest rates for basic, commoditized server resources. The cheaper a provider could deliver its services, the more competitive it became in the marketplace. Eventually, this competition led to a price war in the early years of cloud computing, where any price cut announced by one vendor was quickly matched, if not undercut, by another within days.
According to Data Center Knowledge, Amazon, Microsoft, and Google made 25 price cuts on basic cloud resources like compute and storage between early 2012 and March 2013. This trend accelerated in 2014, with price reductions becoming both larger and more frequent. CRN, an IT trade publication, reports:
"Google cut pricing for its Compute Engine Infrastructure-as-a-Service by 32 percent and storage by 68 percent for most users. A day later, AWS (Amazon’s cloud) slashed pricing for its cloud servers by up to 40 percent and cloud storage by up to 65 percent. A few days after that, Microsoft said it's introducing a new entry-level cloud server offering that will be up to 27 percent cheaper than its current lowest service tier."
In a paper published by the National Bureau of Economic Research, Byrne et al. (2017) conducted a systematic analysis of compute and storage prices among the top cloud providers, finding that between 2009 and the end of 2016, cloud processing costs dropped by approximately 50%, while storage prices declined by 70–80% over the same period.
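As a quick sanity check on those figures, the total declines convert to annualized rates as follows (treating 2009 through end-2016 as roughly eight years):

```python
# Converting the Byrne et al. (2017) total price declines (~50% for
# compute, ~70-80% for storage) into constant annual rates.

def annualized_decline(total_decline, years):
    """Yearly rate that compounds to the given total decline."""
    remaining = 1.0 - total_decline
    return 1.0 - remaining ** (1.0 / years)

years = 8  # 2009 through end of 2016
print(f"compute: {annualized_decline(0.50, years):.1%}/yr")  # ~8.3%/yr
print(f"storage: {annualized_decline(0.75, years):.1%}/yr")  # ~15.9%/yr
```

Even the steeper storage decline, around 16% per year, is gentler than the headline price-war cuts of 30 to 65% suggest, because those cuts were episodic rather than continuous.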
In many ways, this race to the bottom mirrored the logic of the Silicon Valley playbook: aggressively subsidize costs to capture market share, then rely on scale and network effects to create market dominance. The strategy was simple—sacrifice short-term profits to achieve long-term control. By making the cloud as cheap as possible, providers expanded their market with the hopes that new customers would be locked into their ecosystems. The idea was that once a business had migrated its applications, data, and workflows to a particular cloud platform, the effort and costs required to move elsewhere would become prohibitive.
In fact, all three cloud firms had previously found tremendous success using this very strategy—monopolizing markets by temporarily slashing costs or even offering services for free. Google did it with search, Amazon with online retail, and Microsoft with its productivity software and operating system. And so, armed with billions in cash and a playbook that had worked before, the cloud giants dove headfirst into a tit-for-tat price war, aggressively cutting costs throughout the early to mid-2010s.
Low margins
In some ways, the price war benefited the industry. It lowered the barrier to entry, encouraging businesses that might have been hesitant to shift their IT infrastructure to the cloud. For providers, however, it was an expensive gamble—an aggressive attempt to gain market share. And nearly a decade later, it seems to have been just that: an attempt, with somewhat mixed results.
The core issue with this strategy was that, unlike classic digital platforms, cloud computing doesn’t benefit from strong network effects. Platforms like Instagram or Uber become more valuable as more people join—creating a self-reinforcing loop that drives growth. In contrast, the value a customer derives from using a particular cloud provider isn’t directly tied to how many other customers that provider has.
Just as importantly, the revenue potential from each customer also varies widely—a Fortune 500 company can spend millions on cloud services, while a small business like a neighborhood café might spend only a fraction of that. This means, having one big customer might be significantly more valuable to the cloud provider than hundreds of small ones.
Admittedly, evaluating the efficacy of cutting costs on increasing the market share of any individual vendor is an entire research project on its own. However, from looking at the overall movement in market share in the 2010s, it would appear that this strategy didn’t play out quite as planned. Amazon, which was the most aggressive in its price cuts, held its market share steady at just over 30 percent for most of the decade. Microsoft, on the other hand, saw a consistent rise during the same period. It’s worth noting that smaller cloud vendors such as Dimension Data, Joyent, and GoGrid were eventually priced out of the industry—so in that sense, the strategy had some success. But when it came to dramatically reshaping the competitive landscape among the top players, the results were far less impactful.
One reason for this is that, by the mid-2010s, customers started to take a multi-cloud approach rather than relying solely on a single vendor for all their IT needs. This approach helped companies avoid vendor lock-in, giving them greater flexibility and bargaining power. Another reason was that some companies had built-in path-dependent advantages. For instance, Microsoft was able to leverage its long-standing enterprise connections from its Windows Server business, making it a natural choice for many corporations already embedded in its ecosystem.
Whatever the reasons, by the late 2010s, the cloud providers—at least those still in the game—seemed to recognize that price competition alone wasn’t sustainable. Even though the top players were flush with cash, none of them were truly profitable under the existing model. AWS, the market leader, didn’t turn a profit until 2015—nine years after its launch. The cloud needed a new model, and as we shall turn to next, that’s exactly what happened.

The cloud-as-innovation platform business model
The IT spending paradox
If the cloud, like electricity, is a simple fungible utility, then the cloud industry's objective would be as straightforward as moving as much of the world's existing IT infrastructure into the cloud as possible. In a way, this isn't a bad business to be in. According to Gartner, global IT spending in 2010, just as cloud providers were getting started, was already at $3.4 trillion—by all means, a very attractive market to disrupt. But assuming the cloud ends up being cheaper than every business running their own IT infrastructure in-house, then the cloud would, in theory, stunt overall IT spending.
This, however, doesn't seem to have happened. Instead, the global cloud computing market has far outpaced global IT spending growth, surging from about $24.63 billion to $156.4 billion between 2010 and 2020—a more than sixfold increase, or an annualized increase of roughly 20.3 percent. By contrast, global IT spending only saw a 2.14 percent annualized increase (to approximately $4.2 trillion in 2020) over the same time period.
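The numbers in this paragraph are easy to verify with a compound-annual-growth-rate calculation:

```python
# Reproducing the growth comparison in the text: cloud revenue grew
# from ~$24.63B (2010) to ~$156.4B (2020), versus global IT spending
# going from ~$3.4T to ~$4.2T over the same decade.

def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

cloud = cagr(24.63, 156.4, 10)  # ~20.3% per year
it = cagr(3_400, 4_200, 10)     # ~2.1% per year
print(f"cloud: {cloud:.1%}/yr, IT overall: {it:.1%}/yr, "
      f"cloud multiple over decade: {156.4 / 24.63:.2f}x")
```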
This tenfold difference in spending growth suggests that businesses are moving to the cloud, not to save on IT costs but to enable new revenue-generating activities. In other words, they’re not just looking for cheaper infrastructure—they’re willing to pay a premium for cloud services that create new value.
In a 2016 interview, Microsoft’s top cloud executive Scott Guthrie said that Azure (Microsoft's cloud business) was no longer competing with Amazon on price but was instead “competing more in value.” And by “value,” Guthrie was referring to “the higher-level services, the features, the performance, and the ability to differentiate or deliver true innovations.” He noted that this marked a shift from “two or three years ago, where I think it was more about cost per VM (virtual machine) or cost per storage.”
What's clear is that from the days of the cloud price wars, the battle for dominance has shifted. Rather than competing on price, leading cloud providers are instead racing to create high-value, differentiated services—those that businesses can’t replicate on-premise.
Value-added services
For cloud providers, part of this shift involved an aggressive expansion of their ecosystems through strategic collaborations with leading hardware manufacturers, software vendors, and enterprise service providers. These advancements enabled cloud providers to offer services that go far beyond traditional IT functions.
Concretely, what do these differentiated, higher-level services look like? Let's walk through a few examples.
Platform-as-a-Service (PaaS)
In simple terms, PaaS is a layer of software built on top of basic cloud resources (like servers, storage, and networking) to provide developers with a complete environment to build, run, and manage applications without dealing with the underlying infrastructure.
Imagine opening a restaurant: instead of buying land, constructing the building, and setting up utilities (which is like managing raw infrastructure), PaaS is like leasing a fully-equipped kitchen where you can focus solely on cooking and creating new dishes. It handles the "back-of-house" operations—like managing the servers, getting them connected to storage, and equipping the system with the appropriate security measures—so developers can concentrate on writing code and launching applications faster.
In other words, PaaS enables programmers to bypass IT and launch, update, and maintain their software services on their own. Because of this, PaaS is considered a “higher-level” service, and so cloud providers charge more for it than just the underlying computing resources it uses.
Advanced hardware
If PaaS is about adding value at the top of the cloud stack, another way cloud providers offer differentiated services is by enhancing the bottom of the stack. One way to do this is by providing specialized hardware in their line-up—servers that are difficult for traditional IT departments to purchase, deploy, or manage on their own due to the complexity, cost, and expertise required.
One example of this is the specialized hardware required for AI clusters. Building an AI cluster isn’t as simple as stacking a few high-powered servers together; it involves assembling a highly complex, technically sophisticated system designed to handle massive parallel processing workloads. These clusters rely on specialized components like high-end GPUs (Graphics Processing Units) and ultra-fast networking technologies (e.g., InfiniBand technology) to ensure seamless data transfer between processors. Some of these components, like top-tier GPUs, are not only expensive but also in limited supply, making them difficult for regular companies to source.
Configuring advanced hardware can require deep expertise in data centers, which cloud providers have invested heavily in over the years. They have highly trained staff and R&D teams dedicated to improving data center uptime, energy efficiency, and cooling technologies. For most businesses, doing all this in-house would be cost-prohibitive and operationally overwhelming.
Infinite capacity
The cloud also enables businesses to operate as if computational limits don’t exist. Since cloud resources can be scaled up or down on demand, companies can offload their computationally expensive workloads into the cloud whenever their on-site infrastructure falls short.
Imagine a team of researchers at a biotech firm needs to run a protein folding simulation—a task that’s notoriously computationally intensive. (You could easily swap this out for a team of AI researchers training a new AI model.) Their on-site servers—represented by the dotted line in the graph—are sufficient for handling day-to-day operations, but running these complex protein folding simulations on that infrastructure would take an impractically long time (as represented by the blue line).
Rather than investing in additional hardware just to speed up one-off simulation projects, the biotech firm can offload the extra workload to the cloud, renting as many resources as they need to get the simulation done in a timely manner (represented by the red line). This not only accelerates research timelines but also empowers businesses to take on more ambitious, resource-intensive projects without being constrained by their existing infrastructure.
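A toy model makes the trade-off concrete. The workload and capacity numbers below are invented for illustration, with the rented "burst" capacity standing in for on-demand cloud resources:

```python
# Toy model of the burst scenario: a fixed on-prem capacity versus
# renting extra cloud capacity to finish a big one-off job sooner.
# All numbers are illustrative assumptions.

def time_to_finish(work_units, capacity_units_per_day):
    """Days needed to complete a workload at a given throughput."""
    return work_units / capacity_units_per_day

simulation_work = 12_000  # e.g., protein-folding work units
on_prem_capacity = 40     # units/day the local cluster can spare
cloud_burst = 560         # extra units/day rented on demand

on_prem_days = time_to_finish(simulation_work, on_prem_capacity)
burst_days = time_to_finish(simulation_work,
                            on_prem_capacity + cloud_burst)

print(f"on-prem only: {on_prem_days:.0f} days")    # 300 days
print(f"with cloud burst: {burst_days:.0f} days")  # 20 days
```

The firm pays only for the rented burst while the job runs, instead of owning fifteen times its day-to-day capacity year-round.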
An innovation force multiplier
This isn't an exhaustive list of the ways in which clouds are transitioning to differentiated services. But the three examples should begin to give us a sense of why global cloud spending is far outpacing global IT spending, even before the AI boom of the past few years.
I've tried to illustrate the key idea in this diagram. In the cloud-as-utility model, where the cloud industry began, the cloud’s market was a function of the growth of software or traditional IT. When businesses expand their software needs—whether that's increasing the amount of data storage their company uses or increasing the number of Outlook users needed—the demand for cloud infrastructure would grow in parallel. In this way, the cloud wasn’t enabling fundamentally new types of business activities; it was simply a utility that offered a more efficient way to power existing IT operations.
By contrast, the cloud-as-innovation platform model flips this on its head. Instead of merely being a byproduct of traditional IT expansion, the cloud itself becomes the driver of new business activities and technological breakthroughs. In this model, the cloud isn’t just a more efficient way to run existing software—it’s the foundation that enables entirely new capabilities that wouldn’t be possible with on-premises infrastructure alone.
In other words, the cloud has transformed from a traditional cost center to a powerful source of competitive advantage and a force multiplier for innovation.
The cloud beyond AI
As should be clear by now, AI is one of the biggest examples of the kind of value-added service that the cloud-as-innovation platform business model has enabled. Unlike traditional IT workloads, AI requires vast computational resources, specialized hardware, and sophisticated software frameworks—resources that most businesses simply cannot build or maintain on their own.
If the cloud had remained locked in the cloud-as-utility model—focused solely on providing basic, commoditized infrastructure—it’s possible that we wouldn’t have the AI capabilities we see today. Training modern AI models, especially large-scale generative models, requires thousands of powerful GPUs running in parallel, along with complex data pipelines and optimized algorithms. The costs of acquiring, maintaining, and scaling this infrastructure are prohibitive to all but the biggest tech companies.
Since the release of ChatGPT, the cloud has often been portrayed as nothing more than an appendage to the generative AI boom. Discussions about its role have largely centered on AI training, inference workloads, and the computing power required to sustain large-scale models. What’s often overlooked, however, is that many of these trends began long before generative AI took center stage.
This brief history of the cloud’s evolution should remind us that (1) the cloud existed long before the AI boom, and (2) it serves as a platform for a vast range of innovations beyond AI. From scientists using it for protein folding research to advance drug discovery, to quants running complex financial simulations, to climate scientists modeling weather patterns, the cloud-as-innovation platform model has empowered industries across a wide range of fields with faster experimentation, deeper analysis, and more sophisticated simulations.
The AI boom has pigeonholed the cloud as nothing more than the platform on which AI is trained and deployed, creating a kind of myopia over the breadth of innovation that is possible with today’s technologies. If we continue to see the cloud solely through the lens of AI, we risk overlooking its broader potential—and, in doing so, limiting our collective imagination for what the future holds.
Further Reading
Behind the AI Arms Race: U.S. vs. China Cloud Computing Comparison
Pluralism vs corporatism: why countries innovate differently
DeepSeek part 1: How new labor practices propelled an unknown AI firm to the top
DeepSeek part 2: An Outlier in China’s AI Innovation Ecosystem
Bio:
JS Tan is a PhD Candidate at MIT’s international development program, researching the political economy of innovation in China and the US. He previously worked in the cloud computing industry as a software engineer.
His research delves into the intersections of labor, technological advancement, and economic structures, especially concerning big tech and green technology. These are all important topics for this publication.
Check out his work to learn more: Value Added Newsletter
“My work focuses primarily on the U.S. and China, with broad coverage of topics like industrial policy, green manufacturing, cloud computing, and technology’s role in the emerging productivist era. Labor and its impact on the political economy of innovation is also central to my work.”
Postscript
His essays are informative, but his investigations also have a lingering, poetic, ethereal quality that is difficult to pin down; they ‘hang in the consciousness’ after reading, which in my opinion makes his work more re-readable. I do not fully understand what accounts for this effect (perhaps the advanced integration of AI).
I’m generally very bullish on academics like JS Tan and the author of High Capacity, because they represent a growing trend of academics, scholars and PhD students sharing their knowledge with a wider audience on Substack. The trend began in 2024 and seems to be continuing in 2025, augmenting Substack’s technology, business and China Tech insights in particular, a major interest of mine and a topic of this publication.
Read “China’s overlapping tech-industrial ecosystems” (K. Chan, 2025).