Here's what AWS, Microsoft and Google's capex plans mean for data center growth

  • Hyperscalers will account for nearly 60% of data center capacity by 2029 and half of global data center capex by 2026
  • AI is the primary driver behind data center growth
  • The expectation that AI workloads will shift toward inferencing shouldn't impact data center growth forecasts, analysts told Fierce

If you thought AWS, Google and Microsoft were big now, just wait until 2029. Though hyperscalers accounted for only around 20% of all global data center capacity in 2017, that number is set to hit nearly 60% by 2029, according to a new forecast from Synergy Research Group.

Data center capacity refers to the megawatts of power available to run compute workloads. On-premises data centers accounted for around 60% of data center capacity in 2017, but as of 2023 that number had fallen to 37%. By 2029, Synergy believes on-prem data centers will account for just 20% of global capacity.

Meanwhile, the share of capacity provided by colocation companies, which stood at just under 20% in 2017, is expected to dip relatively slightly. Synergy noted this is not because the availability of colo megawatts is shrinking, but because colo capacity is expected to hold roughly steady while hyperscale capacity nearly triples by 2029.
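The share math here is straightforward but worth spelling out: a segment whose capacity holds roughly steady loses share whenever the overall market grows. The sketch below uses purely hypothetical megawatt figures, not Synergy’s actual numbers, to illustrate the effect.

```python
# Purely hypothetical megawatt figures, chosen only to illustrate the share
# arithmetic; these are not Synergy Research Group's actual numbers.
capacity_now = {"hyperscale": 100, "colocation": 50, "on_prem": 90}    # MW
capacity_2029 = {"hyperscale": 290, "colocation": 50, "on_prem": 80}   # MW

def shares(capacity: dict[str, float]) -> dict[str, str]:
    """Convert absolute capacity (MW) into percentage shares of the total."""
    total = sum(capacity.values())
    return {segment: f"{100 * mw / total:.0f}%" for segment, mw in capacity.items()}

print("now :", shares(capacity_now))    # colocation is ~21% of the total
print("2029:", shares(capacity_2029))   # colocation falls to ~12%, despite unchanged megawatts
```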

[Chart: data center capacity forecast]

As we’ve noted previously, hyperscale cloud players are planning to ramp capital spending to build more compute capacity in response to demand generated by technologies like artificial intelligence.

Indeed, a recent forecast from Dell’Oro Group noted that worldwide data center capex is expected to grow at a 24% compound annual growth rate (CAGR) through 2028, with four companies (Amazon, Google, Meta and Microsoft) accounting for half of global data center capex as early as 2026.

Baron Fung, senior research director at Dell’Oro, told Fierce that artificial intelligence (AI) is the primary driver of data center capex. Without it, he said, the segment’s CAGR would fall to about 10% over the forecast period.
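To put those two growth rates side by side, here is a quick compounding sketch. The $100 billion baseline and the five-year window are assumptions chosen purely for illustration; only the 24% and 10% CAGR figures come from the forecast above.

```python
# Illustrative compounding only: the baseline and window are assumptions,
# not Dell'Oro figures; only the 24% and 10% CAGRs come from the forecast.
def project(base: float, cagr: float, years: int) -> float:
    """Compound a starting value at the given annual growth rate."""
    return base * (1 + cagr) ** years

baseline_billions = 100.0  # hypothetical starting capex, in $B
for label, cagr in (("with AI (24% CAGR)", 0.24), ("without AI (10% CAGR)", 0.10)):
    print(f"{label}: ~${project(baseline_billions, cagr, 5):.0f}B after 5 years")

# A 24% CAGR nearly triples spending over five years (~2.9x),
# while a 10% CAGR grows it by roughly 60% (~1.6x).
```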

“The largest capex contributors for AI infrastructure are servers (with accelerators such as GPUs). But AI infrastructure may also include specialized networking, storage, and facilities” like new data centers, Fung said.

John Dinsdale, Synergy’s chief analyst, noted that growth in cloud infrastructure services, software-as-a-service, online gaming, e-commerce and video services is also driving demand for data center capacity. For now, AI is the cherry on top, but eventually it will be incorporated into all of these things, he said.

Could AI at the edge derail everything?

We recently explored what will happen to certain data centers, dubbed AI factories, when AI processing shifts toward inferencing at the edge. Asked whether this expected transition is likely to put a dent in their forecasts, Fung and Dinsdale said no.

“Inferencing at the edge complements rather than cannibalizes large AI training clusters,” Fung explained.

He continued: “Training models will get bigger over time. We see these AI clusters growing from tens of thousands of interconnected GPUs or accelerators, to potentially a million accelerators or GPUs.” And at the edge, there will be more and more devices making queries on the trained models. So, there will still need to be plenty of compute to handle those requests, he said.

Dinsdale added that while it’s true that inferencing places more emphasis on performance and latency than on sheer compute power, the idea that data centers will need to be basically on top of consumers is misleading.

“With inference workloads there is more need to have the data centers closer to customers, so closer to major metros which have a large concentration of corporate locations and consumers. But in that scenario, the ‘edge’ data centers can still be huge facilities - or they can be smaller local zones, points of presence or CDN-type nodes in colocation facilities,” he concluded.