- Power is one of two major bottlenecks data centers face when it comes to supporting AI needs
- Power refers to both power supply to the data center and distribution within it
- Retrofitting is hard, so hyperscalers are designing new data centers for AI power needs from the start
A data center is a data center is a data center – at least from the outside. But Schneider Electric executive Joe Reele and Dell’Oro Group Research Director Lucas Beran told Silverlinings artificial intelligence (AI) is driving big changes on the inside of data centers used for the technology. One particular element that’s changing is the demand for power.
When it comes to data centers, there are two types of power that are typically discussed: power to a data center (frequently referred to as available power or power availability) and power within a data center (i.e. power distribution). As it turns out, AI is fueling greater demand for higher levels of both.
That’s because AI relies on chips like GPUs that consume more power. The greater the number of these chips in a rack, the higher the power level to that rack needs to be. And the greater the number of higher-power racks in a data center, the more power must be run to the facility as a whole.
Racking up the kilowatts
The thing is, AI is set to jack up rack power density a lot. (Beran also flagged cooling needs as another potential bottleneck, but we covered the rise of liquid cooling here, here and here, so won’t rehash that point.)
“Average rack power density is less than 10 kilowatts a rack [today],” Beran explained. “A single H100 [server] from Nvidia is more than 10 kilowatts and you’ll probably want to put multiple of those in a rack.”
Thus, data center providers are facing a leap from perhaps 10-15 kilowatts per rack to between 40 and 100 kilowatts per rack, Beran and Reele said.
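To see why that leap matters, a quick back-of-envelope calculation shows how fast a fixed power budget gets eaten up as rack density climbs. The 5 MW facility budget below is a hypothetical figure for illustration, not one from the article; the per-rack densities are the ones Beran and Reele cite.

```python
# Hypothetical facility with a 5 MW IT power budget.
# Rack densities: ~10 kW (today's average), 40-100 kW (AI racks).
FACILITY_IT_POWER_KW = 5_000

for density_kw in (10, 40, 100):
    racks = FACILITY_IT_POWER_KW // density_kw
    print(f"{density_kw:>3} kW/rack -> {racks} racks supportable")
```

At 100 kW per rack, the same hypothetical building powers a tenth as many racks as it did at 10 kW, which is why power availability, not floor space, becomes the binding constraint.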
Reele, who is VP of Solution Architects at Schneider, noted that retrofitting existing data centers to meet those needs is a major lift.
“To take a normal data center that’s got 8 kilowatts per rack and all of the sudden do the entire data center at 40 kilowatts per rack, that’s significant and very disruptive,” he said. “If I had an 8 kilowatt rack, for example, I might have two power cables going to it. Well now if I have 40 [kilowatts], that’s a lot more power.”
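Reele's point about cabling can be roughed out with Ohm's-law arithmetic. The sketch below assumes 415 V three-phase distribution at unity power factor, a common but assumed configuration that the article does not specify.

```python
import math

def rack_current_amps(power_kw: float, line_voltage: float = 415.0) -> float:
    """Line current for a balanced three-phase load: I = P / (sqrt(3) * V).

    Assumes 415 V line-to-line and unity power factor (illustrative only).
    """
    return power_kw * 1_000 / (math.sqrt(3) * line_voltage)

for kw in (8, 40):
    print(f"{kw} kW rack -> ~{rack_current_amps(kw):.0f} A")
```

Jumping from 8 kW to 40 kW roughly quintuples the current per rack, which is why existing feeders, breakers and power cables generally can't just be reused in a retrofit.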
According to Reele, Schneider Electric – which supplies data center physical infrastructure for power, cooling and more – is seeing hyperscalers increasingly design data centers for AI from the ground up. He said some of these facilities are already online or coming online, while others are still in development. He added the company isn’t seeing the same from colocation providers or enterprise customers yet.
BYO power?
Beran noted there are a limited number of vendors who can supply the higher amperage power cables required for AI racks. But for him, the bigger issue is power availability from utility companies. That is, he’s worried data centers won’t be able to get as much power as they need to distribute in the first place.
“There’s a little bit of denial on the power availability issue,” he said. “It’s known that we’re running into power availability questions…but securing capacity for future data center deployments is becoming increasingly difficult.”
Beran noted the concept of “bring your own power” is emerging in the industry, which essentially refers to data centers creating their own microgrids using a mix of technologies. This idea is still in its infancy, but can include things like battery power in the short term and eventually fuel cells and modular reactors.
He concluded that battery power for data centers is more of a 2024 solution, with fuel cells following and modular reactors perhaps coming down the line around 2030.
Want to discuss AI workloads, automation and data center physical infrastructure challenges with us? Meet us in Sonoma, Calif., from Dec. 6-7 for our Cloud Executive Summit. You won't be sorry.