- As hyperscalers scramble to bring more AI compute capacity online, GPUs are the chip of choice
- Nvidia’s CEO Jensen Huang said that’s why demand for its older Hopper platform remains strong even as the industry awaits Blackwell availability later this year
- Analyst Roy Chua said Hopper sales are also benefiting from Nvidia’s ecosystem lock-in
Beggars can’t be choosers. That was basically what Nvidia CEO Jensen Huang said during the company’s earnings call when asked why demand for its older Hopper chip remained so strong despite the impending availability of its top-shelf Blackwell chip later this year.
“If you just look at the world's cloud service providers, the amount of GPU capacity they have available, it's basically none,” Huang said. He also explained that the GPUs hyperscalers do have are already being used for things like internal data processing and model training, so they’re scrambling for more compute power.
That’s a problem, since Nvidia now expects to ramp up Blackwell production and shipments in Q4 and into early next year, later than originally planned.
“Although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks and a month or so away…And so, everybody is just really in a hurry,” Huang said. “H200 is state-of-the-art. ... If you have a choice between building CPU infrastructure right now for business or Hopper infrastructure for business right now, that decision is relatively clear.”
But AvidThink founder Roy Chua noted that, of course, Huang would say that. "Smart companies will want to sell what's near-term available instead of delaying customer purchases while they wait for what's better," he said.
Asked why companies would choose Hopper over a GPU from AMD, for instance, Chua said it comes down to what’s already installed. Customers that have already bought into Nvidia’s CUDA compute platform don’t want to rework their training pipelines and workflows to bring in another provider if they don’t have to. And since the Blackwell delay appears to be short-term, Chua said it makes sense for companies to grab more Hopper chips to bridge the gap.
Plus, he added, doing so comes with the added benefit that it “strengthens the NVIDIA relationship so they can get priority allocation when Blackwell ships.”
Capex cascade
Huang went on to say that while CPUs have historically dominated data centers, the next $1 trillion worth of infrastructure will look very different, largely because of the need for accelerated computing.
“In the future, every single data center will have GPUs,” he said.
Futurum Group CEO Daniel Newman told us it’s not entirely clear how that next trillion of spending “translates to meaningful deployment of AI in business and for consumers.”
“But the companies that are making these big investments aren’t thinking a few quarters out, they are thinking about business viability over the next decade-plus,” he concluded.