AI

Optics, not GPUs, are Nvidia's big GTC news

  • Nvidia talked up its new Rubin Ultra setup at GTC this week
  • It tried to head off concerns about rack power by showcasing new silicon photonics and co-packaged optics
  • These can help cut power consumption in other parts of the rack, but come with risks, analysts said

We knew it was coming, but boy, is Nvidia’s latest baby bigger than we expected. During its GTC conference this week, Nvidia revealed that its forthcoming Rubin Ultra GPUs (coming in the second half of '27) will require 600 kW of power per rack. While that could in theory present a problem for power-constrained data centers, Nvidia was quick out of the gate with a solution: new photonics that can cut the power consumed today by the transceivers running between GPUs.

"Energy is our most important commodity," Nvidia CEO Jensen Huang said during his keynote address.

Today, he said, each GPU requires something like six transceivers at $1,000 apiece. The obvious question, in a world where AI requires hundreds of thousands or even millions of GPUs, is how to scale up given the added cost and energy consumption of all those transceivers.

Nvidia’s solution? Its new co-packaged Spectrum-X and Quantum-X silicon photonics networking switches (coming in the second half of this year) for Ethernet and InfiniBand platforms, respectively. Huang said these are designed to replace legacy transceivers and could “save tens of megawatts” in a data center. And as he noted, even “six megawatts is 10 Rubin Ultra racks.”
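To put Huang’s numbers in context, here’s a minimal back-of-envelope sketch in Python using only the figures cited above: six $1,000 transceivers per GPU and 600 kW per Rubin Ultra rack. The per-transceiver wattage is a placeholder assumption for illustration only, not a figure Nvidia disclosed.

```python
# Back-of-envelope math using the figures cited in this article.
# ASSUMPTION: watts_per_transceiver is a placeholder for illustration;
# Nvidia did not disclose a per-transceiver power figure.

transceivers_per_gpu = 6          # per Huang's keynote
cost_per_transceiver_usd = 1_000  # per Huang's keynote
rack_power_kw = 600               # Rubin Ultra rack, per Nvidia
watts_per_transceiver = 15        # placeholder assumption

def fleet_transceiver_cost(num_gpus: int) -> float:
    """Total spend on pluggable transceivers for a GPU fleet, in USD."""
    return num_gpus * transceivers_per_gpu * cost_per_transceiver_usd

def fleet_transceiver_power_mw(num_gpus: int) -> float:
    """Total transceiver power draw for a GPU fleet, in megawatts."""
    return num_gpus * transceivers_per_gpu * watts_per_transceiver / 1e6

def racks_recoverable(saved_mw: float) -> float:
    """How many 600 kW Rubin Ultra racks a given power saving could feed."""
    return saved_mw * 1_000 / rack_power_kw

if __name__ == "__main__":
    gpus = 100_000
    print(f"{gpus:,} GPUs -> ${fleet_transceiver_cost(gpus):,.0f} in transceivers")
    print(f"{gpus:,} GPUs -> {fleet_transceiver_power_mw(gpus):.1f} MW of transceiver power")
    print(f"Saving 6 MW powers {racks_recoverable(6):.0f} Rubin Ultra racks")
```

Run with 100,000 GPUs, the sketch pegs transceiver spend at $600 million, and the last line confirms Huang’s arithmetic: 6 MW divided by 600 kW per rack is exactly 10 Rubin Ultra racks.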

Nvidia and its optical play

Without going too far into the weeds, it’s important to know that existing power constraints and the expectation of rising demand from AI have been driving switching vendors (think Cisco, Nokia, Arista, Juniper, HPE and Celestica) to explore new optical interconnect options.

Co-packaged optics (CPO) are one option, but – as analyst firm AvidThink noted in a recent Data Center Networking report – there are still concerns about the tech’s maturity and business models. Dell’Oro Group VP Sameh Boujelbene added the industry has been working to address these roadblocks as well as issues around “serviceability, manufacturability and testability.”

In the meantime, vendors have instead pursued Linear Receive Optics (LRO, more of an evolution than a revolution from existing solutions) and Linear Pluggable Optics (LPO, tech focused on simplicity, which is well suited to short-range applications).

Roy Chua, founder of AvidThink, told Fierce that Nvidia’s announcement was an “important push” into the market, both for the company itself and for co-packaged optics more generally. And it’s true, he added, that reducing the number of transceivers in a data center can cut costs and power consumption.

As for how big a lift that is in practice, Chua noted that Nvidia’s solution is on the switch side only, leaving pluggable optics still in use on the server/NIC side of the equation. Thus, “you don't need to replace the pluggables on the server side, just swap the switches to reduce the power consumption,” he said.

Paul Nicholson, Research VP for Cloud and Data Center Networks at IDC, said customers may buy Nvidia's switches for AI-focused greenfield sites. Elsewhere, it'll come down to whether the economics are in line or the extra compute density is needed, he added.

Boujelbene noted that “data centers will likely adopt a hybrid approach, integrating both pluggable and co-packaged optics.”

“The industry is more power-constrained than ever. This is driving faster innovation on the supply side and bold risk-taking on the demand side,” she concluded. “If CPO delivers the expected power savings over pluggable optics, hyperscalers will likely be eager to deploy it – despite the associated risks.”

What’s your take? Will data centers move aggressively toward co-packaged optics, or will hybrid models dominate? Let us know in a letter to the editor: [email protected].

Updated 3:23 pm ET: This story has been updated to include a comment from IDC.