- Cisco is providing servers, Ethernet networking and professional services, combined with NVIDIA software and GPUs, to drive enterprise AI
- Cisco Validated Designs (CVDs) provide tested configurations that “largely take the guesswork out of deploying infrastructure for AI,” Cisco says
- NVIDIA still loves InfiniBand but wants to hedge its bets, an analyst says
You don’t have to be a supergenius to see that building infrastructure for AI takes a lot of thought.
“AI isn’t just any workload. It’s data-intensive and compute-intensive and if you get it wrong you can waste a lot of your data scientists’ time and cost your enterprise a lot of money,” said Zeus Kerravala, principal analyst at ZK Research.
The industry has been in a rush to disaggregate network and compute components, but that’s made it difficult to assemble all the pieces needed to ensure hardware and software are able to run required workloads, Kerravala said.
Cisco and NVIDIA are working to address those problems. The two companies launched a partnership Tuesday at the Cisco Live conference in Amsterdam to help enterprises easily deploy and manage secure AI infrastructure. The offering combines NVIDIA AI Enterprise software, which supports the building and deployment of advanced AI and generative AI workloads, with Ethernet networking and servers from Cisco running NVIDIA GPUs. ClusterPower, a cloud service provider in Europe, is deploying the technology for data center operations using AI and ML, according to Cisco and NVIDIA’s joint announcement.
Enterprises will be able to deploy the hardware and software in the data center and at edge locations, placing compute and GPU resources close to where data is generated, said Jeremy Foster, senior vice president and general manager of Cisco Compute.
The two vendors are incorporating NVIDIA’s newest Tensor Core GPUs in Cisco M7 generation UCS rack and blade servers, and providing jointly validated reference architectures through Cisco Validated Designs (CVDs), including CVDs for FlexPod and FlashStack for generative AI inference. The CVDs incorporate technology from partners including Pure Storage, NetApp, and Red Hat, and “will largely take the guesswork out of deploying infrastructure for AI,” according to a blog post from Cisco.
“Customers know that they’re using a validated design that has been tested, and that Cisco can also help support that validated design,” Foster said. “We know how all this stuff works and how to put this stuff together.”
But what about InfiniBand?
The partnership with Cisco for Ethernet doesn’t mean that NVIDIA is losing faith in its own InfiniBand networking, said Ron Westfall, research director at Futurum Group. That’s a $10 billion business for NVIDIA, delivering the latency and bandwidth that key customers such as Microsoft require. But NVIDIA is hedging its bets, as hyperscalers including AWS, Azure, Google Cloud and Meta develop their own AI chips, and industry-wide initiatives such as the Ultra Ethernet Consortium focus on addressing AI workloads.
Cisco and NVIDIA face a host of competitors, including, on Cisco’s side, Arista, Juniper, HPE Aruba, Broadcom VMware, Dell and Extreme. NVIDIA, meanwhile, faces competition from AMD’s Instinct MI300 and Intel’s GPU Max series processors, as well as the emerging hyperscaler GPUs, Westfall said.