- Microsoft is rumored to be introducing an AI accelerator for data centers
- The chip would compete with Nvidia GPUs like the H100
- Microsoft is looking to lower the premium for delivering AI in its data centers with this chip
With rumors circulating about a new Microsoft artificial intelligence (AI) chip for data centers coming next month, Silverlinings checked in with analysts from AvidThink, J.Gold Associates and neXt Curve to get their take on what’s happening with the next AI silicon contender.
“What the rumor mill is saying is that it’ll be similar to Nvidia GPUs and that it has been in development since 2019,” Roy Chua at AvidThink told us in an email. “The goal is apparently to reduce cost of AI computing and not be beholden to paying premiums to Nvidia and other external suppliers,” he added.
As is common in the cloud realm, the drive to cut costs while maintaining performance often pushes providers to develop their own silicon.
“By creating their own chips optimized for their infrastructure, they can provide lower cost AI instances than with off the shelf solutions from the big chip vendors,” Jack Gold at J.Gold Associates explained. “It’s the same path that all the hyperscalers take with general cloud instances where workloads are optimized for less heavy workloads where the other chips could be overkill.”
Leonard Lee at neXt Curve agreed that a Microsoft data center AI accelerator would compete directly with Nvidia’s H100 GPUs. “A more important question is how will Microsoft design their rumored accelerator into an optimized and integrated systems stack as Nvidia has,” he said.
“If Microsoft can realize a lower cost with their own silicon and AI supercomputing systems, it will be a boon that will become an eventual necessity due to growing concerns about sustainable AI. Nvidia’s current premium itself is likely not financially sustainable for Microsoft,” he concluded.
Will Microsoft manage to deliver lower-cost AI, at least for its own data centers? Stay tuned; we may well find out next month.