- No doubt about it, AI in the RAN was a hot topic at Mobile World Congress last week
- Intel has its own way of doing AI in the RAN – and it doesn’t require GPUs
- Mobile Experts’ Joe Madden expects most mobile operators will adopt a “light AI-for-RAN” approach
Telecom operators that are not so sure about Nvidia’s push for GPUs in the RAN might be interested in Intel’s version of AI in the RAN.
That’s what Intel is gunning for, anyway. Intel’s Xeon 6 system on a chip (SoC), code-named Granite Rapids, is the company’s way of providing AI in the RAN that makes sense, according to Intel VP and Comms Solutions Group GM Cristina Rodriguez, who spoke with Fierce during Mobile World Congress in Barcelona last week.
There’s a lot of hype around generative AI for a reason, she said. “It’s super powerful,” which is good. “But it’s not good for everything. Sometimes I say it’s like working with a hammer, figuring out what to hit, what to use the hammer for.”
Operators right now don’t appear to need a sledgehammer, that’s for sure. As we’ve heard from Intel before, energy-intensive GPUs from the likes of Nvidia are not necessarily needed.
“We don’t need the super powerful, power-hungry GPUs. We just don’t need it, but we do need inference,” Rodriguez said.
That’s not all. Rodriguez said total cost of ownership (TCO) is top of mind for Intel’s customers and with Xeon 6, they can move to a one-server architecture.
“That’s game changing,” she said. “Imagine the cost reduction that that means for the operators. You can run your entire workload in one server, and on top of that, you still have capabilities there to run AI.”
Other Xeon 6 attributes: It delivers up to 2.4 times the capacity for RAN workloads with a 70% reduction in power consumption. The SoC supports up to 72 cores and is ready to be deployed this year.

“You don't have to wait for something that we don't know what it is,” she said. “You don't have to wait for it. You have it right now. You deploy your network right now with Xeon 6 SoC and you have a future-proof network where you can bring AI. You can bring innovations. You're ready basically for the next several years.”
Game changer, or no?
Essentially, Intel is proposing the Xeon 6 SoC as a virtualized RAN (vRAN) implementation for CPU-based AI inference. Operators that use vRAN can upgrade to this Xeon 6 SoC and implement some AI optimization algorithms to improve RAN performance, said Joe Madden, chief analyst at Mobile Experts.
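To give a sense of scale, the kind of "light" inference Madden describes might look something like the toy sketch below: a tiny feedforward model scoring whether a cell can enter an energy-saving state, run entirely on a CPU. This is purely illustrative; the model, features, and weights are hypothetical, not Intel's or any operator's actual code, and real RAN optimization models would be trained offline on live network data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a 4-feature -> 8-hidden -> 1-output model.
# In practice these would be learned offline; random values here
# are just to make the sketch runnable.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def infer(features: np.ndarray) -> float:
    """One forward pass: a few hundred multiply-adds, trivial for a CPU."""
    h = np.maximum(features @ W1 + b1, 0.0)              # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2 + b2))))   # sigmoid score

# Hypothetical features: PRB utilization, connected users,
# average throughput (Mbps), hour of day.
score = infer(np.array([0.12, 30.0, 5.5, 3.0]))
print(f"energy-saving score: {score:.3f}")
```

Whatever the real model looks like, the point stands: inference at this scale costs microseconds on a general-purpose core, which is why a GPU-class accelerator at every cell site is hard to justify for such workloads.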
Is it a game changer? “I wouldn’t use such a strong label for it, but it is a good improvement in terms of integrating the vRAN-with-AI implementation. No need for a separate AI accelerator or CPU in some cases,” he told Fierce.
Madden acknowledged that the hype surrounding AI in the RAN can be difficult to penetrate. Nvidia and others are promoting the idea that centralizing the RAN and using a GPU-based approach can provide good performance along with an ability to create new revenue through enterprise AI workloads, he noted.
Light vs. heavy approach
While Nvidia’s plan sounds grand, not all telecom operators are crazy about the price tag. Verizon Chief Technology Officer and SVP Santiago “Yago” Tenorio told Fierce last week that GPUs don’t offer a performance boost over virtualized RAN and are expensive to use.
Madden underscored that sentiment. “Anyone that looks into the cost and power required for [Nvidia] Grace Hopper servers realizes that placing these servers in local network locations will get extremely expensive,” he said.
For now, he sees the rest of the industry following a different path – using CPUs for light AI inferences locally at the cell site. “This results in much lower power consumption and cost in the RAN. As a result, all of the major OEMs are moving in this direction and I expect 90% or more of mobile operators to adopt a light AI-for-RAN approach instead of a heavy AI-and-RAN approach,” Madden concluded.