- Open source AI is gaining traction among operators, a new Nvidia survey found
- Bringing things in-house will give telcos more control
- The AI skills gap remains a hurdle for operators
Move over, ChatGPT. Operators are turning away from proprietary solutions and looking to do more artificial intelligence (AI) work in-house with open source models.
According to an Nvidia survey of 450 telecom professionals around the world, the percentage of operators planning to use open source tools jumped from 28% in 2023 to 40% in 2025. Similarly, the percentage of respondents who indicated they will build AI solutions in-house rose from 27% in 2024 to 37% this year.
“They’re really looking to do more of this work themselves,” Nvidia’s Global Head of Business Development for Telco Chris Penrose told Fierce. “They’re seeing the importance of them taking control and ownership of becoming an AI center of excellence, of doing more of the training of their own resources.”
This, of course, is a bit easier said than done.
Penrose noted that the AI skills gap remains the biggest hurdle for operators. Why? Because, as he put it, just because someone is an AI scientist doesn't necessarily mean they're a generative AI or agentic AI scientist. And to attract the right talent, operators need to demonstrate they have the infrastructure that will let top-tier employees do amazing work. See also: GPUs, data center infrastructure, etc.
AvidThink Founder and Principal Roy Chua noted one of the biggest undertakings operators will have when using open source models is vetting the outputs they get during training.
But having skilled talent matters for more than just training AI. Penrose noted that with the rise of agentic AI, operator engineers will need to figure out how to link their in-house models with those offered by partners.
“It’s not going to be one AI, it’s going to be a bunch of AIs,” Penrose explained. “And so, one of the big things telcos are going to need to think about is how do they interface and link these AIs … how do I stitch these things together? When do I invoke each of these to do what type of work? That’s the next thing that they need to be thinking about.”
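The stitching problem Penrose describes is, at its core, a routing question: which model handles which task, and when. A minimal sketch of that idea in Python — all backend names and routing rules here are hypothetical illustrations, not anything Nvidia or the operators have described:

```python
# Illustrative sketch of routing tasks across multiple AI backends.
# The backends and routes below are hypothetical, not from the article.

from dataclasses import dataclass
from typing import Callable


@dataclass
class AIBackend:
    """A named AI model endpoint with a callable handler."""
    name: str
    handler: Callable[[str], str]


def in_house_model(prompt: str) -> str:
    # Stand-in for an operator's own fine-tuned model.
    return f"[in-house] {prompt}"


def partner_model(prompt: str) -> str:
    # Stand-in for a partner-hosted model reached over an API.
    return f"[partner] {prompt}"


# Route by task type: sensitive network tasks stay in-house,
# everything else falls through to a partner model.
ROUTES = {
    "network_ops": AIBackend("in-house-llm", in_house_model),
    "customer_support": AIBackend("partner-llm", partner_model),
}
DEFAULT_ROUTE = "customer_support"


def dispatch(task_type: str, prompt: str) -> str:
    """Pick the backend for this task type and invoke it."""
    backend = ROUTES.get(task_type, ROUTES[DEFAULT_ROUTE])
    return backend.handler(prompt)


print(dispatch("network_ops", "diagnose cell outage"))
```

In practice the routing logic would be far richer (cost, latency, data-residency rules), but the shape of the decision — when to invoke which AI for what type of work — is the one Penrose flags.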
AI-RAN
Getting it right will be critical as operators both look to AI to generate new revenue streams from external-facing services and streamline their own internal operations.
One key area where operators are looking to deploy AI is in the radio access network (RAN). Just a year after the launch of the AI-RAN Alliance at Mobile World Congress 2024, a whopping 66% of operators said they are looking to deploy AI services on the RAN. Another 53% are exploring the use of AI to improve spectral efficiency.
But there are different ways to pair AI and the RAN. Chua noted that there's "AI on the RAN" (aka running AI workloads on the network), "AI for the RAN" (which is where spectral efficiency and capacity improvements feature) and "AI and the RAN" (which is where RAN and enterprise workloads share the same compute resources). And while AI on the RAN is relatively straightforward, the others are a little trickier.
According to Penrose, Nvidia and partners like Fujitsu have already proven it's possible to run the RAN on accelerated compute infrastructure like Nvidia's GPUs and Aerial platform. That work has mostly focused on Layer 1 of the network (aka, the physical layer). Now, efforts are focused on Layer 2, also known as the data link layer, to boost spectral efficiency.
“That’s a huge deal, that’s super powerful,” Penrose said of the ability to make better use of limited spectrum resources.
So, where do things stand today in terms of deployments? Penrose said Nvidia has been public about AI-RAN deployment efforts with T-Mobile and SoftBank, but is also in conversations with other unnamed operators around the world on this front. He added that part of the reason for the slow pace is that some operators are looking to time AI-RAN deployments with their usual investment cycles. The maturity of solutions from preferred partners is also a factor, he said.
“True field deployments are still at very early stages … But I think you’re going to see more live field going out this year and scaling next year,” Penrose concluded.
For what it's worth, Chua is a little skeptical that we'll see anything beyond proof-of-concept trials for AI and the RAN this year. Rest assured, we'll have our eyes peeled at MWC for new network AI use cases in a few weeks!