- Mavenir has developed a GenAI co-pilot for telecom AI network operations with Nvidia and AWS
- Analysts say these systems will start to arrive in network operations in 12-18 months
- T-Mobile is already using AWS for initial operations in this area
Get ready for generative artificial intelligence (GenAI) to make an appearance in operator network management within the next year.
To that end, open radio access network (open RAN) pioneer Mavenir has dived further into the world of telecom GenAI and machine learning (ML), developing co-pilot software in collaboration with AI chip maven Nvidia and cloud provider Amazon Web Services (AWS) for managing network operations.
“The operations co-pilot aims to assist engineering and operations teams within [communication service providers] in analyzing the large volume of diverse data gathered from the network and ensure that personnel can quickly diagnose faults and provide appropriate fixes during failure events or when the network is operating sub-optimally,” Mavenir’s Bejoy Pankajakshan, EVP and chief technology and strategy officer, told Fierce in an email.
“The co-pilot is a part of the NIaaS (Network Intelligence as a Service) framework that has the various building blocks for AI/ML workflow and that can extract value from the diverse sources of network data to identify patterns in historical data, make inferences and predictions, and provide early warnings about potential hardware failure,” he explained, adding that the software can offer insights that help enable preventive maintenance.
Pankajakshan noted that Mavenir hasn’t forgotten its open RAN roots with its latest AI software. “The NIaaS framework with the inbuilt co-pilot applies AI/ML to the data gathered across the Open-RAN front haul, mid haul and back haul interfaces to enable AI/ML insights for network operations and assist Operations and Engineering teams in ensuring that the network is functioning at its optimal performance,” he said.
AvidThink principal analyst Roy Chua told us that Tier-1 operators are already running network operations GenAI pilots. He said that such co-pilots would likely arrive in “12-18 months for the low-hanging fruit” and “in the late 2025/2026 timeframe” for more general production AI network operations software.
“There are mobile operators using GenAI for some aspects of network operations,” Chua said. “One public example is T-Mobile, which happens to be an AWS customer for this use case,” he noted.
As for how operators will use the GenAI software initially, analysts say that AI network operations will start simple. “From an operations standpoint I see field operations (troubleshooting), network operations (troubleshooting, capacity planning, predictive maintenance, optimization),” Chua said. “For field operations, it would be checking manuals or providing immediate context as to what could be wrong in the field.”
“Simply put, in low risk activities and functions with little potential impact on operations,” agreed Next Curve executive analyst Leonard Lee in an email. “Anything that is critical needs to be proven within the context of the operator’s environment,” he said.
“The key challenges are data quality and integration, which are consistently brought up as adoption barriers by operators I have spoken to,” Lee noted.
Any CSP will benefit from the co-pilot software, Mavenir’s CTO claimed, though “those operators with open RAN will derive greater benefits due to the availability of rich and granular data across its various interfaces.” He noted that a pre-release version of the software is available now, with a commercial release “forthcoming.”
The analysts disagreed on whether Mavenir’s co-pilot is firmly tied to Nvidia and AWS. “Their operation co-pilot framework should be cloud portable/agnostic but appears to be founded on Nvidia AI Enterprise making it tied to the hip with the Nvidia stack,” Lee said.
“Not necessarily,” Chua countered. “There are [an] increasing number of GPU [graphical processing unit] options (given the strong demand), certainly Nvidia will lead the pack for now, but Intel, AMD, and other companies are building AI accelerators. Likewise, with workflow efficiencies, quantization and model pruning, or reduction in model size (specialized models) could run fine on CPUs [central processing units].”