Fierce Network TV

AI in Telecom: Balancing Automation, Oversight, and Security

At MWC25, VIAVI’s Takai Eddine Kennouche explored the challenges and opportunities of AI in telecom. While automation can revolutionize network operations, full autonomy remains a distant goal. AI still struggles with reliability, data inconsistencies, and security risks, requiring human oversight to prevent costly errors. According to Kennouche, the industry’s biggest challenge is structuring telecom data for AI-driven automation, followed by the computational power needed to process and refine models effectively.

With some carriers already implementing advanced AI solutions, questions around regulation and security are growing. Kennouche warned that deregulation could increase risks, from cybersecurity threats to AI "hallucinations" disrupting networks. He emphasized the need for a balanced approach—leveraging AI’s power while ensuring human oversight and strict security measures. Want to learn more about AI’s evolving role in telecom? Watch the full interview now.


Steve Saunders:

Welcome back to FNTV at MWC 25. I am Steve Saunders and I'm excited to welcome Takai Eddine Kennouche, Lead Architect for AI in the Office of the CTO at VIAVI, which is a test, measurement, and assurance company. Nice to see you.

Takai Eddine Kennouche:

Likewise, Stephen.

Steve Saunders:

We're hearing a lot of noise about AI and machine learning, of course. Should we just welcome our robot overlords and let them do their own thing autonomously? Or do we still need to have some human oversight of these technologies?

Takai Eddine Kennouche:

Not quite yet, unfortunately. Yeah, we still need human oversight. I think, at least as far as the telecommunications industry goes, there is still a long way to go before we let AI capabilities, or agents, or whatever we call them nowadays, take full rein and control over the end-to-end network. There are a lot of complexities, a lot of uncertainties. There are still issues with reliability and factuality that need to be engineered into reliable solutions. But with engineering comes uncertainty as well. Hence my prediction that we will still need a human in the loop for quite some time to come.

Steve Saunders:

Yeah. What would you say is the number one challenge we need to overcome before we can implement level four and level five autonomy?

Takai Eddine Kennouche:

I think putting some order in the mess that is the data in the telco space. A lot of systems, a lot of heterogeneous data sources, and a lot of data movement would be required to train and use the kinds of models we think might take us up the ladder of automation. So, data is number one and compute is number two. All this data needs to be processed, needs to be cleaned, needs to be used for training these models. These models then need to be tested again. And this is a continuous load. This is not a one-shot development of a software capability. This is something that's going to be going on and on and on in an operational [inaudible 00:02:05]. So you can imagine how this cost could be quite an interesting problem to deal with. Right? Yeah.

Steve Saunders:

So, let me just get this clear. A lot of training needs to happen, and we're talking about using bespoke models, essentially vertically trained for the industry in which they're operating. So, this isn't going to be like ChatGPT, where I ask it a question and it gets it spectacularly wrong sometimes, I hope.

Takai Eddine Kennouche:

Yeah. So, good example. Great example, actually. Imagine taking ChatGPT, putting it into an operator's network, and it spouts that kind of nonsense in a network while controlling your RAN elements left and right. Chaos would ensue, right? So, that's why I'm saying there is a need to train on live telecom networks. There is a need for what we call fine-tuning, right? Capturing data that is context-specific for a task that you're interested in, that has business value for you. And you still need to fine-tune whatever generic off-the-shelf AI capabilities you have on that specific domain. Otherwise, you still have to deal with hallucinations and nonfactual answers.

Steve Saunders:

Oh, I have enough of those already.

Takai Eddine Kennouche:

Oh, yeah. Right?

Steve Saunders:

[inaudible 00:03:15]. Yeah. So I guess I know that there are two carriers, China Mobile and Telefonica Brazil, that have enabled level four autonomy in their core 5G networks. But I'm assuming that would only be possible if you did it in a very-

Takai Eddine Kennouche:

[inaudible 00:03:33].

Steve Saunders:

... vertical way. And there probably is some human oversight in there still, do you think?

Takai Eddine Kennouche:

Yeah, I would imagine so. I think you can do closed-loop automation in a narrow context, where, if a disaster happens, it's not going to be a big one. And there is probably still human oversight: oversight to intervene when there is a suspicion of something not working quite right, and oversight to fine-tune the closed loop, to make sure the models are refreshed, are updated, are reasoning in the right manner about the problem at hand. That's why human oversight is still needed for exactly these kinds of situations.

Steve Saunders:

We have a new administration in the United States, and we also have a lot of influence from big tech companies, all of which adds up to a lot of deregulation. We're heading into a sort of deregulated market. For me, that seems to present some potential risks around security, privacy, and compliance in the AI environment in North America. Do you agree? Are you worried about it?

Takai Eddine Kennouche:

I am. I am worried about it, and it's an interesting... It's hard to predict how all of this will play out, because the technical challenges are the same; they are scientific facts. AI needs data, and bad data means garbage in, garbage out. There is training that needs to be done. We don't have an all-knowing AI yet. These are realities and facts that any engineer building products would know.

Now, deregulation may be a way to incentivize innovation, to let people try things out, and if they break, that's okay. But then there is Europe, on the other hand, where regulation is going in the other direction. The AI Act is an interesting situation, where some people feel there is over-regulation. But overall, I would say there are intrinsic risks in AI. Whether you heavily regulate or deregulate, you need to at least be aware of those. And these are cybersecurity risks, national security risks.

Just imagine with me that you have a generative AI model monitoring and controlling your network operations, and it hallucinates, like some Tesla cars a few years ago: when they claimed full automation, the cars hallucinated by looking at road signs on the side of the road. Somebody modified the pixel distribution on the road signs, and a car turned right instead of left, or something like that. Imagine if that is projected onto network operations and it turns off cells instead of turning them on. These are the kinds of issues that could be manipulated and triggered by malicious users. So, yeah.

Steve Saunders:

Amazing. Thank you so much. Real education, lots of things to think about. Great talking to you.

Takai Eddine Kennouche:

Yeah, absolutely.

Steve Saunders:

Thanks for being on the show.

Takai Eddine Kennouche:

Likewise. Thanks, Stephen.

The editorial staff had no role in this post's creation.