Fierce Network TV

AI, Cloud, and Data Centers: Powering the Future of Telecom

Join us with Ampere and Arrcus as they discuss the evolving role of AI in data centers, cloud environments, and telecom networks. Learn how AI is driving efficiency and innovation through predictive analytics, autonomous operations, and improved workload management. Discover the potential of AI in edge computing and 5G, and how telcos are leveraging AI to transform their infrastructure and offer new services. Don't miss out on this deep dive into AI's future in the tech industry.
 


Alejandro Piñero:

All right. Welcome back everyone to Fierce Network TV here in Barcelona covering MWC '25. The sun is out outside and the sun is out here as well, so the sunglasses are staying on by popular demand. I'm Alejandro Piñero, and I'm joined now by Jeff and Sanjay from Arrcus and Ampere. Gentlemen, thanks for coming over and visiting us today.

Sanjay Kumar:

Thank you.

Alejandro Piñero:

Right. So let's talk about AI and everything that's happening there with data centers and cloud. That's a huge topic. We have a few minutes to cover it, so there's a challenge. Can you first talk about the role of AI in that data center and cloud evolution? And perhaps, Sanjay, why don't we start with you?

Sanjay Kumar:

Sure, happy to. So if you take a look at it, I think AI can be used in a couple of different ways in the data center. One is the availability of predictive analytics, and that can help with autonomous operations; it can help make day-to-day operations much simpler. At the same time, it can also help you with real-time changes to things like, let's say, cooling at the data center. In fact, Google has claimed that it's been able to reduce the energy used for cooling by about 40% in a data center.

Another area is that you can use AI and the analytics for proactive threat management from a security perspective, and be able to identify threats and then mitigate them.

And then last but not least, you can use AI to really do much better workload management and deliver much better service outcomes as well.

Alejandro Piñero:

And Jeff?

Jeff Wittich:

Well, I think those are really good examples. And at Ampere, we've seen a lot of people using AI for those types of applications in cloud and in big data centers. And I think what's really intriguing right now is the fact that those types of AI use cases and benefits aren't just for cloud and for data centers, but we're seeing that now showing up in the edge and then pushing out into telco. And so we're seeing really the ubiquity of AI in all of these environments. And that's what we've been building at Ampere, is some really great, efficient, high-performance hardware to go and run those types of AI applications.

Alejandro Piñero:

Awesome. And Sanjay, I know you have some great examples of how this comes to life, what the impact is on telecoms. So yeah, care to share with us?

Sanjay Kumar:

Yeah. And it's really exciting to be here at MWC to be able to share what's going on. In fact, the work we've been doing is around the convergence of 5G and AI. So if we take a look at it, telcos today have an enormous opportunity to monetize AI. For one, you've got this massive data center build-out, and that needs connectivity. So the ability to provide backbone networks and fiber to connect all these data centers is one massive opportunity for the telcos.

And at the same time, there's also this move towards a more distributed AI. You simply don't have big enough power grids for centralized data centers, plus you also have the move away from training towards more inferencing at the edge. For telcos, this is really a fantastic opportunity to leverage their regional data centers and edge data centers, and to deploy things like GPU as a service. They can use that same infrastructure to deploy edge compute inferencing services and real-time services, and at the same time they can leverage it to drive efficiency for their own capabilities, like, let's say, AI-powered RAN, and then monetize that for AI as well.

So we've done some work with Liberty Global, in fact, that we've just showcased, which brings all of these things together. It's a collaboration that showcased 5G along with AI, deploying edge capabilities for inferencing.

Alejandro Piñero:

Excellent. And Jeff, AI is not necessarily something we'd associate with less power consumption, right? So how can efficiency play a role there?

Jeff Wittich:

Yeah, that's incredibly important. You mentioned the fact that what we're seeing is a move from AI training to AI inference as those models need to run at scale. And that means that the amount of compute that's going to be required for AI inference is going to go up dramatically over the next couple of years. And that AI inference is going to run everywhere. It's going to run on your devices, it's going to run at the edge, it's going to run in telco deployments and clouds and data centers all over the place. And so power is probably the number one constraint to this. It's our biggest challenge today, both the availability of power on the grid and the amount of power generation capability we have.

So in order to be able to take advantage of all these advancements in AI, we need to provide more performance, but at the same or less power than we've been doing with legacy hardware. That's what we've really focused on at Ampere: delivering multiple times more AI performance in the same or a smaller power envelope. That way telcos and others can take advantage of this without needing to overhaul their entire infrastructure, which in many cases isn't even possible; there just isn't enough available power today.

Alejandro Piñero:

Excellent. So just to wrap up in a sentence or two, what's next for AI and its role in evolving those data centers and cloud systems? Jeff?

Jeff Wittich:

Yeah. Well, what we're really excited about at Ampere is that increasing deployment of AI within telco. And that's why I'm really excited about the partnerships that we've announced with people like Canonical and SUSE, SynaXG, Fujitsu, and Supermicro — all of the ecosystem partners that are required to build an easily deployable platform using Ampere CPUs out in a telco environment.

Alejandro Piñero:

Excellent. Sanjay?

Sanjay Kumar:

So at Arrcus, what we provide is a networking fabric that connects all of these distributed AI workloads together, whether they're running at the edge, in the core, in the data centers, or in the public cloud. Essentially, you need to be able to connect all of these so that you can seamlessly access these workloads. And that's what we provide at Arrcus: a completely programmable fabric that runs on any kind of form factor, which drives much greater efficiency for customers and helps them achieve their business objectives.

Alejandro Piñero:

Excellent. Well, listen, gentlemen, thank you so much again for dropping by and taking the challenge of talking about such a complex subject in a short time. Thanks again.

Sanjay Kumar:

Thank you.

Jeff Wittich:

Thanks.

The editorial staff had no role in this post's creation.