
F5 with NVIDIA Accelerates AI Workloads and Application Delivery


Following F5's announcement with NVIDIA, this article has been updated to correct the product name to BIG-IP Next for Kubernetes.


In a landscape where telecommunications providers are continuously adapting to evolving demands, F5 stands out as a committed partner for transformation.

As the industry faces unprecedented challenges—from shifting customer expectations to geopolitical dynamics—F5 is leading the charge in supporting service providers and enterprises alike in harnessing the power of Artificial Intelligence.

We spoke with Ahmed Guetari, VP and GM, Service Provider at F5, to delve into the company’s strategic focus on AI. F5 has consistently enabled enterprises to navigate pivotal shifts, from the rapid digital transformation accelerated by the pandemic to the expansion of 5G networks. Now, the firm is set on leveraging AI to drive operational excellence.

Guetari highlighted three key advantages of F5's innovative solutions:

  1. Seamless AI Integration: Streamlining the incorporation of AI clusters within existing data centers.
     
  2. Enhanced Security: Fortifying multi-tenancy setups to protect sensitive data.
     
  3. Optimized Resource Utilization: Increasing the efficiency of CPU and GPU resources to meet the demands of modern applications.
     

"Currently, CPUs typically operate at just 20% to 30% of their capacity handling networking and security tasks," Guetari noted. "By offloading these functions, we can liberate a significant portion of CPU resources, especially during data-intensive processes like decryption."

In an era where CPU costs are escalating, these optimizations are crucial for maximizing performance, reducing energy consumption, and ensuring that infrastructure investments yield the highest returns.

For further insights into F5's vision for AI in telecommunications, dive into the full conversation below.
 


Steve Saunders:

Hey, Ahmed, I understand that F5 has some big news. What's going on? 

 

Ahmed Guetari:

Indeed, we really have some great news to share with everyone today. Over the last few decades, F5 has helped customers through the major market inflections, and today we have another opportunity to help our customers through the biggest inflection yet, which is AI. 

 

Steve Saunders:

What were the previous inflections that you helped people with? 

 

Ahmed Guetari:

Well, the two recent ones I can think of are digital transformation, where F5 helped customers secure and deliver applications, enabling their businesses through critical changes, and 5G, the most recent one and the one nearest to my heart. We assisted our telco customers in deploying and scaling 5G cloud-native environments. 

 

Steve Saunders:

Interesting. So, you're on a journey with these transitions, these transformations, and obviously AI is perhaps the biggest one yet. What are you doing about it? 

 

Ahmed Guetari:

Well, to simplify with a close example: the AI workload is very similar to the 5G standalone workload, but it operates at a much larger scale in terms of data traffic. Since we successfully addressed that challenge, especially in 5G Kubernetes environments, we are now evolving the solution to meet the demands of AI. 

 

Steve Saunders:

Does that mean you are installed on NVIDIA hardware, alongside their silicon? How does it work? Or is it a software solution? 

 

Ahmed Guetari:

Our solution is software-based. It's called BIG-IP Next for Kubernetes, and it leverages the NVIDIA DPU, or data processing unit, a technology that offloads traffic from the servers and optimizes the data center at large scale. So we are putting our technology, BIG-IP Next for Kubernetes, on the NVIDIA DPU. 

 

Steve Saunders:

What are the specific benefits that customers will see from this approach? 

 

Ahmed Guetari:

Well, great benefits. That's a very good question, Steve. By offloading heavy tasks from the CPU, we give our customers three main values. The first is simplified integration: BIG-IP Next for Kubernetes streamlines the integration of AI clusters into the existing data center, making deployment simpler and smoother. The second is enhanced security and multi-tenancy: most traffic arrives encrypted, and by offloading that decryption we improve the networking and enable multi-tenancy at a very granular level. The last, but not the least, is performance: by offloading all these network functions to the DPU, you free up the CPUs to execute the AI workload and feed the GPUs in the best way. 

 

Steve Saunders:

How much performance benefit is there in percentage terms, for example? 

 

Ahmed Guetari:

If you look at the average data, CPUs consume between 20% and 30% of their capacity just doing this networking and security, and that is not even at scale. So, by offloading these capabilities, we free up at least 20% to 30% of the CPU capacity, and in certain cases, when you start doing decryption, even more. 
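
For a concrete sense of what those percentages mean at cluster scale, here is a minimal back-of-the-envelope sketch in Python. The 20% to 30% range is Guetari's; the 10,000-core cluster size and the 40% decryption-heavy case are hypothetical illustration values.

```python
# Back-of-the-envelope sketch of the CPU capacity reclaimed by DPU offload.
# The 20-30% overhead range is Guetari's figure from the interview; the
# cluster size and the 40% decryption-heavy case are hypothetical.

def reclaimed_cores(total_cores: int, overhead_fraction: float) -> float:
    """Cores freed when networking and security work moves to the DPU."""
    return total_cores * overhead_fraction

cluster_cores = 10_000                 # hypothetical AI cluster size
for overhead in (0.20, 0.30, 0.40):    # quoted range, plus assumed heavy-TLS case
    freed = reclaimed_cores(cluster_cores, overhead)
    print(f"{overhead:.0%} overhead -> {freed:,.0f} cores freed")
```

In this illustrative scenario, the quoted range alone corresponds to 2,000 to 3,000 cores returned to AI work rather than packet processing.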

 

Steve Saunders:

That is a significant economic benefit, given how much these chips now cost. What about energy usage? Because AI consumes a monstrous amount of energy to do its work. Do you help with that as well? 

 

Ahmed Guetari:

Another great point, Steve. You're touching on exactly that, because the DPU technology uses Arm processors. Think about that DPU: you can get up to 200 gigabits per second of networking, security, and load balancing jammed into 160 watts or less. If you needed to do the same on CPUs, you would need a lot more watts. So, we are optimizing energy consumption. 
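
To put that figure in perspective, here is a quick illustrative calculation. The 200 Gbit/s and 160 W numbers are Guetari's; the wattage assumed for an equivalent CPU-based setup is hypothetical, included only to show the shape of the comparison.

```python
# Energy-efficiency sketch for the quoted DPU figure. The 200 Gbit/s and
# 160 W numbers come from the interview; the CPU-based comparison wattage
# is purely an assumption for illustration.

dpu_throughput_gbps = 200   # networking + security + load balancing (quoted)
dpu_power_w = 160           # "160 watts or less" (quoted)
cpu_power_w = 800           # hypothetical power for an equivalent CPU setup

print(f"DPU: {dpu_throughput_gbps / dpu_power_w:.2f} Gbit/s per watt")
print(f"CPU (assumed): {dpu_throughput_gbps / cpu_power_w:.2f} Gbit/s per watt")
# -> 1.25 vs 0.25 Gbit/s per watt: a 5x efficiency gain under these
#    illustrative assumptions.
```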

 

The editorial staff had no role in this post's creation.