Chip designers are looking for ways to make bank off the move to open, cloud-native 5G. One way is through Layer 1 (L1) accelerator silicon, which boosts the performance of virtualized radio access network (V-RAN) systems.
Cloud RAN, V-RAN, and open RAN can run 5G networks on off-the-shelf servers. This is good for operators who, for years, have been tied to the RAN vendor mafia and forced to use specialist RAN hardware and cellular silicon. There is, however, a huge performance gap between networks using specialist RAN devices and the up-and-coming V-RAN and open RAN systems, which generally run on white-box x86 servers.
Silicon vendors like Marvell have therefore developed inline ASIC accelerators to speed up cloud RAN performance. Dell, Nokia and Samsung are all using or trialing Marvell’s 5-nanometer Octeon 10 L1 accelerators, as announced at Mobile World Congress this year.
“We view the first step as the L1,” Joel Brand, senior director of product marketing at Marvell, told Silverlinings. “Before we came in with the inline accelerator, the industry was heading down this path of run it on general-purpose compute ... I think the industry is past that, I think the industry understands that L1 needs accelerating.”
The Marvell Universe
Of course, this benefits Marvell, as well as others that favor network acceleration ASICs, such as Qualcomm. Intel, meanwhile, is the king of the general-purpose x86 chips that have been used in every early V-RAN or open RAN deployment. Interestingly, Intel is now offering vRAN Boost acceleration in its 4th Gen Xeon processors.
Brand noted that Marvell, however, isn’t finished in its stack acceleration campaign. “I think that the next step is the realization that the other layers of the stack need acceleration,” Brand said. Networks need not just an L1 performance boost but a network processor that also accelerates Layer 2 and Layer 3, Brand stated.
Marvell, he claimed, has one of the best and, indeed, one of the only network processors built to accelerate Layer 2 and Layer 3 V-RAN traffic.
“When we want to build a cloud-native [network],” Brand stated, “cloud native doesn’t mean you want everything on general-purpose compute. Cloud native means run application-layer stuff on general-purpose compute, when you don’t know your workload, when you don’t know exactly what is required.”
“When you know what is required, you’re running it on accelerators,” Brand concluded.