Telcos rave about optical AI opportunities, but will they last?

  • Telcos race to expand optical networks for AI, but an analyst warns the opportunity may be short-lived
  • AI workloads are poised to go beyond data centers, with the edge as the next frontier
  • Enterprises may be reluctant to enlist telcos for AI edge tasks

The age of AI is upon us, and telcos want to get in on the action by expanding their optical networks for hyperscalers. Demand for data center interconnect (DCI) is indeed massive, but it could also be a short-lived opportunity, said research firm STL Partners.

This may come as a shocker, as operators like Lumen, Logix, Zayo and others double down on building long-haul fiber routes catering specifically to data centers and their AI needs.

“We’re not disputing that demand exists,” David Martin, senior analyst and telco cloud lead at STL Partners, told Fierce. The question is how long it will last as generative AI continues to advance at a rapid pace.

GenAI is poised to move “more and more towards the edge,” he explained, as will AI inferencing that’s “non-latency critical.” That means large data centers won’t remain the main hotspots for AI activity.

To be clear, Martin isn’t saying the optical networking AI opportunity will disappear anytime soon, nor that telcos should stop investing in it. But they shouldn’t put all their eggs in one basket.

Looking at the next 2-3 years, this opportunity “may be a lot less than people think it’s going to be,” he added, because we’ll be past the initial stage of developing AI models.

“There won’t be the need for these vast concentrations of model building capacity in interconnected data centers in the same way as there is now,” said Martin.

Telcos and the AI edge: An uphill battle

IDC VP Dave McCarthy has similarly noted that success in AI will require going beyond the hyperscale data center, as interconnection to private data centers and edge locations “will be necessary to deliver on AI inference in a performant and secure manner.”

The AI edge opportunity even extends to fiber-to-the-premises (FTTP) deployments. The future could see more latency-sensitive, high-throughput AI workloads running over FTTP networks, according to AvidThink Principal Roy Chua.

But Martin is skeptical that operators will be prepared to run AI workloads anytime soon.

It’s not “an easy evolution from a pure connectivity play to doing something like that,” he said. “They can do the connectivity bit that connects those bits but that’s not what we’re talking about here.”

Not only do telcos need plenty of specialist skills and updated infrastructure to successfully run AI workloads, but there’s also no guarantee they’ll get enough demand from enterprises. Companies may prefer to keep their workloads and data on premises rather than on telecom infrastructure, which has more than its fair share of security risks.

“It could be another version of the hype there was around multi-access edge computing a few years ago [with] telcos sort of thinking this is an area that they could move into profitably without having to invest too much or to upskill too much,” Martin said. “And I think we tend to disagree with that.”

The data that flows through telecom networks also has a fundamentally different structure from edge computing data. The former is characterized by “high data plane processing throughputs,” whereas edge compute workloads don’t move across the network in the same way, he noted.

“It’s a different business and you need different rack configurations and different cooling and all sorts of things,” Martin concluded. “But it’s just not the core telco business.”