The pace of cloud innovation is accelerating, but cloud network architects are still wrestling with a problem that has dogged the communications industry for 50 years — interoperability.
“People are looking for a cloud [infrastructure] Rosetta Stone,” Jon Mischel, Juniper Networks' director of marketing for service provider telco cloud solutions, told Silverlinings in a conversation in December.
Currently, clouds from different service providers don’t readily talk to each other, which makes it hard to run applications and services between them. Without the ability to connect clouds from different providers in a turnkey way, enterprises run the risk of getting locked into a public cloud service from a single supplier. Today, that’s typically Amazon Web Services (AWS), Microsoft Azure or Google Cloud.
The fear of lock-in has pervaded the communications industry for half a century, since the days of the IBM mainframe, when network architects wore white lab coats, and the computers were so heavy that they had their own gravity well.
However, not everyone thinks lock-in is a big problem today.
“There’s an awful lot of fear-mongering going on right now about the new lock-ins,” says Danielle “DR” Royston, telco public cloud evangelist.
And yet there is currently no one-size-fits-all solution for cloud services, which is why an increasing number of companies would prefer to deploy a hybrid or multi-cloud strategy – combining public with private clouds – or specialized cloud services tailored for industry verticals like healthcare or finance.
A fiendish plot?
Diversifying providers is a best practice that transcends the world of cloud to pretty much any customer-supplier relationship in any industry. By enabling competition, it gives suppliers a reason to keep improving their products, while preventing them from price gouging. In the world of communications networks, it also improves reliability by providing redundancy. If one network fails, traffic can travel over the other.
So, are the cloud providers engaged in a fiendish plot to lock gullible enterprises into their services? As with everything else in cloud infrastructure, the answer is complicated, and nothing is as it might seem at first sight.
For one thing, there are no good guys and bad guys in cloud interoperability. Which is a pity. As a journalist, I would find it deeply satisfying to have a couple of mustachio-twirling villains of the “mwah-hah-hah!” variety to write about (Ed. note: surely Larry Ellison?).
However, the reality is that the hyperscalers are simply focused on moving at hyper-speed through an ever-expanding and chaotic market while doing their best for their shareholders and customers.
Take the example of AWS’ relationship with Aviatrix, a private company that is making hay with its Aviatrix Transit Gateway Orchestrator, which helps enterprise companies build secure multi-cloud networks over an AWS cloud core.
AWS promotes Aviatrix's solution on its own web site via the AWS Marketplace. This is hardly the behavior of a wannabe monopolist.
If AWS’ ambitions lay in becoming a cloud despot, the simplest thing for it to do – and standard modus operandi in the communications industry – would be to ignore Aviatrix completely and then quickly rush out its own solution that does the same thing. Or, it could buy Aviatrix, which could still happen, obviously.
About the biggest dis I can think of for the hyperscalers is that they haven’t embraced industry efforts to define interoperability standards for cloud infrastructure. Standards are lock-in’s kryptonite. But that’s not surprising or unreasonable, given that the proprietary status quo is working entirely in their favor, and there isn’t consensus over what those standards should look like or who should oversee creating them.
In the absence of an agreed-upon cloud standards authority, quite a lot of industry organizations have had a go at developing standards themselves. This isn’t always helpful.
Last month, the industry association MEF issued a press release about building secure cloud services titled “MEF Introduces First SASE Standard.” But the MEF isn’t a standards body, so it can’t make standards. Oops.
Also, MEF stands for Metro Ethernet Forum, or used to. The MEF has now changed its official name to “MEF Forum," which makes it the Metro Ethernet Forum Forum, which is, well, fantastic.
As its name suggests, in the first decade of the 21st century, the MEF did good work in the metro Ethernet market. But other than a desire to maintain its relevance and continue to charge membership fees, I don’t see how that qualifies it to start whacking on the piñata of cloud infrastructure standards. (Metro Ethernet Forum Forum did not respond to a request for comment.)
In the 20th century, the responsibility for developing communications standards was clear. The Institute of Electrical and Electronics Engineers (IEEE) handled the in-building standards and still does. The Internet Engineering Task Force (IETF) and International Telecommunications Union (ITU) managed the standards for everything outside the building (a.k.a. the wide area).
But when cloud came along, things got a lot more complicated. Building a cloud network requires a very particular set of skills, which transcend traditional network layers to cover not just networking and virtualization (layer 3 of the OSI stack) but also cloud applications and services (layers 6 and 7, where the apps and the services roam, often grazing on open-source code).
A peaceable kingdom
So, how likely is it that Mischel’s wish for a cloud Rosetta Stone will materialize to convert today’s mismatched cloud chaos into a peaceable kingdom of interoperability?
One way it could happen is if a big network standards body got together with a big open-source industry group and co-developed the necessary standardized APIs to unify network and software worlds. Given the cultural differences between those two demographics, that may be a long shot (think Brogues versus Birkenstocks).
Or a grass-roots effort from the right mix of companies with a compelling de facto standard could take off, forcing the hyperscalers to adopt it.
Or none of these things could happen, and we could continue floundering around in the current cloud miasma.
That would still be better than the two worst-case scenarios, where either one of the hyperscalers — hello AWS — achieves overwhelming market share and ends up as a de facto cloud demigod, or the U.S. government puts on its giant floppy regulatory clown shoes and has a hilariously inept crack at preventing the lock-in problem by legislating what cloud companies can and can’t do.
Over the decades, fear of vendor lock-in has become ingrained in the molecular DNA of all network architects for some very good reasons. Cloud network architects should continue to be especially wary of the risks. Remember: They used to say that no one ever got fired for buying IBM. Until the personal computer came along and they did.
Do you disagree? Send Silverlinings’ editors a letter here. We may publish it. Or not. We'll see how we feel.