
Vultr's Kevin Cochrane discusses AI security, compliance, and global GPU deployment strategy

In this interview between Fierce Network’s Stephen Saunders, MBE, and Kevin Cochrane, Chief Marketing Officer at Vultr, the discussion centers on Vultr's role in the AI and cloud computing market. Cochrane describes Vultr as the world's largest privately held independent cloud compute platform, emphasizing its extensive global data center operations and services, including GPU as a service. Vultr's early adoption of GPU technology aims to support enterprises in deploying AI-enabled applications with a strong emphasis on security and compliance, addressing issues like data residency and proprietary data protection.

The conversation touches on the challenges enterprises face with data security, especially in regulated industries like pharmaceuticals and financial services. Cochrane highlights the importance of trust and proper governance in AI deployment, contrasting it with the traditional tech mantra of "move fast and break things."

Saunders raises concerns about the dominance of hyperscalers like Amazon, Google, and Microsoft, noting the emerging trend of enterprises seeking alternative providers for GPU services due to supply constraints and cost issues. Cochrane points out Vultr's unique position in the market, providing enterprise-grade security and compliance while being accessible to individual developers and small businesses.

The interview also addresses the role of OpenAI in the AI landscape. Cochrane acknowledges OpenAI's influence in catalyzing interest in AI but believes enterprises will prefer custom-built models for their specific needs. He emphasizes Vultr's commitment to an open ecosystem and flexibility in supporting diverse customer requirements, including potential future alternatives to NVIDIA GPUs.

Saunders appreciates Vultr's approach to ensuring security and compliance in AI applications, suggesting a new tagline for Vultr, "It's in the Vultr vault," highlighting the company's focus on data protection. Cochrane expresses interest in this concept and discusses Vultr's survey on AI infrastructure, promising to share insights that could benefit the industry.

The interview concludes with mutual appreciation and Cochrane's commitment to continuing their dialogue and sharing valuable industry insights.


Steve Saunders:

Hey, Kevin. How are you?
 

Kevin Cochrane:

Hi, how are you? Good to speak with you again.
 

Steve Saunders:

Yeah, it's great to talk to Vultr again as well. You guys have really timed the market, haven't you? Because it's all AI and GenAI at the moment, and I know that that is a big part of your focus. Before we dig into other questions, can you just let our audience know what Vultr's role is in the transformative effect that AI is having on our market at the moment?
 

Kevin Cochrane:

Certainly. And once again, wonderful to speak with you. Vultr is the world's largest privately held independent cloud compute platform, operating 32 global data center regions across all six continents. We offer the full range of core cloud compute, bare metal, and GPU as a service, with all of the core supporting platform services that enterprises and developers need, from managed Kubernetes and managed databases to all of your networking and storage, and an entire ecosystem of third-party partners and integrations as well.

Specifically, we here at Vultr were an early pioneer in bringing GPU as a service to market two and a half years ago. And having been in the cloud compute space since 2014, supporting some of the most demanding global workloads for enterprises building cloud-native applications on worldwide Kubernetes infrastructure, we wanted to bring the same principles of cloud-native engineering to AI.

Which means that we wanted to support enterprises to be able to spin up large GPU clusters, both for training and for global production inference all around the world. And while doing so, we kept two things in mind.

Number one, making certain that we can help them deploy AI-enabled applications, spinning up not only GPU resources, but all of the supporting CPU resources to power their next generation application experiences.

And number two, making certain that we had ironclad safety, security, and compliance, so that as people are building and deploying new AI-enabled applications, they can build trust.
 

Steve Saunders:

Is that related to the sovereign cloud trend that we hear a lot about?
 

Kevin Cochrane:

Correct. Absolutely. So in the early days of the AI revolution that we're all now a part of, a lot of well-funded, venture-backed startups, and all of the associated GPU infrastructure to enable them, were stood up primarily here in North America.

Now, there's a couple of challenges with that. Number one is if you are an enterprise and you're looking to build core competitive advantage by taking all of your unique operational data, and all of your unique enterprise knowledge and marshal it to deliver faster, better services and a better experience for your customer using AI or generative AI, you need to make certain that your data is safe, secure, and protected.

And oftentimes you need to make certain that your data is resident in the region in which you're operating, particularly for data that is confidential or very, very sensitive, especially if you're in a highly regulated industry like financial services or healthcare, as two examples.
 

Steve Saunders:

Pharmaceutical must be a big one, mustn't it?
 

Kevin Cochrane:

Pharmaceuticals, exactly. Taking critical patient data, taking critical consumer financial data and actually making it leave geographic boundaries to feed a model that might be in San Francisco or Seattle doesn't work if you're a citizen of the EU, or if you're a citizen of the UK, or a citizen in many other countries around the globe.

So data residency truly matters, and the safety and security of people's private, confidential data matters when you're training these models, as does all of your confidential, proprietary company information. That's one aspect, the data residency.

The other aspect is you also want to make certain that the data that's being used is being used only for your purpose, only for your explicit needs. You want to make certain that the data that you're feeding into the model isn't also training the model for the benefit of others as well.

And this is a particular concern for enterprises around the globe that again, are looking to build core competitive advantage in AI. If they're using their proprietary confidential information to build new AI models, they don't want that information being used to support others at the end of the day, nor should they.

And then the last part is the sovereign aspect of it, which is AI is critical for the economic future of countries around the planet. And for every nation on the globe, they need to be able to maintain their own competitive advantage economically and ensure their own societal welfare by making certain that they have sufficient resources to build and sustain a vibrant AI ecosystem.
 

Steve Saunders:

I think that goes to the question of maturity and stability amongst large enterprises who are looking for AI services and partnerships. Because a lot of the companies that I now see entering this market are sort of Johnny-come-latelies, and are startups. Obviously, the VC community just chases either its tail or the money, whichever is nearest, and that I think is concerning.

Do you have any advice for companies when they're trying to assess what scale of company it's safe to work with when so much is at risk? And also, if you can talk to Vultr's stability, obviously you got in early, I think that would be helpful.
 

Kevin Cochrane:

It's a really important issue that you bring up here. All too often in tech, our attitude is move fast and break things. We prioritize innovation over safety, security, and trust. And in this age of AI, we need to take a different approach. Building trust and building a proper foundation is critical for the long-term success of AI in terms of improving outcomes for societies and for companies rather than engendering new risks that we can't even foresee at this point in time.

So the important thing here is to make certain that all of the principles that we've learned from the past 10 years of building and deploying in the cloud, all of the past 10 years of learning to build and deploy cloud-native applications, and all of the learnings over the past 10 years of implementing new data governance frameworks through the office of the CISO to ensure we're compliant and not misusing proprietary, confidential, and private information, that we keep all of that in mind before we start deploying new AI models and AI-enabled applications at global scale.

It's not about moving fast and breaking things. It's about moving fast and getting it right. What this means is that for companies like Vultr, with a ten-year operating history and a ten-year investment in ensuring that we have all of the compliance certifications, all of the ISO certifications, available in our 32 worldwide data centers, security and compliance truly do matter.

So in this new day and age, we like to support core cloud operations and platform engineering teams that are working with the office of the CISO to put in place governance frameworks around the use of data to train and tune models, to ensure that the data that's being marshaled, who's accessing it, how it's getting processed, where it's being stored, and what it's being used for, all of those questions are defined up front.

So at the end of the day, when you're building that model and looking to deploy it to global production inference, you can answer the basic questions about what data was it trained on and what are you using that data for and how is it benefiting the people who consented to have their data used for that very purpose? It's critical that we get this right.
 

Steve Saunders:

I couldn't agree with you more.
 

Kevin Cochrane:

Yeah. And in earlier epochs, we didn't get it right. We just built web applications, collected a bunch of data, threw it to ad servers, and frankly misused a lot of that data. And over the past several years, since the dawn of GDPR, we've been working to clean that up and fix the sins of the past. If we don't get this right with AI, some of the sins might not be so easy to clean up.
 

Steve Saunders:

No, we're going to stitch them into the DNA, the fabric of not just businesses but societies, and that's high-stakes territory. Who's in charge on the customer side, in the large enterprise? You mentioned the CISO, the chief security officer, a few times. Is it the CISO... I usually think of the CISO as he or she being the one saying, "First do no harm, guys. Don't let the cat out of the bag."

But is it the chief technology officer who is overall in charge of the actual strategy, or is it coming from higher up? Is it actually just being led by the board, or the CEO and managing director? What do you see in your extensive experience with this?
 

Kevin Cochrane:

So it's all of the above. So first and foremost, boards are requiring organizations to outline a vision and strategy for leveraging AI to improve operational efficiency, deliver new services to their employees and to their customers, and to increase their differentiation in the market to drive outsized revenue growth and margin expansion. That is a board-level mandate. As a result, CEOs are tasked with getting their C-suite to align around a strategic plan for the proper use of AI to meet the board's mandate.

Now, the three primary actors that we work with here at Vultr, the ones we think are the key to long-term AI success, are the CIO, the CTO, and the CISO, the Chief Information Security Officer. The CTO is the person who's got to lead the charge in terms of clarifying the vision, putting in place the strategy, and starting to build and invest in the operational plans to build the next set of AI-enabled applications.

Their partner is the CIO who's making certain that all of the core supporting infrastructure to enable and operationalize those new applications at global scale is put in place. They're the ones that are making certain that the financial outcomes that the business needs to achieve are actually going to be realized.

And the third person that a lot of people don't think about, to your point, is the person that is there to do no harm. It's the CISO. At the end of the day, this is so critical to get right. We can't just move fast and break things with security as an afterthought. Security's always an afterthought, and it can't be this time. The CISO needs to have a seat at the table and needs to be upfront in thinking through the ramifications of the proper use of data and the proper rollout of AI models that can be trusted and have a good governance framework, so we can ensure that they have minimal bias, that they are explainable, and that we can verify the predictions that they're making.
 

Steve Saunders:

One of the questions which I have is about the dynamic in the market right now between the hyperscaler incumbents, let's call them, even though they haven't been incumbent for very long, the three giant hyperscalers, if you're in China, you can add Huawei to that list as well, but they really are dominant forces. That's not very good for the industry actually, and for the customer base, particularly when it starts to look like one of them might become monopolistic. We read about, we hear about private cloud operators, independent cloud operators. Is that a big trend now and why should people be looking at them?
 

Kevin Cochrane:

It's a huge trend. So as you mentioned, in the world of cloud compute, you had, largely speaking, a bifurcation of the market. You had the big three hyperscalers, Amazon, Azure, Google, that were fit for purpose for large businesses that were putting their workloads in the cloud.

Then you had a second group of people that were developer clouds, targeting small to medium businesses or individual people looking to host their WordPress website. And there was really nothing much in between, other than one very unique company, which was Vultr, which was purpose-built to support the security, compliance, and demanding workloads of the enterprise, but also had the same appeal to individual developers and small to medium businesses.

But the challenge for anyone appealing across that broad spectrum, all the way from individual developers up to the largest enterprises and government institutions on the planet, was that in the enterprise, the safety and security factor of going with a big, known name like an Amazon, Google, or an AWS could not be beat.

No one ever got fired for buying IBM, as the old saying goes, right? So what's happened now in the GPU market is that because the hyperscalers have been slow to provision NVIDIA's latest GPUs, and to provision them at global scale at a price point that can actually work for normal businesses, enterprises have started to turn to independents as an alternative to get access to the GPU inventory they need to run their in-house R&D experiments, where they're testing out and training new models.

This is where you saw a lot of prominence for companies like us here at Vultr, as well as others in the GPU as a service space. Now, the key here is that enterprises are correct to start thinking that there may be alternative providers that can fit their multi-cloud strategy to support some of their enterprise workloads, meeting their performance requirements, meeting their security and compliance requirements, but doing so at a lower cost.

But what they also need to remember is that compliance and security really, truly do matter. And so as enterprises are moving beyond that testing phase and running POCs just to explore the art of the possible, as they're starting to think through the actual production deployment and operationalization of new AI pipelines, the CIO, the CTO, and the CISO are thinking twice about newcomer GPU as a service providers that don't have the ISO certifications, don't have the ten-year operating history, and frankly don't have the global reach that they need in order to support their team.

So we here at Vultr, having had a ten-year operating history supporting the most demanding client workloads, bring all that expertise to the GPU as a service space to help organizations not just do internal R&D training of models as part of an experiment, but actually reach mass production scale globally for new initiatives.
 

Steve Saunders:

What's your take on OpenAI? Because obviously it's being talked about a lot. Do you think it's a good thing, a good influence on the industry, or not so good? And does it just take over everything? Because that wouldn't be good.
 

Kevin Cochrane:

So I have multiple thoughts on that. Obviously, OpenAI was critical in waking everyone up to the fact that, after years of talking about AI, we're now at a point where AI can be used in the enterprise for a variety of both internal and external use cases. So it catalyzed a moment in time where people said, "My goodness, let's think about this differently. What if this were now possible? What could we potentially do?"

And that was really critical for that moment to happen because now we're at the dawn of a new ten-year cycle where basically people are rethinking their entire customer experience, their entire application stack. It's a magnificent time where everything is now possible again. So kudos to OpenAI for making that happen.

But to your second point: is OpenAI going to eat the world? Is every single application, both deployed internally and externally, going to be leveraging OpenAI? And the answer is, I just don't believe that's going to happen. We just ran a worldwide survey of 1,000 C-level decision-makers in the AI infrastructure space. And at the end of the day, the vast majority of the AI they're going to deploy is going to be custom-built models, or just leveraging commercial off-the-shelf or open-source models that are lighter weight and more fit for purpose.

At the end of the day, organizations have proprietary confidential information that they need to keep safe and protected, and they're not going to want to basically make that data available to a centralized service where that data might be used in other ways.
 

Steve Saunders:

What's your tagline for your company? Do you have a legend?
 

Kevin Cochrane:

Tagline for the company? A legend?
 

Steve Saunders:

Yeah, like a catchphrase.
 

Kevin Cochrane:

A catchphrase? Any GPU, any scale, any location.
 

Steve Saunders:

It should be "It's in the vault, the Vultr vault." Because that's what I really take away from our conversations: you're going to get this right, but first you're not going to get it wrong. And I think that's incredibly important. I really do. I give you a lot of kudos for that. But yours is fine. Yours is okay as well.
 

Kevin Cochrane:

No, I love that concept because if you think about a vault, it's all of your treasured assets that you want to keep safe, your data, it needs to be safe, it needs to be protected, it's a store of value, and it's all in the vault. I love that concept. So I might have to give you credit because that might become our new tagline.
 

Steve Saunders:

Well, it goes on a T-shirt, it'll fit on a button. But actually, I really think that that's key, one of the things... By the way, I'd love to see that survey you did; if you decide to make it public, please share it with us. That is absolutely what people really need to know right now: what are people planning? That's vital information for our audience. And I give you a lot of credit for putting that together.

But I think people are worried inside large enterprise organizations, also public sector and vertical industry, of course. They're worried that AI is going to have a life of its own. They're worried that they don't know where the data is going to go and whether it's going to reappear on the other side of an application or in somebody else's application and all of that stuff.

They're probably right to be worried because, of course, at the moment, we don't hear about the problems. We don't hear about... Enterprise companies aren't going to publicly disclose that they had a huge problem with AI. The biggest problem we heard about recently was an IBM AI agent that McDonald's used, and it got the orders wrong. That's at another end of the AI spectrum, but it's very unusual to hear about things like that.

So I think anything which you can do to provide people with surety, with intelligence that this is going to be a safe journey is really critical.

I have one other question which I have to ask you. The GPU shortage, shortfall, you got in really early, so you have not suffered from that as much as others, but at the same time, there is still a shortage. Do you see the potential for using silicon from other companies apart from NVIDIA in your applications, in your services?
 

Kevin Cochrane:

Well, I think the most important thing here, as with everything in tech, the consumer and the customer is best served when you have a vibrant open ecosystem. And here at Vultr, we love to support a vibrant, open ecosystem and give customers freedom, choice, and flexibility.

On the core cloud compute side, this is why we're strategic partners with both Intel and AMD, and this is why we also invest so heavily in our partner ecosystem and our marketplace, to unlock an entire world of containerized AI models and containerized business services that people can just mix and match and easily marshal and deploy to accelerate new AI workloads or cloud-native workloads.

So today, people are requesting NVIDIA's top-of-the-line GPUs. That's where the market is. But if there is a point in time where customer demand is looking for alternative silicon, our goal is to meet the needs of our customers and to give them the greatest diversity and the best options available to help them meet their needs.
 

Steve Saunders:

And that's in your DNA, I know that.
 

Kevin Cochrane:

It's in our DNA. Correct.
 

Steve Saunders:

This has been great, Kevin. I always enjoy talking to you. It's refreshing and I always learn a lot. Thank you so much. And I hope you'll come back and join us again. And again, please send us the survey, because we'd love to publish anything that you are happy for us to share. That's gold dust.
 

Kevin Cochrane:

Well, wonderful. I'll be happy to share that with you, and always enjoy speaking with you, and I always get so much out of it, including now I have a new tagline, which I love.
 

Steve Saunders:

Yeah, it's in the vault. It's in the Vultr vault. I love it. But I expect to see that in wide usage. It works really well. But I appreciate you, brother. Thank you so much, and congratulations on everything which you're doing there. You're having a great success. It's inspiring. Thank you.
 

Kevin Cochrane:

Thank you so much. Take care.
 

Steve Saunders:

Bye.

The editorial staff had no role in this post's creation.