Enterprises are rapidly adopting generative and agentic AI applications to increase business efficiency and open up new opportunities. These models and applications are inherently distributed, exchanging massive amounts of sensitive data across data centers, branch offices, edge devices and the cloud. While this places new bandwidth and latency demands on networks, it also exposes organizations to security risks beyond what they are accustomed to.
AI security typically focuses on making models and applications more resilient, with the goal of maintaining a tight security posture in the data centers and clouds where they reside. But AI models and applications are consumed at edge and branch locations, where they handle significant amounts of data. Given AI’s distributed nature, enterprises must also focus on securing this data outside the data center.
The question is, how?
AI’s Security Risks
The increasing reliance on AI applications exposes enterprises to a variety of new risks.
For instance, in the course of asking a generative AI application for help writing a report, an employee might disclose sensitive company or customer information. That data can become part of the model’s training data, where it could be disclosed to other parties. In this scenario, the ability to see, understand and control what users can and cannot do with AI applications in the enterprise becomes critical.
As another example, consider a clothing retailer’s virtual dressing room, powered by AI and computer vision. A customer tries on an outfit and then enters an immersive experience where they can see what they look like in different color combinations or with different accessories.
This experience requires an application at the retail store interacting and exchanging data with a large language model (LLM) hosted in a data center or in the cloud. And much of this data is personalized, because the LLM relies on information gathered from previous interactions with this customer to make tailored recommendations.
If a threat actor gains access to that application and LLM by exploiting vulnerabilities at the store, they could use it to exfiltrate all customer data from the LLM. This type of cyberattack is difficult to detect and prevent, especially if it lies dormant at the edge for a period of time before launching.
That’s an example of the security risks associated with an approved enterprise AI application. In addition, end users are increasingly turning to AI for help with their jobs — with or without employer approval. This trend, known as shadow AI, poses additional security challenges.
Additional AI security risks include:
- Command-and-control attacks: The endpoint and edge devices that access AI applications are potential entry points for attacks. Once these devices are breached, threat actors can use them to gain deeper access, poison the model, steal its data or launch further attacks.
- Data poisoning: AI acts primarily on data, which is why data is front and center in so many AI security discussions. When an attacker compromises and manipulates the data used to train or tune an LLM, the model can produce false outputs that negatively influence business outcomes, or worse, the attack can lead to model or data exfiltration.
How to Mitigate AI Security Risks
To effectively combat these threats, enterprises need a multi-layered approach that includes proactive threat intelligence, strict access controls, network segmentation and robust policy enforcement. By implementing these strategies, organizations can protect their sensitive data and prevent unauthorized access to the AI applications and models driving innovation.
Threat Detection and Response
As the use cases for AI continually evolve and expand, so too does its attack surface. By adopting proactive threat detection and response technologies backed by extensive threat research, enterprises can stay up to date on all known risks and vulnerabilities — and how to quickly defend against them.
Zero-Trust Framework
When enterprises deploy their own AI models, certain users need access to create and modify the model, while others only need access to consume it. It is imperative that users get the right level of access, but getting those levels right is not easy.
AI skills are in high demand, so most enterprises cast a wide geographic net in hiring for them. As a result, AI engineering and development teams typically aren’t concentrated in one location, and some members work remotely. So, providing secure creation and modification access isn’t as simple as saying, “allow everyone access if they are physically in this office.”
Zero Trust Network Access (ZTNA) does not provide blanket access to all applications. Instead, it verifies the credentials of each user, evaluates the risk they pose based on location, device and other factors, and then determines what level of access to grant.
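To make that decision flow concrete, here is a minimal sketch in Python, assuming a simple risk score built from a few contextual signals. The signal names, weights and thresholds are illustrative assumptions, not VeloCloud SD-Access logic; a real ZTNA broker evaluates many more factors.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    credentials_valid: bool   # result of identity verification (e.g., SSO/MFA)
    device_managed: bool      # company-issued, posture-checked device?
    location_trusted: bool    # known office egress vs. unknown network

def decide_access(req: AccessRequest) -> str:
    """Return an access decision for a single AI application request."""
    # Zero trust: no verified identity means no access, regardless of network.
    if not req.credentials_valid:
        return "deny"

    # Accumulate risk from contextual signals; weights are arbitrary here.
    risk = 0
    if not req.device_managed:
        risk += 2
    if not req.location_trusted:
        risk += 1

    # Map risk to the narrowest access level that still lets work happen.
    if risk == 0:
        return "full"            # e.g., create and modify the model
    if risk <= 2:
        return "consume-only"    # e.g., query the model, no admin functions
    return "deny"

# Example: a verified engineer on an unmanaged device from an unknown network.
print(decide_access(AccessRequest("dev-42", True, False, False)))  # -> "deny"
```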
Network Segmentation
Most network traffic in distributed AI environments is north-south: interactions between applications in branch, remote or campus locations and models that reside in a data center or the cloud.
Segmenting the network reduces the attack surface. If a cyberthreat makes landfall on a guest segment, for example, it cannot reach an AI segment, which contains the lateral movement and privilege escalation that can ultimately lead to data loss.
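As a simplified illustration of that containment, the sketch below models segment-to-segment policy as a default-deny allow-list. The segment names and flows are hypothetical, and in practice this enforcement lives in the network fabric rather than in application code.

```python
# Hypothetical segment-to-segment allow-list: any flow not listed is denied.
# There is deliberately no ("guest", "branch-ai-apps") entry, so a threat that
# lands on the guest segment cannot move laterally into the AI segment.
ALLOWED_FLOWS = {
    ("branch-ai-apps", "datacenter-llm"),   # north-south: app to model
    ("branch-users", "internet"),
    ("guest", "internet"),
}

def is_flow_allowed(src_segment: str, dst_segment: str) -> bool:
    """Default-deny check between two network segments."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_flow_allowed("branch-ai-apps", "datacenter-llm"))  # True
print(is_flow_allowed("guest", "branch-ai-apps"))           # False: lateral move blocked
```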
Policy Management and Enforcement
In the case of shadow AI, IT needs an effective way to enforce acceptable use policies without sacrificing employee productivity or infrastructure security. But that alone is not enough. IT also needs an easier way to manage these policies and respond to the new AI services routinely entering the enterprise. That makes it essential to configure policies centrally and enforce them closer to the users.
Securing AI with VeloCloud SASE
VeloCloud SASE, secured by Symantec, enables enterprises to securely reap the benefits of AI and other emerging, distributed technologies. Its three components combine to offer a better user experience and enhanced threat and data protection while reducing complexity and risk:
- VeloCloud SD-WAN: provides the reliability, visibility and control enterprises need to tackle the network challenges that AI applications pose.
- VeloCloud SD-Access: a path-optimized remote access solution that connects users to only the applications they need, using zero-trust network access.
- Symantec SSE for VeloCloud: provides security enforcement for generative AI traffic on an optimal path between users and these applications.
Let’s look at some of VeloCloud SASE’s most important features:
Threat Intelligence
VeloCloud SASE relies on Symantec’s global intelligence network, which gathers and analyzes over 1 billion threat signals daily from all major entry points, including endpoints, email and the internet. Using AI and ML, Symantec threat researchers focus on emerging threats or zero-day attacks that arise in a rapidly changing AI landscape.
For example, let’s say there’s a malicious website that can launch a command-and-control attack for data exfiltration. Symantec SSE can detect and block any activity associated with this site, according to policy based on Symantec threat research analysis. The solution uses URL filtering to help enterprises adhere to acceptable use policies, and it also offers optional browser isolation to create an air gap between high-risk sites and users.
The solution also helps enterprises rein in shadow AI by using cloud access security broker (CASB) and data loss prevention (DLP) capabilities to provide the visibility and control needed to prevent data exfiltration.
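As a rough sketch of how that kind of inspection works in principle, the snippet below flags outbound requests to unsanctioned AI services and crude sensitive-data patterns. The domain catalog and regular expressions are placeholders, not Symantec CASB or DLP signatures.

```python
import re

# Hypothetical placeholders: a CASB-style catalog of known AI services, the
# subset the enterprise has sanctioned, and crude sensitive-data patterns.
KNOWN_AI_SERVICES = {"approved-ai.example.com", "random-chatbot.example.net"}
SANCTIONED_AI_SERVICES = {"approved-ai.example.com"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-like pattern
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # credit-card-like digit run
]

def inspect_outbound(domain: str, body: str) -> str:
    """Classify an outbound request as allow, block-shadow-ai or block-dlp."""
    if domain in KNOWN_AI_SERVICES and domain not in SANCTIONED_AI_SERVICES:
        return "block-shadow-ai"    # unsanctioned AI tool: shadow AI
    if any(p.search(body) for p in SENSITIVE_PATTERNS):
        return "block-dlp"          # sensitive data leaving the enterprise
    return "allow"

print(inspect_outbound("random-chatbot.example.net", "summarize this report"))  # block-shadow-ai
print(inspect_outbound("approved-ai.example.com", "SSN 123-45-6789"))           # block-dlp
```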
Enhanced Firewall
VeloCloud SD-WAN Enhanced Firewall Service (EFS) natively integrates with VeloCloud SD-WAN to enable edge or branch security for AI applications without deploying additional services. Network administrators can configure policies on the centralized VeloCloud Orchestrator and deploy them on the VeloCloud appliances at each branch location.
EFS protects AI application usage at the branch with a comprehensive set of capabilities, including URL filtering, malicious IP filtering, intrusion detection and intrusion prevention. Doing so eliminates the burden of having to send all traffic to firewalls in the data center, offering enhanced protection for branch-to-branch, branch-to-hub and branch-to-internet traffic patterns.
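To illustrate the configure-once, enforce-at-every-branch pattern, here is a hedged sketch of what a policy object and its fan-out to branch appliances might look like. The schema, field names and appliance identifiers are invented for illustration and are not the VeloCloud Orchestrator API.

```python
import json

# Hypothetical policy document authored once on a central orchestrator.
BRANCH_AI_POLICY = {
    "name": "ai-app-branch-protection",
    "url_filtering": {"block_categories": ["malware", "command-and-control", "unsanctioned-ai"]},
    "ip_filtering": {"block_feeds": ["malicious-ip-threat-feed"]},
    "ids_ips": {"mode": "prevent", "ruleset": "latest"},
    "applies_to_segments": ["branch-ai-apps"],
}

def push_policy(policy: dict, branch_appliances: list) -> None:
    """Stand-in for fanning one centrally defined policy out to every branch."""
    payload = json.dumps(policy, indent=2)
    for appliance in branch_appliances:
        # A real orchestrator would transmit the payload over its management
        # channel to each appliance; here we only show the fan-out pattern.
        print(f"deploying '{policy['name']}' ({len(payload)} bytes) to {appliance}")

push_policy(BRANCH_AI_POLICY, ["branch-nyc-edge1", "branch-sfo-edge1"])
```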
VeloCloud SASE offers distributed enforcement at the branch and on an optimal path in the cloud to align with the traffic patterns and performance needs of distributed AI.
Zero Trust Network Access
By applying the principles of zero trust security to network access, VeloCloud SD-Access offers path-optimized connectivity for remote users, extending the security of VeloCloud SD-WAN to any location. For example, if someone uses a managed endpoint device such as a company-issued laptop to access an AI application, then their access will be governed by a different set of policies compared to a user on an unmanaged device such as their personal smartphone — because they pose different levels of risk.
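Building on the earlier ZTNA sketch, the snippet below shows how device posture alone might select a policy set, with unknown postures falling back to the most restrictive treatment. The posture labels and policy names are hypothetical.

```python
# Hypothetical mapping from device posture to the policy set applied to a session.
POLICY_BY_POSTURE = {
    "managed":   {"ai_apps": "full-access", "inspection": "standard"},
    "unmanaged": {"ai_apps": "browser-isolated", "inspection": "strict-dlp"},
}

def policies_for(device_posture: str) -> dict:
    """Unknown postures fall back to the most restrictive treatment."""
    return POLICY_BY_POSTURE.get(device_posture, POLICY_BY_POSTURE["unmanaged"])

print(policies_for("managed"))    # company-issued laptop
print(policies_for("unmanaged"))  # personal smartphone
```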
A Unified Solution
AI introduces new network security challenges, from data exfiltration and unauthorized access to compliance risks. VeloCloud SASE, secured by Symantec, addresses these concerns by unifying threat intelligence, zero-trust security, network segmentation and policy enforcement in an automated, software-defined platform. It prevents data loss, enforces compliance and ensures the integrity of AI applications and models, so enterprises can confidently adopt this transformative technology.