The vulnerability disclosed in the ServiceNow AI platform earlier this year reflects a broader shift taking place in enterprise cyber risk. There was no evidence of exploitation before it was fixed, but the incident is a warning for cybersecurity professionals.
Weaknesses in the platform's agentic AI capabilities could have allowed user impersonation and workflow manipulation, showing how modern security threats are evolving beyond traditional data breaches.
Cyber Security Lead Writer, Pluralsight.
For businesses operating across a supply chain, the risks posed by ungoverned AI agents multiply. Without proper oversight, autonomous agents can create disruptions that cascade across many organizations.
As agentic AI adoption increases and becomes embedded in enterprise software, cybersecurity is no longer just about protecting data; it is about controlling systems that can act on behalf of the organization. Organizations must move beyond a cybersecurity model focused solely on stopping breaches, and instead consider how to maintain operational control when automated systems operate beyond their intended scope.
The changing cybersecurity model
For most of the past two decades, the cybersecurity model has been built around a clear perimeter. Cyber teams typically focused on preventing compromise at individual endpoints, where obvious, identifiable failures could be isolated and contained. The rise of agentic AI has changed that focus.
With AI embedded in core business platforms, the concern is no longer limited to hallucinated or inaccurate output. The next big change is the shift from 'AI content risk' to 'AI action risk'. When AI agents interact across identities, APIs, platforms and workflows, they introduce new risk factors, and unlike static data breaches, these can spread across multiple systems before anyone notices.
The key question is what AI agents are authorized to do: how they initiate workflows, perform tasks and operate within delegated permissions. If an agent is misconfigured, exploited or given too many privileges, the consequences can escalate quickly, because these systems make decisions automatically across multiple workflows at the same time.
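The principle of constraining what an agent is authorized to do can be illustrated with a minimal sketch. This is a hypothetical deny-by-default action gate, not taken from any specific platform; the agent names and actions are invented for illustration.

```python
# Hypothetical sketch: a deny-by-default gate that checks every agent action
# against an explicit allowlist before execution. Agent IDs and action names
# are illustrative, not drawn from any real platform.

ALLOWED_ACTIONS = {
    "ticket-triage-agent": {"read_ticket", "add_comment"},
    "hr-onboarding-agent": {"create_account", "assign_training"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Return True only if this agent is explicitly permitted this action."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

def execute(agent_id: str, action: str) -> str:
    # Unknown agents and unlisted actions are refused outright.
    if not authorize(agent_id, action):
        raise PermissionError(f"{agent_id} is not authorized for {action}")
    return f"executed {action}"
```

The design choice that matters is the default: an agent that is not on the list, or an action that is not enumerated, is refused rather than allowed, so a misconfigured or newly deployed agent fails closed.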
The question is no longer just “have we been breached?” but “are our systems still doing what we authorized them to do?” Those are different problems, and they require different controls.
Maintaining operational control
In test cases, researchers have shown that unauthenticated external attackers who know only a target's email address can embed malicious instructions in data fields that are later processed by highly privileged AI agents. Left unchecked, organizations can expect unauthorized workflow execution, expanding cross-platform access and the rapid spread of errors or malicious actions.
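The attack path above works because instruction-like text in an ordinary data field ends up in a privileged agent's prompt. One mitigation is to treat all externally supplied field values as untrusted and quarantine suspicious content before the agent sees it. The sketch below is a deliberately crude heuristic filter, assumed for illustration only; a real control would layer it with provenance tracking and output filtering.

```python
# Hypothetical sketch: quarantine instruction-like content in untrusted record
# fields before a privileged agent processes the record. The patterns are crude
# illustrative heuristics, not a complete prompt-injection defense.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def quarantine_suspicious_fields(record: dict) -> dict:
    """Replace instruction-like field values with a placeholder for human review."""
    cleaned = {}
    for key, value in record.items():
        if isinstance(value, str) and any(p.search(value) for p in INJECTION_PATTERNS):
            cleaned[key] = "[QUARANTINED: possible prompt injection]"
        else:
            cleaned[key] = value
    return cleaned
```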
In fact, a common security flaw has far more serious consequences when it sits inside a platform whose reach extends across entire workflows: its blast radius is much larger.
A reported security flaw that allows impersonation and arbitrary actions within an agent's privileges is exactly the type of failure mode leaders should worry about in AI-enabled workflow systems. That is why knowing how to maintain operational control when automated systems behave unexpectedly matters.
For cybersecurity teams, this means treating agentic AI as a change to the organization's control environment. Organizations should review permissions, testing methods, monitoring and rollback procedures for every AI use case. Strict identity management, least-privilege access design and strong entitlement management are essential.
This requires a change in how organizations manage risk. Rather than focusing solely on vendor assessment, leaders should prioritize integration governance, starting with the small number of platforms that can trigger tangible business actions. That also means controlling the seams: mapping key integrations, data flows and customizations, while monitoring for unusual behavior and enforcing limits on administrative and service account privileges.
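Monitoring for unusual agent behavior can be made concrete with a small sketch. The class below is hypothetical, assuming a per-agent baseline of normally observed actions plus a simple per-minute rate limit; real deployments would feed richer telemetry into a SIEM.

```python
# Hypothetical sketch: flag unusual agent behavior by comparing each action
# against a per-agent baseline of known-normal actions, and by rate-limiting
# actions within a sliding 60-second window. Names are illustrative.
from collections import defaultdict, deque
import time

class AgentActivityMonitor:
    def __init__(self, baseline: dict, max_actions_per_minute: int = 30):
        self.baseline = baseline            # agent_id -> set of normal actions
        self.max_rate = max_actions_per_minute
        self.history = defaultdict(deque)   # agent_id -> recent timestamps

    def record(self, agent_id: str, action: str, now: float = None) -> list:
        """Return a list of alert strings raised by this action (empty if normal)."""
        now = time.time() if now is None else now
        alerts = []
        if action not in self.baseline.get(agent_id, set()):
            alerts.append(f"novel action '{action}' by {agent_id}")
        window = self.history[agent_id]
        window.append(now)
        # Drop timestamps older than the 60-second window.
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) > self.max_rate:
            alerts.append(f"rate limit exceeded by {agent_id}")
        return alerts
```

Both signals matter: a novel action catches an agent operating outside its intended scope, while the rate check catches a runaway or exploited agent doing familiar things too fast.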
Executive oversight of AI-enabled workflows will become increasingly important as the link between cyber and AI strengthens. Set clear escalation expectations that include rapid disclosure, clean rollbacks and vetted channels for vendor escalation. The ability to recover quickly is a core security capability for AI-driven systems.
The cyber skills gap
Cybersecurity was identified as one of the top skills gaps in our Technology Skills report, and 95% of IT and business professionals say they don't get enough support to build skills. Clearly, organizations must invest in the capabilities needed to manage AI-enabled systems effectively.
If AI agents are being added to an existing product, cybersecurity should be high on the agenda from the planning phase. That includes ensuring AI agents are closely monitored for the rights they hold, and that risks are mapped in case something goes wrong. There is also a need to invest in the technical capability to design, monitor and rapidly contain AI-driven automation.
But this requires skilled professionals whose knowledge of the latest AI cyber threats is up to date. Currently, the knowledge gap in many organizations makes it difficult for security professionals to defend against AI-driven threats, let alone know what to do when something goes wrong. Organizations that get this right will build a wealth of new learning on how AI security and privacy work together.
Equally important is practice. Measuring readiness through sandbox exercises ensures that decision-making authority is clear and recovery times are widely understood. These exercises should also involve senior leadership, legal and forensics teams, so they are ready to handle threats and communicate quickly with vendors.
What leadership should prioritize
As organizations accelerate the adoption of AI agents, leaders need to redefine risk. That means treating unauthorized actions, workflow exploitation and operational disruption as crisis scenarios worthy of the same rigor of rehearsal applied to ransomware or major outages. It's a responsibility that goes beyond having a cybersecurity team.
The questions every leadership team should already be able to answer are: Who can act on our behalf? Where is the kill switch? What is our first-hour containment move? Organizations that have practiced those responses, across cyber, legal, comms and executive teams, are the ones that keep critical systems running when something goes wrong.