The launch of NIST’s AI Agent Standards Initiative marks an important moment in the development of enterprise AI. For the first time, one of the world’s most influential standards organizations is officially acknowledging what security teams have been seeing on the ground for some time now.
Director of Cybersecurity Strategy at Salt Security.
Standardization is more than helpful; at this stage, it is essential.
AI agents operate on what can be described as an Agentic Action Layer: an interface where models connect to APIs to retrieve data, trigger workflows and interact with other systems. This is where thinking turns into execution. And in business environments, doing means making API calls.
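To make the idea concrete, here is a minimal sketch of an Agentic Action Layer: a model-chosen action is dispatched to the API it maps to. All names here (`run_agent_step`, `crm_update_record`, the tool registry) are hypothetical illustrations, not a real framework; a production tool would make an authenticated HTTP call rather than return a stub.

```python
def crm_update_record(record_id: str, fields: dict) -> dict:
    """Stand-in for a real CRM API call (e.g. an HTTP PATCH)."""
    return {"status": 200, "record_id": record_id, "updated": fields}

# The action layer: a registry mapping tool names to API-backed functions.
TOOLS = {"crm.update_record": crm_update_record}

def run_agent_step(action: dict) -> dict:
    """Dispatch a model-chosen action to the API it maps to."""
    tool = TOOLS[action["tool"]]
    return tool(**action["args"])

# The model "decides"; the action layer turns the decision into an API call.
result = run_agent_step({
    "tool": "crm.update_record",
    "args": {"record_id": "acct-42", "fields": {"tier": "gold"}},
})
```

The point of the sketch is that every tool an agent can reach is, underneath, an API endpoint with real side effects.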
Why standardization matters now
Historically, cybersecurity has evolved in tandem with architectural shifts. Endpoint security grew out of the rise of the personal computer. Network security grew with enterprise connectivity. Cloud security became critical as workloads moved to SaaS and IaaS environments.
Today, AI agents and API-first architectures represent the same kind of inflection point. APIs now power most digital interactions and underpin AI-driven workflows. Yet many organizations still can’t confidently answer basic questions about their API exposure, shadow APIs or runtime protection.
The NIST initiative reflects a recognition that AI agents present a unique risk profile. Unlike passive systems, agents can reason, act and operate at machine speed. They do more than access data; they can change settings, move funds, update records and trigger downstream automation.
Without standards around ownership, logging, governance and secure integration, the result is chaotic and fragmented: a landscape full of blind spots that leads to serious data breaches.
Common foundations will help vendors align terminology, controls and testing methods. More importantly, they will help CISOs frame security as a structural problem.
What organizations should do now
Importantly, standards alone will not close the gap. Businesses adopting agentic AI must act in parallel.
First, they should gain full visibility of their API fabric. Our research consistently shows that organizations underestimate their API inventories, leaving undocumented or “shadow” APIs exposed. If an AI agent can call an API, that API must be discovered, catalogued and controlled.
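One practical starting point for this kind of visibility is to diff endpoints observed in live traffic against the documented inventory (for example, an OpenAPI spec). The endpoint lists below are illustrative assumptions, not data from any real system:

```python
# Endpoints the organization has documented in its API inventory.
documented = {"/v1/orders", "/v1/customers", "/v1/invoices"}

# Endpoints actually seen in gateway or traffic logs.
observed_in_traffic = {"/v1/orders", "/v1/customers",
                       "/v1/export-all", "/internal/debug"}

# "Shadow" APIs: live and callable, but undocumented and ungoverned.
shadow_apis = observed_in_traffic - documented

# "Zombie" APIs: documented but apparently unused; candidates for retirement.
zombie_apis = documented - observed_in_traffic

print(sorted(shadow_apis))  # endpoints an agent could call unseen
```

Even this naive set difference surfaces the core problem: an agent will happily call `/internal/debug` if nothing stops it, whether or not the security team knows the endpoint exists.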
Second, identity and provenance must be the cornerstone for non-human actors. Without clear machine identity and ownership, legitimate agent behavior is indistinguishable from credential abuse.
In a world where 96% of successful attacks involve abusing legitimate access, giving an autonomous system broad read/write permissions without a robust least-privilege design is a structural risk.
Third, governance must go beyond static policy. Agents generate high-volume machine-to-machine traffic that conventional endpoint and network tools cannot interpret at the business-logic layer. Organizations need behavioral monitoring that understands the sequence of API calls, data sensitivity and intent, not just packets and ports.
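Sequence-aware monitoring can be illustrated with a toy rule: flag adjacent API call pairs that violate expected business logic, such as a bulk data export immediately following a permission escalation. The rule and call log are illustrative assumptions; a real system would learn baselines rather than hard-code pairs:

```python
# A toy detection rule: call sequences that make no sense for a
# well-behaved agent, expressed as (earlier_call, later_call) pairs.
SUSPICIOUS_PAIRS = {("permissions.escalate", "data.bulk_export")}

def flag_sequences(calls: list[str]) -> list[tuple[str, str]]:
    """Return adjacent call pairs that match a suspicious pattern."""
    pairs = zip(calls, calls[1:])
    return [p for p in pairs if p in SUSPICIOUS_PAIRS]

# Each call here is individually authorized; only the sequence is odd.
log = ["tickets.read", "permissions.escalate", "data.bulk_export"]
alerts = flag_sequences(log)
```

The point is that every call in the log may be individually legitimate; the risk only appears when calls are read as a sequence, which packet- and port-level tools never do.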
Finally, secure design should be part of the agent development lifecycle. Shipping “autonomy” without consistent logging, runtime verification and policy enforcement is not innovation. It is exposure.
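Building logging and policy enforcement into the lifecycle means every agent action passes a policy gate and leaves an audit record, whether it is allowed or blocked. The policy contents and the `refund` action below are hypothetical examples:

```python
import json
import time

AUDIT_LOG = []
POLICY = {"max_refund_usd": 100}  # illustrative runtime policy

def execute_action(action: str, amount_usd: float) -> str:
    """Gate an agent action against policy; log the attempt either way."""
    allowed = not (action == "refund" and amount_usd > POLICY["max_refund_usd"])
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "action": action,
        "amount_usd": amount_usd,
        "allowed": allowed,
    }))
    return "executed" if allowed else "blocked"

assert execute_action("refund", 50) == "executed"
assert execute_action("refund", 5000) == "blocked"  # policy gate holds
assert len(AUDIT_LOG) == 2                          # every attempt is logged
```

Note that the blocked attempt is logged too: the audit trail records what the agent tried to do, not just what it succeeded in doing.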
Has the horse already bolted?
It is fair to ask whether the standard comes too late. AI agents are already being deployed in customer support, software development, IT operations and personal productivity tools. In some cases, as we’ve seen with early agent platforms, enthusiasm has outstripped infrastructure fundamentals.
But this is not a lost cause. The window for proactive governance is still open.
Unlike previous technology waves, organizations now understand the cost of retrofitting security after the fact. Cloud misconfigurations and supply chain compromises have provided hard lessons. The difference with agentic AI is speed. Autonomy multiplies risk: when you remove a human from the loop, you remove a manual gatekeeper.
NIST’s initiative should therefore not be seen as a cleanup effort, but as a call to formalize controls before agent sprawl gets out of hand.
A structural shift
More broadly, the AI Agent Standards Initiative reinforces a profound truth: APIs are no longer back-end plumbing. They are the operating system of the modern enterprise. AI agents amplify this reality by turning every API into a potential action point.
If endpoints, networks and cloud infrastructure define the first three pillars of cybersecurity, AI-driven APIs define the fourth. Standardization is the first step toward acknowledging that fact. Execution must follow.
For organizations, the message is clear. You can’t govern what you can’t see. You can’t safely scale AI without securing the API paths that power it. The time to align innovation with enforceable standards, ownership controls and runtime protection is now, not after the first agent-driven breach makes headlines.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the tech industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you would like to contribute, find out more here.



