Your AI Security Assessment Is Really an IAM Maturity Test
Published April 3, 2026
I stumbled upon an AI security questionnaire recently. It was being used to evaluate AI vendors and internal AI development projects at an enterprise. Sixteen categories. Risk assessment, access controls, credential management, data governance, privacy, model security, traceability, compute infrastructure. The whole thing was framed as purpose-built for the AI era.
I got about halfway through and stopped. Went back to the beginning. Read it again more carefully, because I was having one of those moments where you realize you have seen this movie before, just with different actors.
These were IAM maturity questions that someone had rewritten with the word "AI" in front of them.
The questions I recognized
The assessment asked how AI credentials are stored, rotated, and protected from misuse. That is secrets management. I have been asking organizations that same question about service accounts and API keys for years. The credential holder changes. The discipline does not. You vault it, rotate it, monitor for leakage. Whether it belongs to a person, a pipeline, or an agent is irrelevant to the control.
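To make that concrete, here is a minimal rotation sketch against HashiCorp Vault's KV v2 engine via the hvac client. The Vault address, the secret path, and the key-issuance stub are all illustrative stand-ins for whatever your stack actually uses; the point is that the vault-rotate-monitor loop looks identical whether the credential belongs to a service account or an agent.

```python
# Minimal credential rotation sketch. Assumes HashiCorp Vault with the
# KV v2 secrets engine; the path and issuance logic are illustrative.
import os
import secrets

import hvac

client = hvac.Client(
    url="https://vault.example.com:8200",  # illustrative Vault address
    token=os.environ["VAULT_TOKEN"],
)

def issue_new_key() -> str:
    # Stand-in for your provider's real key-issuance API.
    return secrets.token_urlsafe(32)

def rotate_credential(path: str) -> None:
    """Write a fresh credential; KV v2 versions the old one for rollback."""
    client.secrets.kv.v2.create_or_update_secret(
        path=path,
        secret={"api_key": issue_new_key()},
    )
    # Emit an audit event about the rotation, never the secret itself.
    print(f"rotated credential at {path}")

rotate_credential("ai-agents/research-assistant")
```

Nothing in that loop cares who holds the key. That is the whole point.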
It asked about RBAC and time-limited access for AI systems. That is just-in-time privileged access. Should an AI agent have standing admin permissions? Same question as whether a DevOps engineer should. Same answer too. Most organizations have not implemented it for their human users, let alone for machines.
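One way to see how little changes: time-bound access for a machine looks the same as time-bound access for a person. Here is a sketch using AWS STS, where the session simply expires instead of persisting. The role ARN and session name are made up, and in practice an approval workflow would gate the call.

```python
# Just-in-time access sketch: a short-lived session instead of standing
# permissions. Role ARN and session name are illustrative.
import boto3

sts = boto3.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/prod-deploy",  # illustrative
    RoleSessionName="agent-1234-change-5678",  # tie the session to a ticket
    DurationSeconds=900,  # 15 minutes, then the credentials are dead
)

creds = response["Credentials"]
# Expiration is enforced by the platform, not by cleanup jobs.
print(f'session expires at {creds["Expiration"]}')
```

Whether the session name encodes an engineer's change ticket or an agent's task ID is a naming convention, not a new control.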
End-to-end traceability from request to inference to output. Session management and audit logging. On most consulting engagements I have done, one of the first things I check is whether an organization can trace what a service account did in production. Most cannot. Adding AI agents to the mix does not fix that gap. It widens it.
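The mechanics of closing that gap are not exotic. A sketch of the pattern: stamp one correlation ID on every hop and log structured events against it. The event and field names here are my own invention; any structured logging pipeline can carry them.

```python
# Traceability sketch: one request ID threaded through request, inference,
# and output. Event and field names are illustrative.
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-audit")

def audit(event: str, request_id: str, **fields) -> None:
    log.info(json.dumps({"event": event, "request_id": request_id, **fields}))

request_id = str(uuid.uuid4())
audit("request.received", request_id, principal="svc-research-agent", action="summarize")
audit("inference.started", request_id, model="summarizer-v3")  # illustrative model name
audit("output.returned", request_id, output_hash="sha256:9f2c...")  # log a hash, not content
```

If you cannot produce this chain for a service account today, adding an agent will not make it appear.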
Data ownership, consent, restrictions on how customer data gets used. Privacy and consent management. Whether a human analyst or a model is processing the data, the governance obligation is identical.
I went through all sixteen categories. Roughly two-thirds mapped to capabilities that already exist in established IAM programs. Credential protection, access control, lifecycle management, monitoring, data governance. None of it was invented for AI. AI just made it show up on a questionnaire with a different label.
What the vendors are saying
If you have been anywhere near LinkedIn lately, you have seen it. Every identity vendor came out of RSAC this year with an agentic AI announcement. The Cloud Security Alliance published three separate surveys in the span of a couple of months. The numbers are alarming and real: 78% of organizations have no formal policies for creating or removing AI identities. 68% cannot distinguish between human and AI agent activity, even though 73% expect agents to become vital within a year. Only 23% have a formal strategy for agent identity management. And 92% do not trust their legacy IAM solutions to handle the risks.
That last number is doing a lot of heavy lifting in vendor pitch decks right now. But read it without the sales context. 92% of organizations do not trust their own IAM foundations. Not their AI-specific tooling. Their foundations. The stuff that has been there for years.
That tracks with what I see on engagements. Organizations struggling with AI identity are not hitting some novel access pattern the industry has never encountered. They are hitting the same walls they have been hitting with non-human identities since they moved to the cloud. Service accounts with static credentials that have not been rotated in years. No inventory of what machine identities exist. Privileged access that is standing by default instead of time-bound. Lifecycle processes that only work when a human manually creates and removes the identity.
Adding AI agents on top of that compounds existing risk at a speed and scale most organizations are not ready for. Agents spin up and down faster than humans. They create credentials programmatically. They chain together across systems. If your secrets management and lifecycle automation and monitoring are not mature enough to handle the non-human identities you already have, agents will make it worse, fast.
78% have no formal policies for AI identities. In my experience, most organizations do not have great policies for service accounts either. The AI governance gap is sitting on top of a non-human identity governance gap that was already there. AI did not create the problem. It inherited it.
What is genuinely new
I want to be fair about this. Some of what was in that questionnaire has no IAM equivalent, and pretending otherwise would be dishonest.
Model governance. Versioning AI models, tracking training data provenance, documenting configuration dependencies. IAM governs access to systems. Model governance is about governing what the system does with the access it has. Different problem.
Prompt security. Injection attacks against model inputs. Version control on prompt templates. Preventing an agent from being manipulated through carefully crafted instructions. There is no PAM or IGA control that touches this, though there is one structural pattern worth sketching, below.
Output verification. Human-in-the-loop review for catching hallucinations and bias. Not an identity control. It is a quality and trust mechanism that sits somewhere between security, product management, and data science. It ended up on an AI security assessment because someone has to own it, and nobody knows who.
Memory management. Preventing models from retaining sensitive context across sessions or getting poisoned through accumulated inputs. No precedent for this anywhere in the identity playbook.
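On the prompt security point, the closest thing to a control I can sketch is structural: keep trusted instructions and untrusted content in separate channels rather than concatenating them into one string. This is a rough sketch in the common chat-messages shape; the system prompt is illustrative, and role separation reduces, but does not eliminate, injection risk.

```python
# Prompt separation sketch: untrusted input stays in its own message,
# never appended to the instruction channel. The prompt text and message
# shape are illustrative; this mitigates, not solves, injection.
SYSTEM_PROMPT = "You are a support assistant. Answer only from the provided ticket."

def build_messages(untrusted_ticket_text: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        # Untrusted content goes in the user channel as data, where it has
        # no claim to override the system instructions.
        {"role": "user", "content": untrusted_ticket_text},
    ]
```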
These are real, they matter, and they probably do need new frameworks and tooling. But they were about a third of that assessment. The other two-thirds was IAM work that should have been done already.
Where this actually starts
There is a lot of energy right now around agent identity products and agentic governance frameworks. I get it. The problem feels new, so the solution must be new.
But every time I look at an organization worried about AI identity, the same foundational questions come up first. Do you have a complete inventory of your non-human identities? Can you rotate credentials within hours during an incident? Is privileged access to production time-bound with approval, or permanent by default? Do you have lifecycle processes that work for identities nobody manually created? Can you trace a non-human action back to a responsible owner?
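If you want a rough read on where you stand, the inventory and rotation questions are scriptable. Here is a sketch against AWS IAM that flags active access keys past a rotation threshold. IAM users commonly back service accounts, and the 90-day cutoff is my assumption, not an industry standard.

```python
# Credential-age scan sketch. Assumes AWS; the 90-day threshold is an
# assumed policy, not a mandate.
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            age = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                print(f'{user["UserName"]}: {key["AccessKeyId"]} is {age} days old')
```

Run something like this and count the findings. That number says more about your AI readiness than the questionnaire does.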
Most organizations I work with cannot answer yes to all of those. Some cannot answer yes to any. And these are organizations with real security budgets and established identity teams. The AI security questionnaire I saw took those foundational capabilities for granted. It did not ask whether you have secrets management. It asked how your AI credentials are vaulted and rotated. It assumed the basics were already in place.
The organizations that are going to handle agentic identity well are the ones that already manage non-human identities well. That maturity transfers directly. The ones that skip ahead to agent-specific tooling without getting the foundation right will end up where the industry always ends up: good tools sitting on top of a governance vacuum, producing audit findings instead of security outcomes.
AI made the urgency real. But most of the work is not new. It is the stuff that has been sitting on the roadmap, getting deprioritized because there was never a forcing function. That forcing function just showed up.
AXIS is a free IAM maturity assessment covering nine domains including non-human identity governance. It benchmarks against industry peers and produces board-ready reports. No login required.
Sources
- Cloud Security Alliance / Oasis Security, "The State of Non-Human Identity and AI Security," January 2026
- Cloud Security Alliance / Aembit, "Identity and Access Gaps in the Age of Autonomous AI," March 2026
- Cloud Security Alliance / Strata Identity, "Securing Autonomous AI Agents," February 2026