New Tools, Old Rules: Closing the AI Governance and Control Gap - Part 1

Most organizations approaching AI governance are doing what they have always done when new technology emerges: extending familiar rules, controls, and approval processes, and hoping they still work. This series starts from a less comfortable premise. AI does not simply stress existing governance models; it exposes the assumptions on which they were built.
AI is either transformational or it is not. If it is as transformational as leaders like Sam Altman and Jensen Huang would have us believe, why would anyone expect the successive waves of governance, controls, and security frameworks built for the mainframe, midrange, client/server, virtualization, and cloud eras to be sufficient to contain it?
As generative and agentic systems move from tools to actors, controls designed for predictable human behavior begin to produce confidence without control. Over the next four posts, I will argue that the biggest AI governance failures are not caused by reckless adoption or missing policies, but by the quiet mismatch between how we think governance works and how AI systems actually behave in production.
Part 1: There Is No User to Blame Anymore
Why Identity-Centric Governance Fails AI
For decades, enterprise governance and security have been built around a simple, powerful idea: if you can identify who did something, you can control risk. Identity became the anchor point for accountability. Access controls, audit trails, least privilege, and segregation of duties all flowed from that assumption. If something went wrong, the question was straightforward. Who accessed the system? Who approved the action? Who violated the policy?
AI breaks that model almost immediately.
This is not because identity and access controls suddenly stop working. They still function exactly as designed. The problem is more subtle and more dangerous. AI introduces systems that act continuously, indirectly, and often on behalf of many people at once. When outcomes result from inference rather than intent, identity no longer explains behavior meaningfully. Governance structures remain in place, but accountability quietly erodes.
Most organizations do not notice this erosion at first.
The logs are still there.
The access reviews still happen.
Service accounts are properly scoped and documented.
Everything looks familiar.
That familiarity is the trap.
Identity Was Never About Access. It Was About Accountability.
Identity and access management has always been more than a technical control. It is a governance mechanism. Identity matters not simply because it grants or denies access, but because it assigns responsibility. Identity allows organizations to answer fundamental questions: who is accountable, who can be held responsible, and who must explain an outcome.
That model assumes three things. First, that actors are discrete. Second, that actions are intentional. Third, that behavior can be reasonably predicted based on role and permission. AI violates all three assumptions.
When an AI system generates a recommendation, transforms data, or takes an action, whose intent does that reflect? The developer who trained the model? The user who provided the prompt? The business unit that integrated it into a workflow? The answer is often “all of the above,” which in governance terms is another way of saying “no one.”
Organizations try to force AI back into the identity model by assigning service accounts, labeling systems as users, and capturing logs. This preserves the appearance of control, but it does not restore accountability. When something goes wrong, attribution becomes delayed, contested, or meaningless. Governance becomes an exercise in reconstructing activity rather than in understanding responsibility.
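To make that collapse concrete, consider a minimal sketch in Python. Every name in it is invented for illustration (the svc-ai-agent account, the AuditEvent structure, the agent itself); the point is only the shape of the record an identity-centric trail can preserve.

```python
# A minimal, hypothetical sketch of the attribution gap. Nothing here
# refers to a real platform; all names are invented for illustration.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AuditEvent:
    actor: str       # the identity the log can attribute
    action: str
    resource: str
    timestamp: str


def handle_agent_action(action: str, resource: str, triggered_by: str) -> AuditEvent:
    """The agent serves many users, but every action executes under one
    service account. The triggering user and intent never reach the trail."""
    return AuditEvent(
        actor="svc-ai-agent",  # the only identity the log preserves
        action=action,
        resource=resource,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


# Three users, three intents, one attributed actor.
for user in ["analyst_a", "marketer_b", "exec_c"]:
    event = handle_agent_action("read", "crm/contacts", triggered_by=user)
    print(event.actor)  # always "svc-ai-agent"; whose intent it served is gone
```

Nothing in that record is wrong, and nothing in it answers the governance question.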
Least Privilege Was Designed for Predictable Work
Least privilege is one of the most widely accepted principles in security. It assumes that tasks can be defined in advance and that permissions can be tightly scoped to support them. This works well when systems behave deterministically and roles are stable.
AI systems do not behave that way.
By design, AI explores, infers, and adapts. It is often impossible to predict in advance what data sources a system will need to access or what operations it will perform to achieve a goal. Organizations quickly encounter a choice. Either they restrict permissions so tightly that the system becomes useless, or they broaden access so the system can function.
Most choose the latter, then document the former.
The result is a quiet normalization of over-permissioning. Least privilege remains a stated goal, but not an operational reality. Reviews become formalities. Exceptions accumulate. Over time, the organization stops treating least privilege as a constraint and starts treating it as a compliance artifact.
This is not a failure of discipline. It is a mismatch between a control model and a system that was never designed to operate within it.
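A minimal sketch makes the divergence visible. The permission strings below are invented, not the syntax of any real IAM system; they stand in for whatever scoping mechanism an organization actually uses.

```python
# Hypothetical permission grants, invented for illustration.

# What least privilege assumes: a task defined in advance,
# with permissions scoped tightly to support it.
task_scoped = {
    "read:sales/q3_report",
    "write:drafts/summary",
}

# What an exploring, inferring agent is granted so it can actually
# function: wildcards broad enough to cover paths nobody predicted.
operationally_granted = {
    "read:sales/*",
    "read:crm/*",
    "read:finance/*",
    "write:drafts/*",
    "invoke:external_search",
}

# The documented policy and the effective policy quietly diverge.
# (The wildcards subsume the narrow grants, so the "excess" here is
# effectively the entire operational policy.)
excess = operationally_granted - task_scoped
print(f"{len(excess)} grants exist only so the system can function")
```

The gap between the two sets is not a misconfiguration to be cleaned up. It is the operating condition.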
The Comfort of Logs and the Illusion of Control
One of the most dangerous aspects of identity-centric governance in AI environments is the false sense of control it creates. Logs are captured. Access is reviewed. Policies are enforced. From a distance, everything appears orderly.
But logs answer the wrong questions.
They tell you what happened, not whether it should have happened. They tell you which account was used, not who is accountable for the outcome. They provide visibility without authority and transparency without containment.
This creates a particularly risky dynamic for leadership. When controls are in place and reports are generated, it feels irresponsible to question whether governance is sufficient. The organization assumes that if something were truly wrong, the controls would surface it. In reality, AI operates in the gaps between assumptions. By the time a problem becomes visible, the organization is already reacting rather than governing.
The control still exists. The constraint does not.
Why This Fails Quietly
The failure of identity-centric governance in AI contexts does not announce itself. There is no obvious breach. No single moment of violation. Instead, accountability slowly becomes diffused. Decisions are made without clear ownership. Outcomes emerge without clear intent.
When leaders finally ask who is responsible, the answer is complex, technical, and unsatisfying. At that point, governance has already failed in its most basic function.
This is why many early AI incidents feel so challenging to address. Not because controls were absent, but because they were never designed to explain or constrain the behavior that occurred.
What This Means for Governance
The conclusion is not that identity and access controls are obsolete. They still matter. But they can no longer carry the full weight of accountability on their own. AI forces a shift from identity-centric governance to outcome-centric governance.
That shift is uncomfortable because it requires leadership decisions rather than technical configurations. It requires organizations to decide who owns AI outcomes, not just who operates systems. It requires accepting that some failures will not be traceable to a single actor and designing governance accordingly.
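What might an outcome-centric check look like in practice? The sketch below is deliberately simplified, and every name in it is invented (the Outcome record, the constraints, the owner field). It is a shape, not an implementation: the governing question shifts from which identity acted to whether the outcome is acceptable and who owns it.

```python
# A hypothetical sketch of an outcome-centric check. All names are
# invented; real constraints would come from policy, not code.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    data_classes_touched: set
    customer_facing: bool
    owner: str  # a named accountable human or function, decided up front


def review(outcome: Outcome) -> list:
    """Evaluate the outcome against declared constraints, independent of
    which account, model, or prompt produced it."""
    violations = []
    if "regulated_pii" in outcome.data_classes_touched:
        violations.append("touched regulated PII outside an approved workflow")
    if outcome.customer_facing and not outcome.owner:
        violations.append("customer-facing outcome has no accountable owner")
    return violations


result = Outcome(
    description="agent re-priced 1,400 renewal quotes",
    data_classes_touched={"pricing", "regulated_pii"},
    customer_facing=True,
    owner="VP, Revenue Operations",
)
for violation in review(result):
    print("violation:", violation)
```

The code is trivial. The hard part is the owner field, which encodes a leadership decision rather than a technical configuration.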
In the next part of this series, we will look at what happens when AI collapses not just identity assumptions, but entire control structures. When a single system plays the role of requester, approver, executor, and reviewer, segregation of duties becomes a diagram rather than a defense. And when governance focuses on approval instead of behavior, organizations mistake comfort for control.
Approval is not governance.