March 13, 2026
5 Min Read

New Tools, Old Rules: Closing the AI Governance and Control Gap - Part 2

Richard Bird
Chief Security Officer

Part 2 of my four-part series on how legacy governance, compliance, and control frameworks can’t contain AI features, services, or agents builds from one simple truth.

Approval is not governance.

Your Best-Controlled AI Is Still a Toxic Super-User

Why Approval Is Not Governance

In the first part of this series, we looked at how AI breaks one of the most fundamental assumptions of enterprise governance: that accountability can be anchored to identity. When systems act continuously, indirectly, and across many users, “Who did this?” becomes a weak question.

But the breakdown does not stop at identity.

AI also collapses entire control structures that organizations rely on to prevent fraud, error, and abuse. In particular, it exposes how deeply governance depends on role separation and the illusion that approval equals control. When AI enters the picture, segregation of duties becomes theoretical, and approval-centric governance becomes theater.

The result is a system that is perfectly compliant on paper and dangerously unconstrained in practice.

Why Segregation of Duties Exists in the First Place

Segregation of duties is not a technical construct. It is a governance principle rooted in human behavior. The idea is simple: no single individual should be able to request, approve, execute, and review the same action. By separating these roles, organizations reduce the risk of error, fraud, and abuse, whether intentional or accidental.

This principle has shaped everything from financial controls to access provisioning to change management. It isn’t just a good idea. Segregation of duties is a foundational control method that pre-dates the arrival of digital technology. But it assumes that roles are discrete, actions are sequential, and responsibilities can be cleanly divided among people and teams.
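
To make the principle concrete, here is a minimal sketch of a traditional segregation-of-duties check in Python. The duty names and data shapes are illustrative assumptions, not drawn from any particular product; the point is simply that the rule presumes one discrete duty per identity.

```python
# A minimal sketch of a traditional segregation-of-duties check.
# Duty names and data shapes are illustrative assumptions.

DUTIES = {"request", "approve", "execute", "review"}

def sod_violations(assignments: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return identities that hold more than one duty in the same workflow."""
    return {
        identity: duties & DUTIES
        for identity, duties in assignments.items()
        if len(duties & DUTIES) > 1  # the rule: one duty per identity
    }

# A healthy human workflow: each stage belongs to a different person.
payroll = {
    "alice": {"request"},
    "bob":   {"approve"},
    "carol": {"execute"},
    "dan":   {"review"},
}
assert sod_violations(payroll) == {}  # no toxic combinations
```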

That assumption no longer holds when AI is introduced into core workflows.

AI Collapses Roles by Design

An AI system does not submit a request and wait for approval. It does not execute a task and then hand it off for review. It ingests data, makes decisions, takes actions, and summarizes outcomes as part of a single continuous process.

From a governance perspective, this is a collapse of roles.

The same system can recommend an action, perform the action, and report on its own performance. In traditional control language, it becomes requester, approver, executor, and reviewer all at once. No amount of procedural documentation changes that reality. The inescapable truth is that AI becomes a dynamic representation of a toxic combination, and often of many toxic combinations at once.
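
Run the same illustrative check from the sketch above against a workflow with an AI agent in the middle, and the collapse is immediate: the agent's single identity holds every duty at once.

```python
# The same sod_violations() sketch, applied to an AI agent that ingests,
# decides, acts, and reports as one continuous process.
payroll_with_agent = {
    "payroll-agent": {"request", "approve", "execute", "review"},
    "dan":           {"review"},  # a human reviewer does not undo the collapse
}
print(sod_violations(payroll_with_agent))
# Flags "payroll-agent" as holding all four duties: a toxic combination by design.
```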

Organizations often respond by wrapping AI systems in familiar governance artifacts. Approval workflows are added, and oversight committees are formed. Model inventories are created. These steps feel responsible, and they are not useless. But they do not reintroduce segregation where the architecture itself has eliminated it.

This is how well-controlled AI systems quietly become toxic super-users.

The Myth of the Well-Governed Super-User

Security and audit teams are well-versed in the risks of privileged access. A “toxic” super-user is one whose access spans too many functions, creating opportunities for abuse or failure that no single control can mitigate. The standard response is to reduce privileges, split roles, and increase oversight while introducing fail-safe technology measures such as privileged access management (PAM) tools.

AI breaks this response model badly.

Even when permissions are carefully scoped, the system’s functional role remains expansive. It is not the breadth of access alone that creates risk. It is the concentration of decision-making authority inside a single system.

This is uncomfortable to acknowledge because it means that perfect execution of legacy controls does not produce safe outcomes. You can follow every segregation rule and still end up with a system that no framework was designed to govern.

The control diagrams still look correct. The system behavior does not.

Approval Feels Like Governance, but It Isn’t

Faced with this reality, many organizations lean harder into approval and sanctioning. AI governance boards proliferate, but they are usually juggling an overload of review requests for use cases or tool selections. Models are signed off. Documentation grows, but it is still nothing more than wrapping paper around a governance process that was never built or intended for AI services, features, and agents.

Approval feels reassuring because it creates a moment of decision and the false sense of comfort that comes with saying, “Look, we did something.” Someone said yes, someone owns that choice, and the organization can point to a check mark and say, “governance happened!”

The problem is timing.

Approval freezes risk at the moment it matters least; it is a snapshot in time of a technology that is built, by design, to change continuously. AI risk does not emerge when a system is approved. It emerges when the system is integrated, scaled, adapted, and trusted. It arises through interaction with real data, real users, and real incentives.

Once approved, systems can evolve faster than governance processes can keep up. Changes happen incrementally and continuously. New data sources are added, and new use cases emerge for systems you have already approved. Behavior shifts, and by the time an approval board reconvenes, the system is already operating in a materially different risk context.
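
A minimal sketch of what closing that timing gap could look like: compare an approval-time snapshot of the system against its current state and flag material drift. The field names (model_version, data_sources, use_cases) are assumptions chosen for illustration, not a real product schema.

```python
# A sketch of drift detection against an approval-time snapshot.
# Field names are illustrative assumptions, not a specific vendor schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemSnapshot:
    model_version: str
    data_sources: frozenset[str]
    use_cases: frozenset[str]

def drift_report(approved: SystemSnapshot, current: SystemSnapshot) -> list[str]:
    """List material differences between what was approved and what runs now."""
    findings = []
    if approved.model_version != current.model_version:
        findings.append(f"model changed: {approved.model_version} -> {current.model_version}")
    for source in sorted(current.data_sources - approved.data_sources):
        findings.append(f"unapproved data source: {source}")
    for use_case in sorted(current.use_cases - approved.use_cases):
        findings.append(f"unreviewed use case: {use_case}")
    return findings

approved = SystemSnapshot("v1.0", frozenset({"crm"}), frozenset({"lead-scoring"}))
current = SystemSnapshot("v1.3", frozenset({"crm", "support-tickets"}),
                         frozenset({"lead-scoring", "contract-review"}))
for finding in drift_report(approved, current):
    print(finding)  # each finding is a risk the original approval never saw
```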

The absence of truly evergreen lifecycle governance processes is a sin of the past that has arrived, in the form of AI, to collect the bill.

Approval creates comfort. It does not create constraints.

Governance Theater and the Illusion of Safety

This gap between approval and behavior produces what can best be described as governance theater. The organization is busy governing artifacts rather than outcomes. Meetings are held where decisions are recorded. AI controls are audited, but with barely any trustworthy evidence.

Meanwhile, the system continues to act with increasing autonomy.

This is not a failure of effort or intent. It is a structural mismatch between how governance has historically been applied and how AI systems actually operate. Governance mechanisms optimized for static systems cannot keep up with systems that learn, adapt, and influence decision-making across the enterprise.

The danger is not that governance disappears. The danger is that it becomes performative. Leaders believe risk is being managed because the rituals of governance are being observed.

Moving Segregation Up the Stack

If segregation of duties cannot be enforced at the execution level, it must be implemented at a higher level. This requires a shift in how organizations think about separation of control.

Instead of separating tasks, organizations must separate oversight functions.

One group defines objectives and acceptable boundaries. Another independently monitors and assesses behavior and outcomes. A third holds the authority to intervene, override, or shut down systems. These roles should not sit in the same reporting line, and they should not be optimized for speed or convenience.

This is segregation of oversight rather than segregation of duties in the traditional sense. It does not pretend that execution can be cleanly divided for AI. It accepts architectural reality while preserving governance intent.
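
As a sketch of what segregation of oversight might look like when encoded as policy, the example below defines three oversight functions with distinct authority and checks that no two share a reporting line. The role names and the reporting-line rule are illustrative assumptions, not a prescribed org design.

```python
# A sketch of segregation of oversight: three independent functions.
# Role names and the reporting-line check are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class OversightRole:
    name: str
    reporting_line: str
    sets_boundaries: bool = False
    assesses_behavior: bool = False
    can_intervene: bool = False

def validate_oversight(roles: list[OversightRole]) -> None:
    lines = [role.reporting_line for role in roles]
    # Independence: no two oversight functions share a reporting line.
    assert len(set(lines)) == len(lines), "oversight functions share a reporting line"
    assert any(r.sets_boundaries for r in roles), "no one defines boundaries"
    assert any(r.assesses_behavior for r in roles), "no independent assessment"
    assert any(r.can_intervene for r in roles), "no authority to intervene or shut down"

validate_oversight([
    OversightRole("policy-owner", "ciso", sets_boundaries=True),
    OversightRole("behavior-monitor", "internal-audit", assesses_behavior=True),
    OversightRole("intervention-authority", "board-risk-committee", can_intervene=True),
])
```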

Governance as a Relationship, Not an Event

The deeper lesson here is that governance cannot be treated as a moment in time. Approval is an event; governance is a relationship between the organization and an evolving system.

This requires continuous attention, not constant meetings. It requires monitoring behavior and impact, not just configuration and compliance. Most importantly, it requires leadership to accept that governance is ongoing work, not a box to be checked before deployment and then never reviewed again.

AI does not eliminate the need for governance. It exposes the cost of treating governance as static.

In the next part of this series, we will shift from control structures to data itself. We will examine how AI breaks privacy and data protection models not by moving data, but by changing its meaning. And we will explore why organizations can comply perfectly with data protection rules while still losing something far more valuable.

Nothing leaks. And yet, something is gone.
