June 5, 2025
5 Min Read

AI Inside the Trust Boundary: Why Traditional Security Models Are Failing

Richard Bird
Chief Security Officer

In an era where artificial intelligence (AI) is rapidly being integrated into enterprise systems, a critical security concern has emerged: AI agents operating within the traditional trust boundaries of organizations. This shift challenges longstanding security architectures and necessitates a reevaluation of how we protect sensitive data and systems.

The Erosion of Traditional Trust Boundaries

Historically, organizations have relied on clear demarcations between trusted internal networks and untrusted external ones. Security measures, such as firewalls, network segmentation, and access controls, were designed to protect the perimeter. However, the integration of AI-enabled applications within these boundaries has introduced new vulnerabilities.

These AI agents often have extensive access to internal systems and data, operating with a level of autonomy that traditional security models were not designed to manage. As a result, the very tools meant to enhance efficiency and decision-making are now potential vectors for security breaches.

A Wake-Up Call from Industry Leaders

Pat Opet, Chief Information Security Officer at JPMorgan Chase, has been vocal about the risks associated with this paradigm shift. In an open letter to third-party suppliers, Opet emphasized the urgency of rethinking security models in the age of AI. He stated, "Traditional measures like network segmentation, tiering, and protocol termination were durable in legacy principles but may no longer be viable today in a SaaS integration model."

Opet's concerns are not theoretical. He highlighted that "over the past three years, our third-party providers experienced several incidents within their environments," necessitating swift and decisive action to mitigate threats.

The Alarming Rise in AI-Related Intrusions

Recent data underscores the severity of the issue. According to Verizon's 2025 Data Breach Investigations Report (DBIR), 36% of system intrusions involved AI components. This statistic reflects a significant increase in AI-related security incidents, highlighting the need for immediate attention to AI governance and security.

Rethinking Security in the AI Era

To address these challenges, organizations must adopt a multifaceted approach:

  1. Implement Advanced Authorization Methods: Move beyond traditional access controls to dynamic, context-aware authorization mechanisms that can adapt to the unique behaviors of AI agents (a minimal sketch of this idea follows the list).

  2. Enhance Detection Capabilities: Develop and deploy monitoring tools that can identify anomalous AI activity in real time, including prompt monitoring and redaction of sensitive data before it reaches a model, rather than relying solely on after-the-fact review of activity summaries (see the second sketch after the list).

  3. Establish Robust AI Governance Frameworks: Create comprehensive policies that define the acceptable use, monitoring, and management of AI applications, and pair them with real-time monitoring of user and agent behavior to surface unwanted AI interactions and data access, ensuring accountability and compliance.

  4. Foster Collaboration Across the Supply Chain: Engage with third-party vendors to ensure they adhere to stringent security standards, recognizing that the security posture of partners directly impacts your organization's risk profile.
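
To make the first recommendation concrete, here is a minimal, illustrative sketch of context-aware authorization for an AI agent, written in Python. The names (AgentRequest, authorize, ALLOWED_ACTIONS_BY_TASK) and the specific rules are assumptions made for illustration, not a reference to any particular product or standard; a real policy engine would evaluate far richer signals.

    # Illustrative sketch: authorization that looks at what an AI agent is doing
    # and why, not just who it is. All names and rules here are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AgentRequest:
        agent_id: str        # which AI agent is asking
        action: str          # e.g. "read", "export", "delete"
        resource: str        # e.g. "crm/customer_records"
        sensitivity: str     # "public", "internal", or "restricted"
        declared_task: str   # the task the agent was launched to perform

    # Static roles cannot express "this agent may read internal data only while
    # performing the task it was launched for, and only during business hours".
    ALLOWED_ACTIONS_BY_TASK = {
        "summarize_tickets": {"read"},
        "draft_invoice": {"read", "write"},
    }

    def authorize(req: AgentRequest, now: datetime | None = None) -> bool:
        """Allow the request only when action, sensitivity, and context line up."""
        now = now or datetime.now(timezone.utc)

        # 1. The action must fall within the agent's declared task scope.
        if req.action not in ALLOWED_ACTIONS_BY_TASK.get(req.declared_task, set()):
            return False

        # 2. Restricted data is never exposed to an autonomous agent.
        if req.sensitivity == "restricted":
            return False

        # 3. Context check: internal data only during business hours (UTC here).
        if req.sensitivity == "internal" and not (8 <= now.hour < 18):
            return False

        return True

    # An agent launched to summarize tickets tries to export customer records.
    print(authorize(AgentRequest("agent-42", "export", "crm/customer_records",
                                 "internal", "summarize_tickets")))  # False

The point of the sketch is the shape of the decision, not the rules themselves: the request carries the agent's declared purpose and the data's sensitivity, and authorization is evaluated against both at the moment of access.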

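The second recommendation can be sketched in the same spirit: intercept prompts in real time, redact sensitive values before they reach a model, and raise alerts on anomalous behavior. The patterns, threshold, and function names below (inspect_prompt, SENSITIVE_PATTERNS) are illustrative assumptions; a production system would rely on vetted detectors and feed this telemetry into a proper anomaly-detection pipeline rather than an in-memory counter.

    # Illustrative sketch: real-time prompt inspection and redaction, assuming
    # prompts can be intercepted before they reach the model. Patterns and the
    # volume threshold are placeholders, not recommended values.
    import re
    from collections import defaultdict

    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    PROMPTS_PER_USER_LIMIT = 50     # flag unusually chatty users or agents
    _prompt_counts = defaultdict(int)

    def inspect_prompt(user_id: str, prompt: str) -> tuple[str, list[str]]:
        """Redact sensitive values as the prompt passes through and return alerts."""
        alerts = []
        redacted = prompt

        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(redacted):
                redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
                alerts.append(f"{label} detected in prompt from {user_id}")

        # Simple anomaly signal: one identity sending far more prompts than usual.
        _prompt_counts[user_id] += 1
        if _prompt_counts[user_id] > PROMPTS_PER_USER_LIMIT:
            alerts.append(f"unusual prompt volume from {user_id}")

        return redacted, alerts

    clean, alerts = inspect_prompt("u-17", "Customer SSN is 123-45-6789, summarize.")
    print(clean)    # Customer SSN is [REDACTED SSN], summarize.
    print(alerts)   # ['ssn detected in prompt from u-17']

Because the inspection happens on the way in, the sensitive value never reaches the model at all, which is the difference between real-time control and the after-the-fact reporting that the second recommendation warns against.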
Conclusion – It’s Time to Tear Down the Walls

The age of the clearly defined digital perimeter is over. It’s time to tear down the walls, redefine trust, and build security models that account for the unique challenges of AI. Because if you don’t, the next breach won’t just be a surprise – it’ll be an inevitability.

See how Singulr helps you stay ahead in AI innovation

In your personalized 30-minute demo, discover how Singulr helps you:

  - Gain complete visibility across all three AI vectors in your environment

  - Experience Singulr Pulse™ intelligence that keeps you ahead of emerging AI risks

  - See AI Red Teaming in action as it identifies vulnerabilities in real time

  - Witness runtime protection that safeguards your data without slowing AI innovation
