Patrick Opet, JPMorgan Chase's Chief Information Security Officer, has published an open letter calling on SaaS vendors to urgently strengthen their security practices.
https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers
In summary, he argues that SaaS models are fundamentally reshaping how companies integrate services and critical data, and in the process are eroding tried-and-true security practices that enforce strict segmentation between a firm's trusted internal resources and untrusted external interactions.
As an example, he cites embedded AI features: a new architectural pattern that often results in unchecked interactions between third-party services and sensitive internal resources.
We see this in the AI service and usage data we discover across our customer base. Our customers ask us:
"Fill-in-the-blank SaaS provider has turned on AI. How do I assess the risk?"
Read our CTO Abhijit Sharma's blog, "Why Enterprise AI Discovery is Hard?"
https://www.singulr.ai/blogs/why-enterprise-discovery-hard
Discovering and assessing the contextual information that helps your security and compliance team "connect the dots" to find embedded AI services and then vet their risk is even harder. But we have a way to automate that (the topic of a future blog).
Reach out and request a demo, and we'd be happy to help you think through how to discover and assess the risk of embedded AI features.
https://www.singulr.ai/request-a-demo