JPMorgan Chase CISO Patrick Opet's open letter calls on SaaS vendors to urgently strengthen their security practices:
https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers
In summary, he argues that SaaS models are fundamentally reshaping how companies integrate services and critical data, eroding the tried-and-true security practice of strict segmentation between a firm's trusted internal resources and untrusted external interactions.
As an example, he cites embedded AI features: a new architectural pattern that opens often-unchecked interactions between third-party services and sensitive internal resources.
At Singulr AI, we see this in real time. The AI service and usage data we discover across our customer base shows:
- Tens to hundreds of SaaS applications with AI features added after the initial product purchase
- SaaS vendors often turning on AI features by default
- Embedded AI features that may rely on a wide variety of LLMs and fourth-party sub-processors, which touch, consume, and may leak sensitive information
- SaaS vendors failing to alert customers when they update AI capabilities the way they do for other feature upgrades (the sketch after this list shows the kind of inventory this implies)
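To make the inventory problem concrete, here is a minimal sketch in Python of the record a security team would want for each embedded AI feature. All names and values are hypothetical illustrations, not Singulr's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class EmbeddedAIFeature:
    """One AI feature discovered inside a SaaS application."""
    saas_app: str                      # hypothetical app name, e.g. "AcmeCRM"
    feature_name: str                  # e.g. "AI meeting summaries"
    enabled_by_default: bool           # did the vendor turn it on without opt-in?
    announced_to_customers: bool       # was the rollout communicated?
    llm_providers: list[str] = field(default_factory=list)   # LLMs behind the feature
    sub_processors: list[str] = field(default_factory=list)  # fourth parties in the data path
    data_categories: list[str] = field(default_factory=list) # data the feature can touch

# A made-up discovery result for a single app:
feature = EmbeddedAIFeature(
    saas_app="AcmeCRM",
    feature_name="AI meeting summaries",
    enabled_by_default=True,
    announced_to_customers=False,
    llm_providers=["OpenAI"],
    sub_processors=["TranscriptionCo"],
    data_categories=["customer PII", "call recordings"],
)
```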
Our customers ask us: "[Fill-in-the-blank] SaaS provider has turned on AI. How do I assess the risk?"
Finding the AI technology inside a SaaS app is hard. Read our CTO Abhijit Sharma's blog, "Why Enterprise AI Discovery is Hard":
https://www.singulr.ai/blogs/why-enterprise-discovery-hard
Discovering and assessing the contextual information that helps your security and compliance teams "connect the dots" to find and then vet the risk of embedded AI services is even harder. But we have a way to automate that (the topic of a future blog).
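As a toy illustration of what "connecting the dots" can mean, a first-pass triage could count red flags on the record sketched above. This heuristic is our own simplified example, not Singulr's actual assessment method:

```python
def risk_score(f: EmbeddedAIFeature) -> int:
    """Toy heuristic: count red flags on a discovered AI feature."""
    score = 0
    if f.enabled_by_default:
        score += 2                      # nobody consciously accepted the risk
    if not f.announced_to_customers:
        score += 2                      # capability changed without notice
    score += len(f.sub_processors)      # each fourth party widens the exposure
    if "customer PII" in f.data_categories:
        score += 3                      # sensitive data in the blast radius
    return score

print(risk_score(feature))  # -> 8 for the hypothetical AcmeCRM feature above
```

Real vetting weighs vendor posture, contractual terms, data sensitivity, and regulatory scope rather than a flag count, but it runs on the same contextual details that are so hard to discover.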
Reach out and request a demo; we'd be happy to help you think through how to discover and assess the risk of embedded AI features:
https://www.singulr.ai/request-a-demo