Over the past few months (and last week) some recurring developments caught my attention. The first is the European Union’s acceleration of efforts to reduce dependency on U.S. technology platforms, including cloud infrastructure and payment networks such as Visa and Mastercard. The second, as of last week, is that the U.S. Department of Defense reportedly ordered agencies to phase out Anthropic models within a defined window. Although these two events are not directly linked, they do reveal something very interesting: modern organizations no longer fully understand their technology dependencies. And in the age of AI, that problem is becoming more serious.
The Old Model of Vendor Risk
For most of the last twenty years, vendor risk was relatively straightforward. If your company used a vendor, you signed a contract. Procurement reviewed it, security assessed it, and legal negotiated it. You performed a vendor risk review, and maybe you looked at SOC 2 or ISO certifications. Your dependency list was essentially the list of vendors you paid. Not 100% perfect, but manageable. That model worked reasonably well in the enterprise software era, but it has started to break down in the AI era.
AI Is Not a Product
Many organizations still think about AI vendors the same way they think about software vendors. But AI systems are rarely standalone products. They are compositional systems, and this is even more so once you introduce agentic work.
A single workflow might look something like this:
Application
→ SaaS platform
→ analytics engine
→ AI inference service
→ model provider
→ cloud infrastructure
Your organization may never sign a contract with the model provider. But your workflow depends on it. Your vendor’s vendor’s vendor may ultimately be the one actually running the model. Very few organizations have mapped (or even tried to understand) that chain.
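The chain above can be sketched as a simple transitive-dependency walk. All names here are hypothetical illustrations; the point is that the model provider and cloud layer never appear in the direct vendor list you contract with:

```python
# Sketch: discovering transitive AI dependencies from a direct vendor list.
# All vendor names are hypothetical.

DEPENDS_ON = {
    "crm-app": ["saas-platform"],
    "saas-platform": ["analytics-engine"],
    "analytics-engine": ["ai-inference-service"],
    "ai-inference-service": ["model-provider"],
    "model-provider": ["cloud-infrastructure"],
}

def transitive_dependencies(system: str) -> list[str]:
    """Walk the chain: everything this system ultimately depends on."""
    seen: list[str] = []
    queue = list(DEPENDS_ON.get(system, []))
    while queue:
        dep = queue.pop(0)
        if dep not in seen:
            seen.append(dep)
            queue.extend(DEPENDS_ON.get(dep, []))
    return seen

# The contract is only with "saas-platform", but the workflow
# depends on everything beneath it:
print(transitive_dependencies("crm-app"))
# → ['saas-platform', 'analytics-engine', 'ai-inference-service',
#    'model-provider', 'cloud-infrastructure']
```

In practice the map would be built from vendor questionnaires and architecture reviews rather than hand-written, but even a hand-written version is more than most organizations have today.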
The Illusion of Approval
Enterprises often believe they have approved their AI vendors because they trial, POC, vet and review them. What they have actually approved is a technical interface – a very smart and intelligent one at that – which itself could consist of:
→ multiple models
→ different providers
→ dynamic routing
→ fallback systems
→ region-specific infrastructure
→ who knows what storage and data collection
Even the vendor itself may change models over time for various reasons (sentiment, commercials, rivalry, etc.). That means the underlying dependency can shift without the customer (you) even realizing it. Security teams believe they know what is running. In many cases they don’t (and can’t).
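What “one approved vendor” can look like internally is a router with silent fallbacks. A minimal sketch, with entirely hypothetical model and provider names, of why the customer never sees which model actually served a request:

```python
# Sketch: a vendor-side model router. The customer calls one API;
# which model actually runs depends on routing rules and availability
# they never see. All model/provider names are hypothetical.

ROUTES = [
    {"model": "alpha-large-v3", "provider": "provider-a", "healthy": True},
    {"model": "beta-medium-v2", "provider": "provider-b", "healthy": True},
]

def handle_request(prompt: str) -> dict:
    """Serve from the first healthy route; fall back silently otherwise."""
    for route in ROUTES:
        if route["healthy"]:
            return {
                "output": f"response to: {prompt}",
                # Many real APIs omit this field entirely:
                "served_by": route["model"],
            }
    raise RuntimeError("no healthy model available")

resp = handle_request("summarize this contract")
# The "approved vendor" stays the same even if ROUTES[0] is swapped
# for a different provider tomorrow.
```

The fallback entry is the part worth noticing: a routine outage on the vendor side can move your data to a provider you never assessed, with no change visible at the interface you approved.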
Why AI Dependencies Are Hard to See
When SaaS adoption exploded, we saw the rise of shadow IT. But shadow IT left a trace. You could track a new SaaS product by things like a new login, a new identity integration, new network traffic or a new invoice.
Eventually security teams caught up using a combination of CASBs, network monitoring, SSO enforcement, spending analysis, etc.
But, once again, AI is different. AI usage can be, and often is, embedded inside other platforms. It can be invoked dynamically through APIs. It can sit inside analytics features, copilots, or agent workflows. Traditional logs may not even tell you which model was used. The dependency is now invisible.
A Real-World Stress Test
The Anthropic phase-out scenario inside the U.S. government is essentially a stress test that we can watch in near real-time. Imagine discovering that one of your AI dependencies must disappear within months. Not just replaced, but removed. That sounds manageable until you remember something important.
Models are not interchangeable.
Switching model providers changes things like:
→ output structure
→ latency characteristics
→ safety filtering
→ fallback systems
→ reasoning behavior
→ hallucination profiles
Applications built around one model may behave very differently with another. There is a very good chance that agent workflows will break and security assumptions will change.
Rotating an API key is easy; untangling hidden dependencies is not, and neither is re-testing everything that was built on top of them.
The European Perspective
Across the Atlantic, Europe is confronting a similar dependency problem from a different angle. European policymakers have become increasingly concerned about structural dependence on foreign digital infrastructure. Cloud platforms are a major example.
Today much of Europe’s digital economy runs on infrastructure operated by a small number of U.S. hyperscale providers — primarily Amazon Web Services, Microsoft Azure, and Google Cloud. European Parliament research has highlighted several concerns around this concentration.
First, market concentration/dominance. A handful of hyper-scalers control a large share of global cloud infrastructure capacity and investment.
Second, strategic dependency. Cloud platforms now underpin critical sectors such as banking, government services, healthcare, research, industrial supply chains and now AI development environments. When those systems depend on foreign-owned infrastructure, policymakers increasingly see it as a strategic risk.
Third, jurisdictional exposure. It is no secret that European and US lawmakers sometimes have very different views on policy. Even when European data is stored inside the EU, providers headquartered elsewhere may still be subject to foreign legal frameworks. This creates tension with European data protection frameworks such as GDPR.
Finally, there is the economic dimension. A large portion of European cloud spending ultimately flows to companies outside Europe. As cloud adoption increases — particularly with AI workloads — that financial flow continues to grow.
For policymakers, this raises concerns about long-term technological competitiveness and economic sovereignty.
Payments Tell the Same Story
The same structural dependency appears in payments infrastructure. A large share of European card payments still flows through Visa and Mastercard, both U.S. networks. This has triggered discussions about building alternative European payment systems. Again, the goal is not necessarily to remove these providers, but to ensure that critical infrastructure does not depend entirely on systems controlled elsewhere. Ok, maybe it also has to do with the 61% of payment processing handled by US companies, considering the 4.7 trillion USD spent in 2023.
AI Will Deepen These Dependencies
AI adds a completely new layer to this picture. Training and running modern models requires enormous computing infrastructure: specialized chips, large-scale data centers and distributed inference platforms. The companies that operate these environments today are largely the same hyperscale providers already dominating cloud infrastructure. That means the AI boom may reinforce existing infrastructure dependencies rather than diversify them.
Which brings us back to your enterprise.
Traditional vendor lock-in was mostly technical. You depended on a database, a storage system, an operating system. Migrating away was painful, but predictable.
AI lock-in is different. It is behavioral lock-in. Not impossible to escape, just much harder.
Applications built around a particular model assume certain patterns of how the model responds, how it structures output and how it interprets prompts. Changing the model can change the behavior of the entire system. That is a much deeper form of dependency.
So What Can Organizations Actually Do
None of this means companies should stop adopting AI. But it does mean organizations should start thinking about AI dependencies more deliberately.
- Push for Transparency: The first step is pushing vendors for transparency, or just pushing for some sanity. Sidenote – I am hopeful with statements like these (but that is for another time). Organizations should ask vendors to disclose which models they rely on, where those models run, and whether fallback providers are used if a model becomes unavailable. Many vendors struggle to answer these questions clearly today, which already tells you something about the visibility you actually have into the systems you depend on.
- Own the Control Points: The second step is identifying the control points you actually own. While companies may not control the AI provider itself, they can control the surrounding systems. This includes the data entering models, the orchestration layers that call them, agent frameworks or automation pipelines, and the systems that consume the outputs. If the only place where governance exists is at the vendor contract level, then operational visibility is likely very limited.
- Do a Kill Test: The third step is running dependency kill tests. This simply means simulating the removal of an important AI dependency. Disable the API key or block the endpoint in a controlled environment and observe what happens. Some systems will break immediately, while others may silently degrade. In many cases organizations discover dependencies they did not even realize existed.
- Understand Execution: Organizations should map how AI is actually used inside their environment rather than simply listing vendors. (Can anyone see a new governance role here – CAIO?) The important questions are which systems are calling models, which endpoints are involved, and what data is being sent.
- Data Control: Finally, companies need to understand what happens to their data once it enters an AI system. Where is it stored? How long is it retained? Is it used for training or evaluation? Can it be removed if needed? Data often passes through several layers such as inference services, safety filters, and monitoring systems before a response is returned, which means copies of that data may exist in more places than expected. If an organization does not understand how its data can be removed from a system, then it does not fully understand the dependency it has on that platform.
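The “Do a Kill Test” step above can be sketched in a few lines: in a controlled environment, make the AI dependency unreachable and observe whether each workflow fails loudly, degrades, or breaks in an unexpected way. Everything here is a hypothetical scaffold, not a real client:

```python
# Sketch: a dependency kill test. Simulate an outage of the AI
# inference endpoint and classify how each workflow reacts.
# Workflow names and behaviors are hypothetical.

class EndpointBlocked(Exception):
    pass

def blocked_ai_call(*args, **kwargs):
    """Stand-in for the real client once its endpoint is blocked."""
    raise EndpointBlocked("simulated outage of AI inference endpoint")

def run_kill_test(workflows: dict) -> dict:
    """Run each workflow against the blocked client and classify the result."""
    results = {}
    for name, workflow in workflows.items():
        try:
            output = workflow(blocked_ai_call)
            # Silent degradation: the workflow "worked" without the model.
            results[name] = f"degraded (returned: {output!r})"
        except EndpointBlocked:
            results[name] = "hard failure"
        except Exception as exc:
            results[name] = f"broken ({type(exc).__name__})"
    return results

# Two hypothetical workflows: one degrades gracefully, one has a
# hard dependency on the model.
workflows = {
    "ticket-triage": lambda ai: "routed to default queue",  # never calls ai
    "contract-summary": lambda ai: ai("summarize ..."),     # hard dependency
}
print(run_kill_test(workflows))
# → {'ticket-triage': "degraded (returned: 'routed to default queue')",
#    'contract-summary': 'hard failure'}
```

The interesting results are usually the “degraded” ones: systems that keep returning answers without the model, without anyone noticing the answers changed.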
The 10-Second Takeaway
The AI revolution is not just changing how software works. It is changing how technology dependencies are structured. Enterprises once depended on software vendors; now they depend on entire technology supply chains: vendors, their vendors, model providers, cloud inference platforms, training ecosystems and more.
Most organizations still believe they understand their dependencies. But many only understand the surface layer. And the next forced migration may not come with a six-month warning.

