AI in AM Weekly — Vendor Risk Week (4 May 2026)
Last week's AI news was a procurement story. Asset owners should be reading their AI contracts. Six stories that move vendor risk from a slide to a register row.

Last week's AI news was a procurement story. Seven days earlier the dominant frame was governance: who is responsible when an agent acts autonomously, and how do you write that down. This week the frame moved one layer up the stack. The labs themselves are bets. Anthropic outlasted six weeks of US government procurement pressure and walked away with sixty-five billion dollars of fresh capital from two cloud providers. OpenAI ended Microsoft cloud exclusivity, abandoned its first-party data centres, and missed its own revenue targets, all in the same week. For asset owners, AI vendor selection is no longer a procurement footnote. It is a register row, with a named owner, a named alternate, and a migration plan. Six stories from 25 April to 1 May, and what each one tells asset owners about vendor risk.
Anthropic outlasted the White House
Six weeks ago the White House placed Anthropic under a supply-chain risk designation after Anthropic refused to drop its usage-policy restrictions on federal use, including surveillance and autonomous weapons. On 29 April, Axios and The Decoder reported that the White House is now drafting guidance to help federal agencies onboard Anthropic models, including Claude Mythos. The pivot followed Amazon committing twenty-five billion dollars and Google committing up to forty billion dollars to Anthropic in the same week. Sixty-five billion dollars in five business days.
Vendor selection used to be price, capability and SLA. The Anthropic standoff shows that a vendor's stated values can outlast the largest customer on earth, because two cloud companies will write a sixty-five-billion-dollar cheque to back those values. For asset-intensive organisations choosing an AI vendor, the question shifts to whether the vendor's values, capital structure and policy posture align with where you want your data sitting in five years. That is a procurement question, not an IT question.
OpenAI, Microsoft and AWS shake up the cloud map
On 28 and 29 April, OpenAI and Microsoft amended their agreement. Azure exclusivity ended. The AGI clause was removed. The deal now runs to 2032 even if OpenAI achieves AGI. OpenAI is now available on AWS via Bedrock and a new service called Bedrock Managed Agents: agents with persistent memory, custom workflows and multi-step orchestration, all running on AWS (TLDR AI, 29 April).
If your organisation has been running OpenAI through Azure under any kind of exclusivity arrangement, the contract you signed is no longer the contract Microsoft is operating under. Multi-cloud OpenAI is the new default. The practical move is to re-read the data residency, exclusivity and termination clauses in your existing OpenAI contracts and decide whether the new arrangement creates options you should now exercise.
AI eval costs are the new compute bottleneck
A deep dive published 30 April examined how AI evaluation costs have escalated to levels comparable to or exceeding training costs, with some individual eval runs costing tens of thousands of dollars. Cost distribution across models and tasks is uneven and inefficient. The field is calling for standardised documentation and data reuse to reduce the cost burden.
If you ran an agentic pilot in the last quarter and the bill surprised your CFO, this is part of the answer. Eval is the line item nobody priced in. Most asset management AI business cases have a number for inference cost and sometimes a number for fine-tuning, but no separate line for evaluation. At production scale, evaluation can be the largest component. Add eval cost to the AI pilot business case template, and ensure the eval methodology is documented well enough to be reused across pilots.
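To make the point concrete, here is a minimal Python sketch of a pilot business case that carries evaluation as its own line item. All figures and the cost categories are hypothetical placeholders for illustration, not estimates from the deep dive.

```python
# Minimal sketch of an AI pilot cost model that treats evaluation
# as a first-class line item. All figures are hypothetical.

def pilot_cost(inference: float, fine_tuning: float, evaluation: float) -> dict:
    """Return the total pilot cost and each line item's share of it."""
    total = inference + fine_tuning + evaluation
    return {
        "total": total,
        "shares": {
            "inference": inference / total,
            "fine_tuning": fine_tuning / total,
            "evaluation": evaluation / total,
        },
    }

# Hypothetical quarter in which eval runs dominate the bill.
case = pilot_cost(inference=40_000, fine_tuning=15_000, evaluation=60_000)
print(f"total: {case['total']:,.0f}")
for item, share in case["shares"].items():
    print(f"{item}: {share:.0%}")
```

The structure matters more than the numbers: once evaluation is a named key rather than a cost buried in "compute", it shows up in every template the model feeds.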
OpenAI's vendor risk profile shifted hard
OpenAI has effectively abandoned the original first-party Stargate data centres in favour of leasing compute, after partners could not agree on who would have ultimate control. Some analysts now estimate the company could run out of cash by mid-2027. In the same week, news broke that OpenAI missed its own revenue and user targets, and shares of several OpenAI-tied companies dropped on Tuesday (TLDR AI, 30 April; TLDR, 28 April).
The model your organisation is building agents on is a venture bet, not a utility. If your AI architecture has a single-vendor dependency at the foundation, that is a vendor risk that is not currently scored on your asset risk register. Concrete move: write the migration plan you would execute if your primary AI vendor were unavailable for 90 days. If you cannot write that plan, the dependency is at a level you cannot defend.
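One way to make that register row concrete is to give it a schema. The Python sketch below is an illustrative assumption of what a vendor-risk row might capture; the field names and the 90-day test are this newsletter's discipline, not a standard register format.

```python
# Illustrative sketch of an AI vendor-risk register row.
# Field names and the 90-day defensibility test are assumptions
# for illustration, not a standard register schema.
from dataclasses import dataclass

@dataclass
class VendorRiskRow:
    vendor: str
    owner: str                 # named owner of the risk
    alternate_vendor: str      # named alternate, per the register discipline
    migration_plan_days: int   # longest vendor outage the written plan covers
    plan_documented: bool = False

    def defensible(self) -> bool:
        """A dependency is defensible only if a written migration
        plan covers at least a 90-day vendor outage."""
        return self.plan_documented and self.migration_plan_days >= 90

row = VendorRiskRow(
    vendor="Primary foundation-model provider",
    owner="Head of Engineering Systems",
    alternate_vendor="Second-source model provider",
    migration_plan_days=0,
)
print(row.defensible())  # stays False until the plan is written
```

The point of encoding the test is that an unwritten migration plan fails loudly instead of living as an implicit assumption in a slide.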
Long-running agents have a production playbook
A 26-minute deep dive on long-running agents, published 29 April, documented practical patterns for agents that survive across many context windows and sandboxes, recover from failure, leave structural artefacts behind, and resume work where they left off. The same piece explained how to use these patterns today without writing the whole framework from scratch.
This is the production-readiness follow-up to the always-on agent wave we covered last week. The capability has now been documented at the architectural-pattern level. If your asset management organisation is piloting agents against the CMMS, EAM, historian or document repository, this is the moment to commit to a long-running agent architecture rather than a stateless one. Stateless agents will plateau quickly. Long-running agents are the path to genuine workflow automation.
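The difference between the two architectures can be sketched in a few lines. This is a minimal checkpoint-and-resume pattern in the spirit of what the deep dive describes: the agent persists a structural artefact after each step, so a restarted process (or a fresh context window) resumes instead of starting over. The file layout and step names are illustrative assumptions, not the article's code.

```python
# Minimal checkpoint-and-resume sketch for a long-running agent.
# After each step the agent writes an artefact (a JSON checkpoint),
# so a crashed or restarted run picks up where the last one stopped.
# Step names are illustrative assumptions.
import json
from pathlib import Path

STEPS = ["ingest_work_orders", "match_to_assets", "draft_schedule"]

def run_agent(checkpoint: Path) -> list[str]:
    """Run the remaining steps, persisting progress after each one."""
    done = json.loads(checkpoint.read_text()) if checkpoint.exists() else []
    for step in STEPS:
        if step in done:
            continue            # already completed in an earlier run
        # ... do the actual work for `step` here ...
        done.append(step)
        checkpoint.write_text(json.dumps(done))  # artefact survives a crash
    return done

ckpt = Path("agent_checkpoint.json")
print(run_agent(ckpt))   # first run works through the steps
print(run_agent(ckpt))   # a second run resumes and finds nothing left to do
```

A stateless agent is this loop with the two checkpoint lines deleted: every failure restarts from step one, which is why it plateaus.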
Compute sovereignty is becoming a real option
Two related stories. Google opened TPU sales to select customers, allowing them to install Tensor Processing Units in their own data centres rather than depending on Google Cloud. The company has existing TPU deals with Anthropic and Meta. Separately, Mistral released Medium 3.5, a 128-billion-parameter dense model that powers Vibe remote agents on four GPUs and scores high on SWE-bench Verified (TLDR AI, 30 April).
Sovereign and non-frontier-lab AI is moving from hypothetical to procurable faster than procurement teams can keep up. Asset owners with data residency or critical infrastructure constraints now have credible alternatives to running everything through US frontier labs. It is worth opening a parallel evaluation track for sovereign and edge options alongside the frontier lab default.
What to do this week
- Add AI vendor risk as a row on the asset risk register, with a named owner and a named alternate vendor.
- Re-read any OpenAI-via-Azure contract for exclusivity, residency and termination clauses, and decide whether the multi-cloud option creates a contract renegotiation opportunity.
- Add an eval cost line item to every AI pilot business case template.
- Write the 90-day migration plan you would execute if your primary AI vendor became unavailable. If you cannot write it, the dependency is at a level you cannot defend.
- Open a parallel evaluation track for sovereign and edge AI options against your default frontier lab path.
Vendor risk used to be a slide. This week it became a register row.
SAS Asset Management provides advanced analytics, expert asset management services and maturity assessments to help asset owners realise their value.
Read next
AI in AM Weekly — Governance Week, the frame that immediately preceded this week's vendor-risk lens.
Talk to us
If any of these moves belong on your risk register or business case template, get in touch.
