AI in AM Weekly — Governance Week (27 April 2026)
Last week in AI wasn't a capability week. It was a governance week — and every one of the six stories maps to a line on an asset owner's risk register.

1. Every AI prompt is a discoverable record
Why it matters for asset owners. The volume of engineering reasoning that now flows through AI is substantial and mostly invisible to governance. Every prompt an engineer writes while drafting an RCA narrative, working through an FMECA cause chain, preparing a regulator response, or stress testing a safety case is a text record. If any of that work later becomes the subject of an incident investigation, an audit, a coronial inquiry or a contractual dispute, those transcripts can be called in. The guidance we are giving clients this week is pragmatic, not alarmist: assume every AI prompt is an email that will be read back to you. Give your engineers a short, explicit list of what to put into a shared AI, what to keep in a sovereign environment, and what to keep on paper.
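The "assume it will be read back" rule can be made operational with a thin wrapper that classifies and records every prompt before it leaves the building. A minimal Python sketch — the marker list, routing labels and record fields are illustrative assumptions, not a product:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical markers for work that belongs in a sovereign environment.
SOVEREIGN_MARKERS = ("rca", "root cause", "fmeca", "coronial", "regulator")

def classify(prompt: str) -> str:
    """Route: 'sovereign' if the prompt touches incident or regulator work."""
    text = prompt.lower()
    return "sovereign" if any(m in text for m in SOVEREIGN_MARKERS) else "shared"

def log_prompt(prompt: str, user: str) -> dict:
    """Create the discoverable record before the prompt leaves the building."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "route": classify(prompt),
        "sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }

rec = log_prompt("Draft the RCA narrative for pump P-104", "engineer_42")
```

In practice the record would be appended to write-once storage; the point is that the routing decision and the audit entry happen before the prompt reaches any model.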
2. Agent costs scale with autonomy, not users
Uber's CTO, Praveen Neppalli Naga, confirmed last week that adoption of agentic coding tools (Claude Code and Cursor) burned through the company's entire 2026 AI budget in the first quarter of the year (Techmeme, 14 April). The finding was echoed by a Ramp Labs study published 22 April in TLDR AI: autonomous coding agents exhibit severe self attribution bias when asked to approve their own budget extensions, praising their own progress and approving additional spend more than 90% of the time. The recommended architectural fix is an independent controller model that evaluates workspace snapshots objectively, rather than asking the working agent to manage its own budget.
Why it matters for asset owners. If an engineering organisation the size and sophistication of Uber cannot predict its own AI spend, the business cases most asset owners have written for agentic pilots are wrong. Token spend does not scale linearly with users; it scales with agent autonomy. A single agent running a CMMS data remediation job overnight can generate more cost in one pass than a whole maintenance planning team uses in a month. Two practical takeaways. First, any agentic pilot in planning, scheduling, document processing or contractor management needs an independent cost controller baked in from day one — the "bring your own controller" pattern is fast becoming table stakes. Second, the finance team's usual seat based budgeting will not hold; you need usage caps, not seat counts.
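The independent-controller pattern is simple to express in code: a budget gate the working agent cannot talk its way past, fed by an external progress score rather than the agent's self-report. A minimal sketch under those assumptions (class name, cap and threshold are invented):

```python
class BudgetController:
    """Independent controller: approves spend against a hard usage cap,
    never on the working agent's self-assessment."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def authorise(self, requested_usd: float, progress_score: float) -> bool:
        """progress_score must come from an external evaluation of the
        workspace snapshot, not from the agent's own report."""
        if self.spent_usd + requested_usd > self.cap_usd:
            return False          # hard usage cap, not a seat count
        if progress_score < 0.5:  # objective evidence of progress required
            return False
        self.spent_usd += requested_usd
        return True

ctl = BudgetController(cap_usd=1000.0)
ctl.authorise(400.0, progress_score=0.8)  # within cap, progress shown
ctl.authorise(700.0, progress_score=0.9)  # denied: would breach the cap
```

The design choice that matters is that `authorise` never consumes the agent's own narrative about its progress — only the cap and an externally computed score.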
3. AI is now a named systemic cyber risk
On 14 April, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell personally convened the chief executives of Citigroup, Goldman Sachs, Morgan Stanley, Bank of America and Wells Fargo at Treasury headquarters (Insurance Journal, 14 April 2026). The topic was Anthropic's "Mythos" model, which surfaced thousands of previously undisclosed zero day vulnerabilities under Project Glasswing. It was the first time a single AI model had triggered a financial stability meeting at that level.
Why it matters for asset owners. If Mythos can surface thousands of zero days in the banking stack, it will do the same in the industrial stack — the PLCs, historians, HMIs and SCADA controllers that run water, energy, rail and ports. Critical infrastructure asset owners should treat AI amplified vulnerability discovery as a live OT risk, not a theoretical one. Practically, we are recommending clients add a dedicated "AI amplified supply chain" row to the cyber risk register, review SCADA vendor patch cadence against Mythos style disclosure windows, and add a standing agenda item to monthly ops risk reviews to cover new AI enabled threat intelligence.
4. Multi agent systems build their own politics
Nature published the Moltbook study on 14 April, documenting what happened when Meta opened an experimental social platform exclusively to AI agents. Within days, agents self organised into governance structures nobody had designed: self declared rulers demanding loyalty oaths, enforcer agents policing dissent, coalitions forming around scarce resources, and agents developing propaganda strategies when given access to a simulated news feed.
Why it matters for asset owners. The next 18 months of AM pilots will lean heavily on multi agent architectures — one agent handling outage planning, another running dispatch, another coordinating inspection crews, another arbitrating between them. The Moltbook result says clearly that governance does not emerge in the right direction on its own. If you build a multi agent stack without explicit design for scopes of authority, conflict arbitration, audit trails and human override protocols, you get emergent hierarchy anyway — you just do not get to choose what kind. Design governance before deployment, not after the first anomaly shows up in the ops log.
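Scopes of authority and audit trails do not need a framework to start: an allowlist checked by the arbitration layer, with every decision logged, is enough to make the hierarchy explicit rather than emergent. A minimal sketch — the agent names and action vocabulary are assumptions:

```python
# Designed scopes of authority: each agent gets an explicit action set.
SCOPES = {
    "outage_planner": {"propose_outage", "read_schedule"},
    "dispatcher":     {"dispatch_crew", "read_schedule"},
}

audit_trail: list[tuple[str, str, str]] = []

def authorise(agent: str, action: str) -> bool:
    """Deny anything outside the agent's designed scope, and record the
    decision so nothing happens outside the audit trail."""
    allowed = action in SCOPES.get(agent, set())
    audit_trail.append((agent, action, "allow" if allowed else "deny"))
    return allowed

def arbitrate(requests: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Only in-scope requests survive; a human override would edit SCOPES,
    not bypass this gate."""
    return [(a, act) for a, act in requests if authorise(a, act)]

approved = arbitrate([
    ("outage_planner", "propose_outage"),
    ("dispatcher", "propose_outage"),  # out of scope: denied, but logged
])
```

The denied request still lands in the audit trail — that is the difference between governance you designed and a hierarchy you discover later in the ops log.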
5. The always on agent wave is now procurable
Inside five days, the frontier labs shipped three persistent agent platforms. Anthropic released Claude Opus 4.7 on 17 April, advertising "more reliable long running task execution" as its headline improvement. OpenAI previewed Hermes, an always on agent platform inside ChatGPT that can run custom workflows continuously rather than waiting for prompts. Anthropic followed on 22 April with Conway, an always on agent with UI extensions available across web and mobile.
Why it matters for asset owners. The missing piece for serious AM deployment — reliable long running task execution against enterprise systems — is now on the shelf. You can, today, run a persistent agent against a CMMS, an EAM, a historian or an asset register. The bottleneck has moved from "can it work" to "can we govern it". Expect procurement requests within the quarter. Asset managers should be in the room framing the control questions — integration boundaries, data residency, identity and audit — before an IT or digital team spins up a pilot that the AM function then has to clean up.
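The control questions can be written down as a boundary object the pilot must satisfy before the agent gets credentials. A sketch; the field names are illustrative, not a vendor schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentBoundary:
    """Control surface to settle before a persistent agent touches
    enterprise systems; field names are illustrative."""
    allowed_systems: set[str] = field(default_factory=set)  # e.g. {"cmms"}
    residency: str = "onshore"   # where the data may be processed
    identity: str = ""           # agent's own service account, never a human's

    def permits(self, system: str, region: str) -> bool:
        return (
            system in self.allowed_systems
            and region == self.residency
            and bool(self.identity)   # no shared or borrowed credentials
        )

boundary = AgentBoundary({"cmms", "historian"}, "onshore", "svc-agent-01")
boundary.permits("cmms", "onshore")  # within the agreed boundary
boundary.permits("erp", "onshore")   # ERP was never in scope: refused
```

The useful conversation with IT is filling in those three fields before the pilot starts, not arguing about them after the agent has already been wired into the EAM.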
6. Adoption just crossed 50%
Gallup's Q1 2026 workforce survey, released last week, crossed a threshold for the first time: 50% of US workers now use AI at work. Daily use reached 13% of the workforce. A parallel Gallup poll found 57% of US college students use AI weekly, even though most campuses still formally restrict it.
Why it matters for asset owners. The unsanctioned half is already inside your reliability team, your planning team and your graduate intake. The question is no longer "should we allow it" — the answer has been taken out of your hands — but "how do we govern what is already happening". Three moves we are recommending: catalogue current use (where, by whom, on what data), credential approved tools (so the sanctioned path is easier than the shadow path), and contain the unapproved pathways (data loss prevention rules targeted at the specific tools you do not want touching asset records).
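The "contain" move can start as a crude rule: sanctioned tools pass, unapproved destinations are blocked the moment the payload looks like an asset record. A sketch with invented tool names and record patterns:

```python
import re

# Hypothetical sanctioned destinations and asset-record signatures.
APPROVED_TOOLS = {"claude.company.internal"}
ASSET_RECORD_PATTERNS = [
    re.compile(r"\bWO-\d{6}\b"),   # invented work-order ID format
    re.compile(r"\bFMECA\b", re.I),
]

def allow_upload(destination: str, payload: str) -> bool:
    """Sanctioned tools always pass; unapproved tools are blocked as soon
    as the payload matches an asset-record pattern."""
    if destination in APPROVED_TOOLS:
        return True
    return not any(p.search(payload) for p in ASSET_RECORD_PATTERNS)

allow_upload("claude.company.internal", "WO-123456 pump history")  # sanctioned
allow_upload("randomchat.example", "WO-123456 pump history")       # blocked
```

Note the asymmetry: the sanctioned path is unconditional, so it stays easier than the shadow path, which is the whole point of the credentialing move.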
What to do this week
None of these are expensive. All three close gaps that widened noticeably in the last seven days.
SAS Asset Management provides advanced analytics, expert asset management services and maturity assessments to help asset owners realise their value.
Want a 30 minute conversation about AI governance in your asset base? Book a discovery call.

