Q1 Retrospective: AI in Asset Management — What Actually Happened
Three questions dominated every conversation at Mainstream Summit Perth 2026. Here is what they reveal about where the industry actually stands on AI.

Three Questions That Defined Q1 2026
Mainstream Summit Perth 2026 brought together an extraordinary range of practitioners — engineers, asset managers, data professionals, maintenance leads, and executives from across the asset management industry. The breadth of expertise in the room was genuinely impressive. Yet across every conversation, hallway discussion, and post-session debrief, the same three questions surfaced. Every single time.
- When and how do I use AI?
- Where do I start with AI?
- Is it AI, ML, or just data science?
If you are tracking AI trends in asset management through Q1 2026, this convergence tells you something important: the industry is not short on curiosity. What it is short on is clarity.
Question One: When and How to Use AI
The honest answer is that most organisations are not ready to use AI — and that is not a failure. It is just a sequencing problem.
AI performs well when it has reliable, structured, historical data to learn from. Most asset management environments do not yet have that. Data is fragmented across systems, inconsistently labelled, and rarely cleaned. Asking AI to do meaningful work on top of that foundation is like asking a structural engineer to assess a building without access to the drawings.
The practical guide here is simple: before asking when to use AI, ask whether your data is in a state where any analysis — human or machine — would produce trustworthy results. If the answer is no, that is your starting point. AI comes after you solve the data problem, not before.
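One way to make that readiness question concrete is a quick audit of how complete and consistent your records actually are. The sketch below is illustrative only: the column names and the tiny inline dataset are hypothetical stand-ins for a real CMMS export.

```python
# Hedged sketch of a data-readiness check: how much is missing, and
# how many "different" failure labels are really the same label?
# Column names (asset_id, failure_mode) are hypothetical placeholders.
import pandas as pd

log = pd.DataFrame({
    "asset_id": ["P-101", "p-101", "P-102", None],
    "failure_mode": ["Seal Leak", "seal leak", None, "bearing wear"],
})

# Share of missing values per column.
missing = log.isna().mean()

# Inconsistent labelling: raw label count vs count after normalisation.
raw_labels = log["failure_mode"].dropna().nunique()
labels = log["failure_mode"].dropna().str.lower().nunique()

print(missing)
print(f"{raw_labels} raw labels collapse to {labels} after normalisation")
```

If a five-minute check like this turns up gaps and duplicate labels, that cleanup work is the real starting point, not model selection.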
Question Two: Where Do I Start
Start with the problem, not the technology. This sounds obvious, but most organisations start with the tool ("We want to use AI") and then go looking for a problem to attach it to. That approach wastes time and produces nothing of value.
A better entry point for data science in asset management: pick one failure mode that costs you the most money or safety exposure. Collect what you already know about it — failure history, inspection records, operating conditions. Analyse that data with basic statistical tools. Visualise it. Build a mental model of what is actually happening. That process alone will surface more actionable insight than most organisations have ever produced from their asset data.
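The "basic statistical tools" step above can be as simple as a Pareto view of your failure history. The sketch below assumes a tiny made-up dataset and hypothetical column names (asset_id, failure_mode, downtime_hours); substitute whatever your maintenance system actually exports.

```python
# Illustrative first-pass analysis of a failure history: which failure
# modes cost the most downtime? All data and names are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "asset_id": ["P-101", "P-101", "P-102", "P-103", "P-102", "P-101"],
    "failure_mode": ["seal leak", "bearing wear", "seal leak",
                     "motor trip", "bearing wear", "seal leak"],
    "downtime_hours": [4.0, 12.0, 3.5, 1.0, 10.0, 5.0],
})

# Event count and total downtime per failure mode, worst first.
pareto = (records.groupby("failure_mode")["downtime_hours"]
          .agg(events="count", total_downtime="sum")
          .sort_values("total_downtime", ascending=False))
print(pareto)
```

A table like this, built from data you already hold, is often the first structured view of failure behaviour an organisation has ever produced.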
Where to start with AI is, in almost every case, data science first.
Question Three: AI, ML, or Data Science
This was the question that surprised me most at Mainstream Summit Perth.
The depth of confusion here was worse than I expected, and this is not a criticism of the people asking — it is a symptom of how badly the industry press has muddied these definitions.
Here is the practical distinction:
- Data science is the broad discipline of extracting insight from data using statistics, visualisation, and structured analysis. It does not require machine learning or AI. Most asset managers need this first.
- Machine learning (ML) is a subset of data science where algorithms learn patterns from data to make predictions or classifications. It is appropriate when you have enough labelled historical data and a clearly defined prediction target — for example, predicting remaining useful life or classifying failure modes from sensor data.
- Artificial intelligence (AI) in the current context usually refers to large language models and generative systems. These are useful for specific applications — summarising maintenance records, generating inspection reports, assisting with knowledge capture — but they are not the starting point for most asset management analytics problems.
The sequence for most organisations is: data science first, ML when the data warrants it, AI where there is a genuine fit.
Skipping the first two steps and jumping straight to AI is where most implementations fail.
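To make the ML step in that sequence concrete, here is a minimal sketch of failure-mode classification from labelled sensor features, using scikit-learn. The features, labels, and synthetic data are hypothetical stand-ins for real condition-monitoring data; the point is the shape of the problem (labelled history in, classifier out), not this particular model.

```python
# Minimal sketch: classifying failure modes from labelled sensor
# features. Feature values are synthetic placeholders for real data:
# columns stand in for [vibration_rms, bearing_temp_C].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_healthy = rng.normal([2.0, 60.0], [0.3, 3.0], size=(100, 2))
X_bearing = rng.normal([6.0, 85.0], [0.5, 4.0], size=(100, 2))
X = np.vstack([X_healthy, X_bearing])
y = np.array(["healthy"] * 100 + ["bearing wear"] * 100)

# Hold out a test set so the score reflects unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Note what the sketch presupposes: clean features and, critically, labelled failure history. If you cannot produce those labels, you are at the data science stage, not the ML stage.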
The Unused Data Problem
During my session on edge federated machine learning at Mainstream Summit Perth, I asked a simple question:
How many of you have sensor or operational data that has been collected but never analysed?
Every hand in the room went up.
This is one of the most significant untapped opportunities in the asset management industry right now. Organisations are sitting on years of condition monitoring data, SCADA records, inspection histories, and maintenance logs that have never been interrogated in any structured way. The data exists. The analytical capability to work with it exists. What is missing is the deliberate process of connecting the two.
This is not a technology problem. It is a prioritisation problem. The organisations that will move fastest on AI and ML in asset management are the ones that treat their existing data as a strategic asset and invest in understanding what it is already telling them.
An Honest Q1 Assessment
More talk than action. That is the honest summary of where the asset management industry sits on AI trends in Q1 2026.
Interest is high. Conference sessions are well attended. The questions are thoughtful and the people asking them are serious practitioners. But when you ask organisations what they have actually built, the numbers are thin. Proofs of concept that stalled. Vendor pilots that produced a dashboard nobody uses. Data projects that ran into governance issues and were quietly shelved.
This is not surprising. Meaningful capability takes time to build. But the gap between enthusiasm and execution is wide, and most organisations have not yet identified what is specifically blocking them from moving from exploration to implementation.
What Q2 Is About: Reliability Engineering as the On-Ramp
Our Q2 content is going to focus on a theme that came through clearly at Mainstream Summit Perth: reliability engineering fundamentals are the on-ramp to AI and ML in asset management.
Reliability centred maintenance (RCM) and failure mode and effects analysis (FMEA) do something that is often overlooked in the AI conversation: they force you to define what you are trying to predict before you try to predict it. They create the labelled failure taxonomy that machine learning needs to function. They identify the high value targets that make data science worthwhile.
Before you train a model, you need to know which failure modes matter most, what the precursors to those failures look like, and what data you would need to detect them early. RCM and FMEA answer those questions. They prime the pump. Without that foundation, AI projects in asset management tend to produce technically interesting outputs that nobody acts on.
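The "which failure modes matter most" step has a standard FMEA expression: a risk priority number (RPN) per failure mode, computed as severity × occurrence × detection. The failure modes and 1-10 scores below are illustrative only, not taken from any real asset.

```python
# Sketch of the FMEA ranking step: RPN = severity * occurrence * detection.
# All failure modes and scores here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("seal leak", severity=5, occurrence=7, detection=3),
    FailureMode("bearing wear", severity=7, occurrence=5, detection=6),
    FailureMode("motor trip", severity=4, occurrence=2, detection=2),
]

# The highest-RPN modes are the prediction targets worth modelling.
ranked = sorted(modes, key=lambda m: m.rpn, reverse=True)
for fm in ranked:
    print(f"{fm.name}: RPN {fm.rpn}")
```

A ranking like this is what turns "we want to use AI" into "we want to detect bearing wear earlier", which is a problem a model can actually be trained against.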
If you want to build genuine analytical capability in your organisation, the path runs through reliability engineering first.
What to Do Next
If you are an asset manager trying to find your footing on AI and data science, here are three concrete steps that will move you further than any AI platform procurement:
- Audit what data you are already collecting but not analysing. The unused data problem is real and it is almost certainly affecting your organisation.
- Run a structured failure mode analysis on your highest consequence asset class. This will define your AI target before you invest in any tooling.
- Build or commission a basic data science assessment of your maintenance history. Before machine learning, before AI — find out what your existing data is telling you.
We will be publishing practical content across Q2 on each of these areas. If you want to work through this with a practitioner who has done it across multiple sectors and asset classes, reach out directly.