What a Maturity Assessment Actually Reveals About Your Asset Management
Beyond the score: how a proper maturity assessment exposes the gaps that matter and the strengths you didn't know you had.

Let's start with an uncomfortable truth: most asset management maturity assessments tell organisations things they already know. "Your data quality needs work." "Strategy and operations aren't well connected." "There's room for improvement in lifecycle planning."
These findings aren't wrong. They're just not particularly useful. What separates a valuable maturity assessment from an expensive confirmation of the obvious isn't the assessment framework - it's what the assessment reveals about why things are the way they are and what to do about it.
The assessment trap
Here's how maturity assessments typically go wrong.
An organisation decides it needs an assessment - often because a regulator asked, a new executive wants a baseline, or someone saw a compelling conference presentation. A framework gets selected (ISO 55001, GFMAM's 39 subjects, something proprietary). Interviews happen. Documents get reviewed. A score emerges.
And then... not much changes.
The assessment sits in a drawer (or more likely, a SharePoint folder). People remember their score vaguely. Maybe a few quick wins get implemented. But the fundamental capability gaps remain because the assessment identified what was weak without illuminating why or providing a credible path forward.
What useful assessments reveal
A maturity assessment that actually drives improvement does something different. It connects the dots between symptoms and causes, between current state and achievable next steps.
Root causes, not just symptoms. "Your criticality data is incomplete" is a symptom. "Criticality assessments aren't required before capital submissions, so there's no business driver to complete them" is a root cause. The second insight leads somewhere. The first just describes reality.
Decision-making patterns, not just documentation gaps. The interesting question isn't whether your asset management plan exists - it's whether anyone actually uses it to make decisions. Assessments that explore how decisions really get made (versus how policies say they should be made) reveal the actual capability level.
Interdependencies that create systemic challenges. Poor data quality causes poor decisions. Poor decisions reduce trust in analysis. Reduced trust means people bypass systems. Bypassing systems means data quality degrades further. A useful assessment maps these reinforcing loops rather than treating each issue in isolation.
Credible improvement pathways. Saying "you need better lifecycle planning" is easy. Explaining how an organisation with your current capabilities, constraints, and culture could realistically develop better lifecycle planning - that requires actual insight.
The questions that matter
If you're planning a maturity assessment, consider what questions you actually need answered.
Where are we genuinely capable versus where do we just have documentation? There's a difference between having a risk framework and actually managing risk. Maturity isn't about paperwork.
What's holding us back? Is it skills? Tools? Governance? Culture? Resources? Different constraints require different responses. "Do more asset management" isn't a strategy.
What would it take to get meaningfully better? Not perfect - better. What's the realistic next level, and what would achieving it require?
Where would improvement matter most? Organisations have finite improvement capacity. Which capability gaps, if addressed, would have the biggest impact on outcomes that matter?
How do we compare to peers? Not to create league tables, but to understand whether our challenges are common (suggesting they're hard) or uncommon (suggesting we're missing something others have figured out).
What makes assessments actionable
Several characteristics distinguish assessments that drive change from those that just produce reports.
Executive engagement. If senior leaders see the assessment as a compliance exercise delegated to the asset management team, findings will struggle to get traction. The assessment process itself should build leadership understanding and commitment.
Cross-functional involvement. Asset management capability depends on how multiple functions work together. Assessments that only involve the asset management team miss the integration challenges that often matter most.
Evidence-based findings. Claims should be substantiated with specific examples. "People don't trust the data" is an opinion. "In three of four recent capital decisions, analysts requested raw data from sites rather than using the central system because they believed central records were outdated" is evidence.
Prioritised recommendations. A list of 50 improvement opportunities isn't useful. A clear view of which three to five things should happen first, why, and how they connect - that's useful.
Follow-through mechanism. The assessment should establish how recommendations will be tracked and how progress will be measured. Without this, good intentions dissipate.
Choosing assessment depth
Maturity assessments range from lightweight desktop reviews to comprehensive deep-dives. The right depth depends on what you need.
Desktop assessments (document review, limited interviews) work for establishing a baseline when you've never been assessed, checking progress against a previous assessment, or satisfying a regulatory requirement with minimal disruption.
Standard assessments (broader interviews, some evidence verification) work for identifying priority improvement areas, supporting a business case for capability investment, or preparing for certification.
Comprehensive assessments (extensive stakeholder engagement, detailed evidence review, cross-functional workshops) work for organisations serious about transformation, those facing significant performance challenges, or those wanting to understand how practices compare to leading organisations.
More depth costs more and takes longer, but also reveals more. The question is whether that additional insight is valuable enough to justify the investment.
After the assessment
The assessment report isn't the deliverable - improved capability is the deliverable. The report is just an input.
Effective organisations treat assessment findings as the beginning of a conversation, not the end. They pressure-test findings with operational teams. They refine priorities based on what's achievable. They build improvement plans that account for competing priorities and resource constraints.
And they reassess periodically - not to generate new reports, but to validate that improvements are actually taking hold and to identify where the next focus should be.
A maturity assessment done well provides clarity that accelerates improvement. Done poorly, it produces expensive paperwork that changes nothing. The difference lies in approaching assessment as a means to an end - better asset management outcomes - rather than an end in itself.
Understanding Maturity in Practice
Getting value from maturity assessments requires understanding what maturity models actually measure and why that matters for your organisation. The structure of maturity scales, and how to interpret where you sit on them, determines whether assessment findings translate into meaningful action. For a comprehensive introduction to these concepts, our guide to understanding asset management maturity models covers the foundational frameworks.
The standards landscape that underpins maturity assessment continues to evolve. ISO 55001:2024 brought updates that affect how organisations should think about conformance and capability. ISO/TC 251, the technical committee responsible for asset management standards, provides authoritative context on these frameworks and their development - useful background for organisations aligning their assessments with international best practice.