What a Maintenance Maturity Assessment Actually Reveals
Beyond the score: how a maintenance maturity assessment exposes the gaps that matter across 45 subjects, from work management to reliability engineering.

Part 1 of 4: Maintenance Maturity Assessment Series
Here's an uncomfortable question: if someone asked you to describe your organisation's maintenance maturity, could you give an answer more specific than "we're okay" or "we need to improve"?
Most can't. And that's the problem. Without a structured framework to assess against, maintenance maturity is just a feeling. Feelings don't drive improvement plans, secure budgets, or convince senior leadership that investment in maintenance capability will deliver measurable returns.
A proper maintenance maturity assessment doesn't give you a single score. It gives you a capability profile across 45 distinct subjects, spanning everything from how you select maintenance tactics to how you record failure data. That level of granularity is what separates useful assessments from vague benchmarking exercises.
What Gets Assessed: The GFMAM Maintenance Framework
The Global Forum on Maintenance and Asset Management (GFMAM) published the Maintenance Framework, now in its Second Edition, as a structured reference for the discipline of maintenance and its management. It organises maintenance into nine subject groups, each covering a distinct domain of maintenance practice.
Those nine groups are:
1. Business Requirements and Organisational Context
2. Asset Creation and Acquisition
3. Maintenance Tactics and Task Types
4. Asset Maintenance Strategy Development
5. Human and Material Resource Management
6. Maintenance Work Management
7. Asset Performance and Condition Management
8. Maintenance Data and Information Management
9. Maintenance Program Management
Within those nine groups sit 45 individual subjects. Each subject represents a specific capability area that can be independently assessed. This matters because organisations rarely perform uniformly across all areas. You might have excellent condition monitoring programs (Group 7) but poor work closeout practices (Group 6). A single maturity "score" would mask that gap entirely.
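To see why averaging is the problem, here's a minimal sketch in Python. The ratings are invented for illustration, not real assessment data; the subject labels simply follow the examples above.

```python
# Invented ratings on the 0-5 scale; labels follow the examples above,
# but the numbers are illustrative only.
ratings = {
    "Condition monitoring (Group 7)": 4,
    "Work closeout (Group 6)": 1,
    "Work scheduling (Subject 6.3)": 3,
    "Failure data recording (Group 8)": 2,
}

# A single averaged "score" looks unremarkable...
average = sum(ratings.values()) / len(ratings)
print(f"Overall score: {average:.1f}")  # 2.5 -- "we're okay"

# ...but the subject-level profile exposes the gap the average hides.
for subject, level in sorted(ratings.items(), key=lambda item: item[1]):
    print(f"Level {level}: {subject}")
```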
What the Maturity Levels Actually Mean
Each subject is assessed against a six-level maturity scale, from Level 0 (Innocent) through to Level 5 (Excellent). The levels aren't arbitrary. Each has defined characteristics that describe what capability looks like in practice at that stage.
Take work scheduling (Subject 6.3) as an example. At Level 1 (Aware), the organisation recognises that scheduling maintenance work matters, but scheduling is largely reactive and ad hoc. At Level 3 (Competent), weekly and daily schedules are developed collaboratively with operations, resource loading is managed, and schedule compliance is measured. At Level 5 (Excellent), scheduling is dynamically optimised using real-time data, continuously benchmarked, and integrated with broader organisational planning.
The gap between Level 1 and Level 3 isn't just about doing more. It's about fundamentally different ways of working. That's why a maturity assessment is revealing: it shows you not just where you are, but what "better" actually looks like in concrete, operational terms.
Or consider condition-based maintenance (Subject 3.2). At Level 2 (Developing), an organisation has started applying condition monitoring to some assets, but coverage is inconsistent and technology selection isn't linked to failure mode analysis. At Level 4 (Optimising), condition monitoring coverage is comprehensive, alarm setpoints are calibrated against asset criticality, and findings are systematically integrated into work management processes. The difference is between having some vibration probes installed and having a condition monitoring program that actually drives maintenance decisions.
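Because the scale is ordinal, it translates naturally into a simple data type. Here's a minimal sketch in Python using the level names from the examples above; the class and the gap calculation are our own illustration, not part of the framework itself.

```python
from enum import IntEnum

# The six-level scale as an ordered type. Level names are those used in
# the descriptors above; the class itself is an illustrative construct.
class MaturityLevel(IntEnum):
    INNOCENT = 0
    AWARE = 1
    DEVELOPING = 2
    COMPETENT = 3
    OPTIMISING = 4
    EXCELLENT = 5

# Because the levels are ordinal, gap analysis is simple arithmetic.
current, target = MaturityLevel.AWARE, MaturityLevel.COMPETENT
print(f"Gap for work scheduling: {target - current} levels")  # 2
```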
Evidence, Not Opinions
Here's the thing that surprises many organisations going through their first assessment: maturity ratings aren't based on what people say they do. They're based on what they can demonstrate.
The GFMAM framework maps specific artefacts to each subject group. For Asset Maintenance Strategy Development (Group 4), those artefacts include criticality analysis outputs, FMEA worksheets, individual asset maintenance plans, maintenance procedures, and recommended spares lists. For Maintenance Data and Information Management (Group 8), they include data governance policies, asset master data standards, failure coding structures, and CMMS configuration documentation.
An assessment examines whether these artefacts exist, whether they're current, whether they're actually used, and whether they connect logically to each other. A maintenance plan (Group 4) that doesn't reference the criticality analysis (also Group 4) is a red flag. An asset register (Group 8) that doesn't align with the CMMS hierarchy is another. These cross-references are where a maturity assessment reveals the real state of play.
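Those consistency checks are straightforward to express. Here's a minimal sketch in Python, assuming hypothetical artefact records; the field names, asset tags, and flags are invented for illustration, not a data model prescribed by the framework.

```python
# Hypothetical artefact records for illustration only.
maintenance_plans = [
    {"asset": "PUMP-101", "references_criticality_analysis": True},
    {"asset": "PUMP-102", "references_criticality_analysis": False},  # red flag
]
asset_register = {"PUMP-101", "PUMP-102", "FAN-201"}
cmms_hierarchy = {"PUMP-101", "PUMP-102"}  # FAN-201 missing from the CMMS

# Check 1: every maintenance plan should trace back to the criticality analysis.
unlinked_plans = [p["asset"] for p in maintenance_plans
                  if not p["references_criticality_analysis"]]

# Check 2: the asset register and CMMS hierarchy should cover the same assets.
misaligned_assets = asset_register ^ cmms_hierarchy  # symmetric difference

print("Plans not linked to criticality analysis:", unlinked_plans)
print("Assets missing from register or CMMS:", sorted(misaligned_assets))
```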
The framework also maps relevant standards to each subject group. These include ISO 55001 for the management system, SAE JA1011 for RCM processes, IEC 60812 for FMEA methodology, ISO 14224 for failure data collection, API 580 and 581 for risk-based inspection, and EN 13306 for maintenance terminology. These standards provide the benchmarks that prevent assessment from becoming subjective opinion.
What a Capability Profile Reveals
In practice, a completed assessment produces a capability profile that maps maturity levels across all 45 subjects. This profile typically reveals patterns that aren't visible from inside the organisation.
Common patterns include strong strategy development (Groups 1 and 4) but weak execution and data capture (Groups 6 and 8). This suggests good intentions that aren't translating to the workshop floor. Another common pattern is strong technical capability in condition monitoring (Group 7) but limited integration with work management (Group 6) and data systems (Group 8), meaning condition findings aren't consistently driving maintenance actions.
The profile also reveals where investment will have the most impact. Improving data quality (Group 8) typically has a multiplier effect across several other groups because reliable data underpins performance measurement (Group 7), work management (Group 6), and strategy development (Group 4).
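For illustration, here's a minimal sketch in Python that rolls an invented subject-level profile up to group averages. The subject codes and ratings are hypothetical; the point is how quickly a group-level view surfaces the weak areas described above.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical capability profile keyed by "group.subject" codes;
# ratings are invented for illustration.
profile = {
    "4.1": 4, "4.2": 4, "4.3": 3,   # strategy development: strong
    "6.1": 2, "6.2": 2, "6.3": 1,   # work management: weak
    "7.1": 4, "7.2": 3,             # performance and condition: strong
    "8.1": 1, "8.2": 2,             # data and information: weak
}

# Group subject ratings by their group number.
by_group = defaultdict(list)
for subject, level in profile.items():
    group = subject.split(".")[0]
    by_group[group].append(level)

# Rank groups from weakest to strongest to highlight where to focus first.
for group, levels in sorted(by_group.items(), key=lambda kv: mean(kv[1])):
    print(f"Group {group}: average Level {mean(levels):.1f}")
```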
Key Takeaways
A maturity assessment isn't a report card. It's a diagnostic tool that shows you specifically where your maintenance capability sits, what's holding it back, and where focused improvement will deliver the greatest return. The GFMAM framework's 45-subject structure means you get actionable specificity rather than vague generalisations. Maturity level descriptors give you concrete targets to work toward, not abstract aspirations. And evidence-based assessment ensures the findings reflect reality, not optimism.
Next in this series → How QA Separates Useful Assessments from Expensive Box-Ticking
Want to understand where your organisation sits? Book a diagnostic assessment or download our Maintenance Management Maturity Framework overview.