How QA Separates Useful Assessments from Expensive Box-Ticking

What separates a useful maintenance maturity assessment from an expensive box-ticking exercise? The answer lies in evidence, cross-referencing, and calibrated scoring.

Part 2 of 4: Maintenance Maturity Assessment Series

You've commissioned a maturity assessment. The assessor spent two weeks on site, interviewed 30 people, and delivered a 60-page report with colour-coded spider diagrams. Everyone nods at the findings. The report goes into SharePoint. Nothing changes.

Sound familiar? The issue often isn't the assessment itself. It's the quality assurance behind it. A well-structured assessment framework like the GFMAM Maintenance Framework provides the architecture, but without rigorous QA, the results can be misleading, inconsistent, or simply too vague to act on.

Quality assurance in maturity assessment isn't about adding bureaucracy. It's about ensuring the findings are trustworthy enough to drive investment decisions, improvement plans, and organisational change.

The Evidence Problem

The most common QA failure in maturity assessments is relying on interviews without verifying evidence. People generally describe their organisation more favourably than reality warrants. This isn't dishonesty. It's human nature. The maintenance planner who explains how work packages are developed is describing the process as designed, not necessarily as executed.

A quality assessment addresses this by verifying artefacts against each subject in the framework. For Subject 4.2 (FMEA/FMECA), the assessor doesn't just ask "do you do FMEA?" They examine the actual worksheets. Are they completed to a standard consistent with IEC 60812? Do they cover the asset's dominant failure modes? Were they developed by people with operational knowledge of the equipment? Are the recommended tasks traceable through to the maintenance plan?

For Subject 6.5 (Work Closeout and History Recording), the assessor reviews actual completed work orders. Are failure codes populated? Is the coding structure consistent with ISO 14224 principles? Does the level of detail support future analysis? Are material and labour costs recorded accurately?
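Part of this verification can be mechanised. The sketch below is a minimal illustration, not a description of any particular assessor's tooling, and the field names are assumed rather than taken from a real CMMS export. It scans a sample of closed work orders and flags the gaps described above: missing failure codes, thin history text, and unrecorded costs.

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    """A closed work order from a CMMS export (field names are illustrative)."""
    wo_id: str
    failure_code: str | None
    history_text: str
    labour_cost: float | None
    material_cost: float | None

def closeout_gaps(work_orders: list[WorkOrder], min_history_chars: int = 40) -> dict[str, list[str]]:
    """Map each work order ID to the closeout gaps found in it."""
    gaps: dict[str, list[str]] = {}
    for wo in work_orders:
        issues = []
        if not wo.failure_code:
            issues.append("no failure code recorded")
        if len(wo.history_text.strip()) < min_history_chars:
            issues.append("history text too brief to support later analysis")
        if wo.labour_cost is None or wo.material_cost is None:
            issues.append("labour or material costs not recorded")
        if issues:
            gaps[wo.wo_id] = issues
    return gaps
```

A script can confirm that a failure code is present; whether the codes are applied consistently with ISO 14224 principles still needs an assessor who knows the asset class.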

This artefact verification is what separates an assessment that produces actionable findings from one that produces a comfortable but unreliable picture.

Cross-Referencing: Where Inconsistencies Surface

The GFMAM framework's 45 subjects don't exist in isolation. They connect, and a quality assessment tests those connections.

Asset Criticality Analysis (Subject 4.1) should directly inform the scope and depth of FMEA work (Subject 4.2). If the criticality assessment identifies a pump as high-consequence, but the FMEA for that pump is generic or absent, the assessment should flag the gap. Similarly, the maintenance plans developed under Subject 4.4 should reflect the outputs of the criticality and FMEA work. If they don't, the maturity rating for Subject 4.4 needs to account for that disconnect.
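Checks like this can be made repeatable by running them over the assessment's own evidence. The sketch below is purely illustrative and assumes simplified data structures (a criticality register and an FMEA register keyed by asset tag); it flags high-consequence assets whose FMEA coverage is generic or missing.

```python
def criticality_fmea_gaps(
    criticality: dict[str, str],    # asset tag -> rating, e.g. "high" / "medium" / "low"
    fmea_register: dict[str, str],  # asset tag -> "asset-specific" or "generic"; absent if no FMEA exists
) -> list[str]:
    """Flag disconnects between Subject 4.1 (criticality) and Subject 4.2 (FMEA) evidence."""
    findings = []
    for asset, rating in criticality.items():
        if rating != "high":
            continue
        coverage = fmea_register.get(asset)
        if coverage is None:
            findings.append(f"{asset}: high consequence, no FMEA on record")
        elif coverage == "generic":
            findings.append(f"{asset}: high consequence, only a generic FMEA")
    return findings

# Illustrative only: a high-consequence pump with no FMEA surfaces immediately.
print(criticality_fmea_gaps({"P-101": "high"}, {}))  # ['P-101: high consequence, no FMEA on record']
```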

The same logic applies across subject groups. Condition monitoring programmes (Subject 7.1 in Group 7) should be informed by the failure modes identified in strategy development (Group 4). The condition data collected should flow into the CMMS (Group 8) and trigger work orders through work management processes (Group 6). If the condition monitoring team operates in isolation, with their own systems and their own priorities, the organisation might score well on condition monitoring capability but poorly on integration.

A quality assessor maps these connections explicitly. They look for logical flow between subjects, not just isolated competence within each one. This cross-referencing is where the most valuable findings typically emerge, because it reveals systemic issues that subject-by-subject assessment alone would miss.

Calibrated Scoring: What Level 3 Actually Means

The maturity scale from Level 0 (Innocent) to Level 5 (Excellent) provides a structured rating system, but its value depends entirely on consistent calibration. Without QA, the same organisation could be rated Level 2 by one assessor and Level 4 by another.

Proper QA addresses this through defined maturity level descriptors for each subject. Level 3 (Competent) in Spare Parts Management (Subject 5.4) means something specific: the organisation has bills of materials linked to assets, stocking strategies based on criticality and lead time, inventory management processes in place, and regular review of stock levels. It doesn't mean "spare parts are generally available."

The descriptors also prevent a common trap: averaging across criteria within a subject. If an organisation has excellent inventory management processes but no bills of materials linked to assets, it hasn't achieved Level 3 in spare parts management. QA ensures assessors apply the maturity scale as a threshold, not a sliding scale.
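The difference between a threshold and a sliding scale is easy to show in code. The sketch below is a minimal illustration, with invented pass/fail results standing in for the real Subject 5.4 descriptors: the subject's rating is the highest level at which every criterion is met, not an average of the criteria that happen to be satisfied.

```python
def threshold_level(criteria_met: dict[int, list[bool]]) -> int:
    """Return the highest maturity level whose descriptors are all satisfied.

    criteria_met maps each level to pass/fail results against that level's descriptors.
    Levels must be achieved in order: a failed criterion caps the rating there.
    """
    achieved = 0
    for level in sorted(criteria_met):
        if all(criteria_met[level]):
            achieved = level
        else:
            break  # a threshold, not a sliding scale
    return achieved

# Illustrative only: strong inventory processes cannot offset missing BOM-to-asset links.
spare_parts_5_4 = {
    1: [True, True],
    2: [True, True, True],
    3: [True, False, True, True],  # bills of materials not linked to assets
}
print(threshold_level(spare_parts_5_4))  # -> 2, even though most Level 3 criteria are met
```

Averaging the same results would drift toward Level 3 and hide exactly the gap the descriptors exist to expose.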

Calibration extends to the standards mapped to each subject. An assessor evaluating RCM processes (Subject 4.3) against SAE JA1011 should apply consistent evaluation criteria. An assessor reviewing risk-based inspection (Subject 4.5) should reference API 580 and 581 consistently. These standards aren't optional extras. They're the benchmarks that make maturity ratings comparable across assessments, assessors, and organisations.

What Good QA Looks Like in Practice

In practice, assessment QA includes several mechanisms working together. Peer review of assessment findings by a second qualified assessor catches individual bias and interpretation gaps. Moderation sessions, where assessors discuss borderline ratings against the maturity descriptors, improve consistency. Evidence logs that document which artefacts were reviewed for each subject create an audit trail. And calibration workshops before the assessment begins ensure all assessors share a common understanding of what each maturity level looks like.
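Of these, the evidence log is the easiest to standardise. A minimal sketch of one entry, with field names and example values invented purely for illustration, shows the audit trail a rating needs behind it: which subject, which artefacts were examined, who reviewed them, and why the level was assigned.

```python
from dataclasses import dataclass

@dataclass
class EvidenceLogEntry:
    """One line of the assessment audit trail (field names are illustrative)."""
    subject: str                   # e.g. "6.5 Work Closeout and History Recording"
    artefacts_reviewed: list[str]  # work order samples, FMEA worksheets, plan extracts
    reviewed_by: str               # primary assessor and peer reviewer
    rating: int                    # assigned maturity level
    rationale: str                 # why the evidence supports this level

entry = EvidenceLogEntry(
    subject="6.5 Work Closeout and History Recording",
    artefacts_reviewed=["Sample of closed work orders", "CMMS failure code structure"],
    reviewed_by="Lead assessor; peer-reviewed",
    rating=2,
    rationale="Failure codes inconsistently populated; cost capture incomplete.",
)
```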

Worth noting: QA also means being transparent about limitations. If certain subjects couldn't be fully assessed due to time constraints or access limitations, a quality assessment says so explicitly rather than extrapolating from incomplete evidence. Partial data acknowledged is more valuable than complete data fabricated.

Key Takeaways

Assessment quality comes down to three things: verifying artefacts rather than accepting claims, cross-referencing between related subjects to expose systemic gaps, and applying calibrated scoring against defined maturity descriptors. Without these, you get a report. With them, you get a reliable foundation for improvement investment. The difference isn't cost or complexity. It's rigour.

← Previous: What a Maintenance Maturity Assessment Actually Reveals

Next in this series → IAM Endorsed Assessors: What the Credential Means for Your Assessment

Want to ensure your next assessment delivers findings you can trust? Talk to us about our QA-assured assessment process.
