How QA Separates Useful Assessments from Expensive Box-Ticking
The quality assurance practices that turn assessment data into actionable improvement roadmaps - not just compliance paperwork.

When organisations commission a maturity assessment, they're making an implicit assumption: that the assessment will produce findings they can trust. But what makes an assessment trustworthy? How do you know the score reflects your actual capability rather than the assessor's interpretation or methodology quirks?
Quality assurance for maturity assessments rarely gets the attention it deserves. Most organisations focus on selecting a framework and finding an assessor, then assume the process will take care of itself. That assumption can be expensive.
Why assessment quality varies
Maturity assessment isn't a precise science. Different assessors can look at the same organisation and reach different conclusions. This variation isn't necessarily because anyone is incompetent - it stems from legitimate challenges in evaluating organisational capability.
Evidence interpretation. What does it mean when an organisation has a documented risk framework but limited evidence it gets used? Is that a documentation gap (easily fixed) or a cultural issue (hard to fix)? Assessors make judgment calls on these questions.
Sampling effects. No assessment reviews everything. Assessors select which documents to review, which people to interview, which processes to examine. Different samples yield different findings.
Stakeholder honesty. People being assessed may present their organisation favourably, consciously or unconsciously. Some assessors are better than others at getting beyond the official story to operational reality.
Framework application. Even standardised frameworks require interpretation. What exactly constitutes "systematic" risk-based decision-making? Where's the line between Level 2 and Level 3? Different assessors draw these lines differently.
Assessor expertise. Understanding asset management practices requires domain knowledge. An assessor who's never worked in your industry may miss context that would change their interpretation of evidence.
What good quality assurance looks like
Organisations can't eliminate assessment variation, but they can reduce it and increase confidence in findings.
Clear methodology documentation. Before the assessment starts, understand exactly how it will be conducted. What evidence types will be reviewed? Who will be interviewed? How will scores be determined? How will disputed interpretations be resolved? Vague answers here predict problems later.
Evidence tracing. Every finding should connect to specific evidence. Not "stakeholders indicated data quality is an issue" but "four of six interviewed asset managers stated they don't trust equipment age data in the central system and routinely verify through site visits." Traceable findings can be challenged and verified.
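To make the idea concrete, here is a minimal sketch (in Python, with hypothetical field names and an invented example finding) of what evidence tracing implies structurally: every finding carries explicit links to its supporting evidence, so an unsupported finding is immediately visible rather than buried in prose.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A single piece of evidence: a document excerpt, interview quote, or observation."""
    source: str   # e.g. "Interview: asset manager, northern region"
    detail: str   # what was actually said, read, or observed

@dataclass
class Finding:
    """An assessment finding together with the evidence that supports it."""
    statement: str
    evidence: list[Evidence] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # A finding with no linked evidence cannot be challenged or verified.
        return len(self.evidence) > 0

finding = Finding(
    statement="Asset managers do not trust equipment age data in the central system.",
    evidence=[
        Evidence("Interview: asset manager 1", "Routinely verifies age data via site visits."),
        Evidence("Interview: asset manager 2", "Does not use central system ages for renewals."),
    ],
)
assert finding.is_traceable()
```

The point of the structure is not the code itself but the discipline it enforces: if the evidence list is empty, the finding is opinion, not assessment.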
Calibration. Assessors who conduct many assessments across different organisations develop calibrated judgment. They've seen what Level 3 actually looks like across dozens of organisations, not just theoretically. Ask prospective assessors about their experience base.
Multiple perspectives. Single-assessor findings are more susceptible to individual bias than assessments involving multiple assessors who can challenge each other's interpretations. For significant assessments, having more than one assessor review key evidence improves reliability.
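A simple way to check whether multiple assessors actually converge (a sketch with hypothetical criteria and maturity levels; formal inter-rater statistics such as Cohen's kappa would be the fuller treatment) is to have them score key criteria independently and measure where they diverge:

```python
def percent_agreement(scores_a: dict[str, int], scores_b: dict[str, int]) -> float:
    """Share of criteria on which two assessors assigned the same maturity level."""
    shared = set(scores_a) & set(scores_b)
    if not shared:
        raise ValueError("No criteria scored by both assessors")
    matches = sum(scores_a[c] == scores_b[c] for c in shared)
    return matches / len(shared)

# Hypothetical maturity levels (1-5) assigned independently by two assessors.
assessor_1 = {"risk": 3, "data": 2, "planning": 3, "lifecycle": 4}
assessor_2 = {"risk": 3, "data": 3, "planning": 3, "lifecycle": 4}

print(f"Agreement: {percent_agreement(assessor_1, assessor_2):.0%}")  # Agreement: 75%
# The criteria where assessors disagree are exactly where calibration
# discussion - and a closer look at the evidence - is needed.
```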
Draft review process. Before findings are finalised, the organisation should have the opportunity to review draft findings and raise concerns. Not to negotiate better scores, but to identify where the assessor may have misunderstood evidence or context.
Assessment of the assessment. After the assessment, evaluate how the process worked. Were the right stakeholders involved? Did findings align with operational understanding? Where there were surprises, were they genuine insights or assessment errors?
Red flags in assessment practice
Some indicators suggest an assessment may not be trustworthy.
Scores without explanation. If the assessor can give you a maturity score but can't clearly explain what evidence led to that score, something is wrong. Either the assessment was superficial or the methodology isn't rigorous.
Generic findings. If the findings could apply to almost any organisation - "you need better integration between strategy and operations" - the assessment probably didn't go deep enough to provide real insight.
No surprises anywhere. If every finding confirms what you already believed, the assessment may not have looked hard enough. Genuinely rigorous assessment usually reveals something unexpected.
Rapid turnaround. Meaningful assessment takes time. If someone offers to assess your organisation in a week, either the scope is very narrow or the depth is superficial.
Assessor defensiveness. When you question findings, a good assessor engages with your concerns and explains their reasoning. An assessor who becomes defensive when challenged may not have strong evidence behind their conclusions.
Building internal QA capability
Over time, organisations should develop their own ability to evaluate assessment quality.
Understand the framework. Don't outsource framework understanding entirely to assessors. Develop internal expertise in whatever framework you're using so you can have informed conversations about how it's being applied.
Track assessment consistency. If you conduct assessments periodically, track how scores change. Are changes explained by real improvements (or declines), or do they reflect methodology variation? Inconsistent scores without clear causes suggest QA issues.
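One lightweight way to operationalise this tracking (a sketch only; the record structure and the 0.5-level threshold are assumptions, not a standard) is to flag score movements that have no documented cause:

```python
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    year: int
    scores: dict[str, float]      # criterion -> maturity level
    documented_changes: set[str]  # criteria where a real improvement or decline is known

def flag_unexplained_shifts(prev: AssessmentRecord, curr: AssessmentRecord,
                            threshold: float = 0.5) -> list[str]:
    """Criteria whose score moved more than `threshold` with no documented cause.
    These shifts may reflect methodology variation rather than real change."""
    flags = []
    for criterion, new_score in curr.scores.items():
        old_score = prev.scores.get(criterion)
        if old_score is None:
            continue
        if abs(new_score - old_score) > threshold and criterion not in curr.documented_changes:
            flags.append(criterion)
    return flags

prev = AssessmentRecord(2022, {"risk": 2.5, "data": 3.0}, set())
curr = AssessmentRecord(2024, {"risk": 3.5, "data": 2.2}, {"risk"})  # risk programme delivered
print(flag_unexplained_shifts(prev, curr))  # ['data'] - the risk shift is explained, data is not
```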
Compare internal and external views. Conduct internal self-assessments between external assessments. When external assessments come in higher or lower than internal views, understand why. The gap might indicate blind spots in either direction.
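The same logic applies to the internal-versus-external comparison; a minimal sketch (hypothetical criteria and levels) computes the per-criterion gap and surfaces anything non-zero for investigation:

```python
internal = {"risk": 3, "data": 3, "planning": 2}   # self-assessment levels
external = {"risk": 3, "data": 2, "planning": 3}   # external assessor levels

# Gaps in either direction flag potential blind spots worth investigating.
gaps = {c: external[c] - internal[c] for c in internal if c in external}
blind_spots = {c: g for c, g in gaps.items() if g != 0}
print(blind_spots)  # {'data': -1, 'planning': 1}
```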
Participate actively. Assessment shouldn't be something done to the organisation. Internal staff should be engaged throughout, helping identify relevant evidence and stakeholders, and challenging preliminary findings where appropriate.
When to invest in assessment QA
Not every assessment needs extensive QA. A quick desktop review for internal purposes may not warrant significant QA investment. But certain situations justify more rigour.
Regulatory or certification context. When assessment findings will influence regulatory outcomes or certification decisions, QA matters more. Getting it wrong has consequences.
Significant investment decisions. If assessment findings will inform major capability investment, you need confidence the findings are accurate. An inflated maturity score could lead to under-investment in capabilities that genuinely need work; a deflated score could lead to spending on gaps that don't exist.
Organisational change. When assessment findings will drive significant organisational change - restructures, new systems, major process redesigns - QA protects against implementing changes that address assessment artefacts rather than real gaps.
Comparing across entities. If you're comparing maturity across business units, regions, or organisations, QA ensures you're comparing like with like. Methodology variation undermines comparative analysis.
Quality assurance for maturity assessments isn't about achieving false precision in something inherently judgmental. It's about understanding the limitations of assessment, reducing unnecessary variation, and building appropriate confidence in findings. Organisations that invest in assessment QA get more value from the assessments they commission - not because the scores are higher, but because they can trust them.
Quality in the broader assessment context
Ensuring assessment quality connects to fundamental questions about what makes maturity evaluation valuable in the first place. Quality assurance mechanisms should address the common failure modes that reduce assessment value. For context on what can go wrong, our analysis of common pitfalls in maturity assessments explores the patterns that undermine useful findings.
The frameworks underpinning quality assessment practice draw from international standards bodies. The Asset Management Standards hub maintained by the Asset Management Council provides access to key reference documents including the GFMAM publications that inform assessment methodology. Understanding these foundational documents helps organisations evaluate whether assessment approaches align with established practice.