Stop Ranking Your Assets. Start Understanding Them.
Criticality scoring is broken. Shane Scriven argues the better question is not which asset ranks highest, but how each asset is critical and to whom.

Most asset management teams spend enormous effort figuring out which assets are most critical. They build elaborate scoring systems, argue about weightings, and produce a number for every asset in the register. Then they do something with that number — or, more often, they don't. It's time to ask whether the whole exercise is pointed in the wrong direction.
The current state of criticality practice
Criticality assessment is well established in asset management. The ISO 55000 series, the Asset Management Landscape, and virtually every maintenance strategy framework recognise that not all assets deserve equal attention. The intent is sound: focus your limited resources where failure matters most.
In practice, organisations have built sophisticated machinery to answer this question. Weighted scoring matrices. Multi-factor models combining safety consequences, production impact, environmental exposure, maintenance cost, and regulatory obligation. Sometimes those weights are calibrated carefully. More often, they reflect whoever had the most influence in the room when the methodology was designed.
The output is a single number per asset — a criticality score — and a ranked list. Assets in the top tier get more attention. Everything else waits its turn.
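To make the pattern concrete, here is a minimal sketch of that kind of model in Python. The factor names, weights, and ratings are purely illustrative, not drawn from any particular methodology or standard.

```python
# Illustrative only: factor names, weights, and ratings are hypothetical,
# not taken from any specific methodology or standard.
WEIGHTS = {
    "safety": 0.30,
    "production": 0.25,
    "environmental": 0.20,
    "regulatory": 0.15,
    "maintenance_cost": 0.10,
}

def criticality_score(ratings: dict[str, int]) -> float:
    """Collapse per-factor ratings (1-5) into a single weighted number."""
    return sum(WEIGHTS[factor] * rating for factor, rating in ratings.items())

# One pump in, one number out -- everything else about the asset is gone.
pump = {"safety": 4, "production": 5, "environmental": 2,
        "regulatory": 3, "maintenance_cost": 3}
print(round(criticality_score(pump), 2))  # 3.6
```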
This approach is well intentioned. But it has some serious problems.
The problem with ranking
The first problem is that criticality scores become political. Once people understand that a higher score means more maintenance budget, more spare parts, more management attention — they start gaming the inputs. Consequence ratings drift upward. Safety risks get overstated. Weightings get quietly adjusted. The methodology that was supposed to be objective becomes a negotiation.
The second problem is clustering. When you run enough assets through a scoring system, the scores converge toward the middle. You end up with a large band of assets all sitting at roughly the same number, which tells you almost nothing about how to differentiate your treatment of them. The distribution that was supposed to help you prioritise has instead created a fog.
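The clustering effect is easy to see in a toy simulation. The sketch below assumes five equally weighted factors rated independently at random, which no real methodology does exactly, but the averaging behaviour that drives the convergence is the same.

```python
import random

random.seed(42)

# Toy model: 1,000 assets, five factors each rated 1-5 at random,
# equal weights. Real ratings aren't uniform, but averaging several
# factors pulls scores toward the middle regardless.
scores = [sum(random.randint(1, 5) for _ in range(5)) / 5
          for _ in range(1_000)]

middle = [s for s in scores if 2.4 <= s <= 3.6]
print(f"{len(middle) / len(scores):.0%} of assets sit between 2.4 and 3.6")
# Typically around 70% -- a broad middle band that ranks almost nothing.
```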
The third problem is the loss of context. A criticality score is an abstraction. Once you strip an asset down to a number, you lose sight of what that asset is actually critical to. You can't tell from the score whether this pump is critical because it protects a worker's safety, because it keeps a production line running, or because a regulator requires it to be operational at all times. That distinction matters enormously when you're deciding how to manage failure risk.
And then there's the fundamental credibility problem. As Shane Scriven, Managing Director of SAS Asset Management, puts it:
"I've never met a criticality assessment or asset hierarchy that everyone agreed upon."
That's not a knock on the people doing the work. It's a reflection of the method. When the output of a complex scoring exercise is a single ranked list, disagreement is almost guaranteed — because people are comparing assets that serve entirely different purposes, in different operating contexts, for different groups of people. There's no objective ground to stand on.
The better question
The problem isn't that organisations are thinking about criticality. The problem is the question they're asking.
"Which asset is most critical?" is a ranking question. It assumes criticality is a property of the asset itself — something inherent that can be measured and compared across the register.
A more useful question is: "How is this asset critical?"
This shifts the lens from ranking to understanding. Instead of producing a score, you're building a picture. You're asking: "Who relies on this asset, and what breaks when it fails?"
Look at criticality through the eyes of each group that depends on the asset. A pump in a water treatment plant might be critical to process safety for the operations team, critical to regulatory compliance for the environmental manager, and critical to service continuity for the customer experience team. Each dimension is real. Each points toward a different management response.
Two assets of the same type — say, two identical pumps — can have completely different criticality profiles depending on where they sit in the system. One might be critical to a single dimension. The other might touch four. Neither is more or less critical in absolute terms. They're critical in different ways, to different people, with different consequences.
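One way to picture this is to treat criticality as a profile rather than a number. The sketch below is hypothetical, with invented dimension names and consequences, but it shows how two identical pumps can carry entirely different profiles.

```python
# Hypothetical profiles for two identical pumps. Dimension names and
# consequences are invented for illustration.
profiles = {
    "pump-101": {
        "process_safety": "protects operators from an overpressure scenario",
    },
    "pump-102": {  # same model, different position in the system
        "production": "sole feed to treatment train B",
        "regulatory": "discharge licence requires continuous operation",
        "environmental": "failure risks an untreated release to the river",
        "customer": "outage interrupts supply to downstream customers",
    },
}

for asset, profile in profiles.items():
    print(f"{asset}: critical in {len(profile)} dimension(s)")
    for dimension, consequence in profile.items():
        print(f"  {dimension}: {consequence}")
```

Neither pump outranks the other; the structure keeps the "to whom" and "in what way" that a single score discards.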
This isn't a theoretical distinction. It changes what you do. An asset that is critical only to production output might be managed primarily through condition monitoring and planned replacement. An asset that is critical to worker safety requires a different response entirely, regardless of whether its score sits above or below the one next to it on the ranked list.
What this looks like in practice
The practical shift is straightforward. For each asset, document two things, illustrated in the sketch after this list:
- Who relies on it. Map the asset against the dimensions it serves — safety, production, environmental, regulatory, customer experience. Be specific. "Safety" is not enough; name the hazard scenario. "Production" is not enough; name the process it enables or protects.
- What breaks when it fails. For each group and each dimension, describe the consequence of failure. Not a score — a description. What actually happens? How quickly? To whom?
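Here is a minimal sketch of what such a record might look like, assuming nothing about any particular asset management system; the field names and the example entry are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalityEntry:
    """One dimension of criticality: a description, not a score."""
    dimension: str            # e.g. "safety", "production", "regulatory"
    who_relies: str           # the specific group or role, not a category
    failure_consequence: str  # what actually happens, how quickly, to whom

@dataclass
class AssetCriticality:
    asset_id: str
    entries: list[CriticalityEntry] = field(default_factory=list)

# Illustrative entry: note the named hazard scenario, not just "safety".
record = AssetCriticality(
    asset_id="pump-101",
    entries=[
        CriticalityEntry(
            dimension="safety",
            who_relies="operators working in the dosing room",
            failure_consequence=(
                "loss of dilution flow lets chlorine concentration build "
                "within minutes, exposing anyone in the room"
            ),
        ),
    ],
)
```

The consequence field is deliberately free text: a description survives personnel change in a way a number never does.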
This documentation doesn't need to be elaborate. It can sit alongside the asset record in your asset management system. What it gives you is context that survives personnel change, budget cycles, and maintenance strategy reviews. The next engineer who picks up the file understands not just that this asset is important, but why it's important and for whom.
It also reframes the conversation when maintenance budgets are squeezed. Instead of defending a score, you're defending an impact on real people. That's a conversation asset managers can have with confidence — and one that decision makers can actually engage with.
One more thing worth naming: most assets are critical to something. Organisations don't have spare assets sitting idle as backups for the unimportant ones. If an asset exists in your register, something in your operation depends on it. The question is never really "is this critical?" — it's "critical to what, and what do we owe the people who depend on it?"
Where to start
If you're reviewing your criticality methodology — or finding that your current approach has gone stale and nobody trusts the outputs — start here:
- Pick a sample of ten assets across different parts of your register.
- For each one, spend thirty minutes with the people who depend on it: the operator, the safety team, the compliance manager.
- Ask them: what relies on this asset, and what happens when it's not available?
- Write that down. Don't score it yet.
You'll quickly find that the picture you build is richer, more defensible, and more useful than any ranked list you've produced before. You'll also find that the conversations are easier — because you're talking about real consequences, not abstract weightings.
From there, you can build a methodology that captures this dimensional view systematically. But the starting point is a shift in question, not a new spreadsheet.
The point
Criticality ranking has dominated asset management practice for decades. The intent was always to focus attention and resources where they matter most. That intent is right. The method deserves a rethink.
Stop asking which asset ranks highest. Start asking how each asset is critical, to whom, and what that means for how you manage it. The answers will serve you — and the people who depend on your assets — far better than any score.
If you're rethinking how your organisation approaches criticality, or if the current methodology has lost the confidence of the people who use it, we'd welcome the conversation.
