95% PM Compliance. Still Failing. Here's Why.
Compliance measures whether PM tasks get done. It says nothing about whether they're the right tasks. Here's how to audit your PM programme for actual effectiveness, not just completion.

95% PM completion. Green across the board. And the plant still isn't getting more reliable.
That number looks great in a report. It does not mean your maintenance programme is working.
Compliance measures whether tasks get done. It says nothing about whether they're the right tasks, at the right frequency, for the right failure modes. A PM schedule that hasn't been reviewed in three years can hit perfect completion numbers right up until something fails that should have been caught.
That's not a maintenance problem. That's an audit problem.
The difference between compliance and effectiveness
Compliance is easy to measure. Did the task get completed? Yes or no. It's a binary output that fits neatly on a dashboard and satisfies an auditor.
Effectiveness is harder. It asks a different question: did completing this task actually reduce the likelihood of failure? That requires you to connect PM activity to failure history, which most maintenance teams don't do systematically.
What we see regularly in mining maintenance assessments is a PM schedule that's been inherited, not designed. Someone built it when the plant was commissioned or after a significant failure event. It was reasonable at the time. But the operating context has changed, the failure modes have shifted, and nobody has gone back to ask whether the tasks still make sense.
The result is organised waste: work that gets done with discipline and consistency, producing no reliability improvement and consuming maintenance hours that could be applied elsewhere.
The five gaps that separate effective PM programmes from compliant ones
Across the assessments we've run, five gaps consistently explain the difference between a PM programme that actually works and one that just gets completed.
Task validity. Can every PM task in your schedule be traced to a specific failure mode it's designed to address? If a task can't be justified by a consequence worth avoiding, it probably shouldn't be there. The honest answer in most operations is that a significant proportion of tasks exist because they've always existed, not because they're still relevant.
Interval accuracy. Are your PM intervals based on failure data or on original equipment manufacturer recommendations? OEM recommendations are conservative defaults written to protect the manufacturer, not to optimise your maintenance spend. For assets with site-specific operating conditions, real failure data will almost always tell a different story.
Task execution quality. When a PM task says "inspect bearing," does the technician completing it know what they're looking for, what constitutes a pass or fail, and how to record what they found? Vague task instructions produce vague findings. You can't build a case for interval adjustment or task modification on data that says nothing.
Data capture and feedback loops. Is your PM finding data being used to improve the programme? Can you extract a report from your CMMS showing which tasks are generating the most corrective work? If the data flowing out of PM completion isn't being used to make decisions, you're running a compliance programme, not a reliability programme.
Governance and review cycles. Is there a scheduled review of the PM programme, or does it only get looked at after something fails? For critical assets, an annual review cycle is a minimum. For assets with dynamic operating conditions or changing failure modes, more frequent reviews are warranted.
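A first pass at the data-capture gap does not need specialist tooling. As a minimal sketch, assuming a flat work-order export where each corrective order has been linked back to the PM task covering the same asset (the field names `pm_task`, `type`, and `hours` are hypothetical; map them to whatever your CMMS actually exports), a "corrective work by PM task" report can be a few lines of script:

```python
from collections import defaultdict

# Hypothetical CMMS work-order export. In a real extract each row would
# carry asset, date, and failure-mode fields as well; only the three
# fields used below are shown here.
work_orders = [
    {"pm_task": "PM-001 Inspect pump bearing", "type": "corrective", "hours": 6},
    {"pm_task": "PM-001 Inspect pump bearing", "type": "corrective", "hours": 4},
    {"pm_task": "PM-002 Lubricate conveyor",   "type": "corrective", "hours": 2},
    {"pm_task": "PM-003 Check belt tension",   "type": "pm",         "hours": 1},
]

def corrective_load_by_task(orders):
    """Sum corrective hours per PM task, worst offenders first."""
    totals = defaultdict(float)
    for wo in orders:
        if wo["type"] == "corrective":
            totals[wo["pm_task"]] += wo["hours"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for task, hours in corrective_load_by_task(work_orders):
    print(f"{task}: {hours:.0f} corrective hours")
```

Tasks that sit at the top of that list despite being completed on schedule are exactly the ones compliance metrics hide: the PM is being done, and the failure mode is getting through anyway.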
What to do with the finding
The good news is that most of what's needed to close these gaps doesn't require new technology or significant capital. It requires structured attention.
Start with a sample. Pick your highest-criticality asset class and run through the five gaps honestly. Not the whole PM schedule, just one asset class. You'll find enough to work with.
The most common finding is on task validity. A proportion of PM tasks will have no clear justification when you trace them back to failure modes. Some will be duplicates of other tasks. Some will address failure modes that have been eliminated by design changes. Removing or modifying these tasks doesn't reduce reliability; it frees up maintenance hours for work that actually matters.
The second most common finding is interval accuracy. Assets that haven't failed in years are often being over-maintained, and assets with recurring failures often have intervals that are too long. The CMMS has the data to make these adjustments. It just needs someone to look at it.
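The comparison behind that interval finding is simple enough to sketch. The following is a first-pass illustration only, not a substitute for proper reliability analysis: it computes a crude mean time between failures from failure dates and compares it against the PM interval (the `margin` tolerance band is an arbitrary illustrative value, not an industry standard):

```python
from datetime import date

def mtbf_days(failure_dates):
    """Crude mean time between failures, in days, from a failure history."""
    ds = sorted(failure_dates)
    gaps = [(b - a).days for a, b in zip(ds, ds[1:])]
    return sum(gaps) / len(gaps)

def interval_verdict(pm_interval_days, failure_dates, margin=0.5):
    """Flag a PM interval as too long or too short against observed MTBF.

    `margin` is an illustrative tolerance band, not an industry standard.
    """
    mtbf = mtbf_days(failure_dates)
    if pm_interval_days > mtbf:
        return "interval too long: failures arrive before the PM does"
    if pm_interval_days < mtbf * margin:
        return "likely over-maintained: interval well inside observed MTBF"
    return "interval roughly consistent with failure history"

# Hypothetical failure history for one asset, plus its current 90-day PM.
history = [date(2023, 1, 10), date(2023, 7, 2), date(2024, 1, 15)]
print(interval_verdict(90, history))
```

Run across an asset class, even a rough screen like this separates the intervals worth reviewing from the ones the failure data already supports.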
Run the audit
We've built a free PM Effectiveness Audit Checklist that walks through the five gaps with 25 specific questions across task validity, interval accuracy, execution quality, data capture, and governance. You can run it on a single asset class in an afternoon.
It's not a compliance checklist. It's a diagnostic. The questions are designed to surface the gaps that compliance metrics hide.
Download it, run it on your highest-criticality asset class, and see what comes up. If you can answer all 25 questions confidently, your PM programme is in good shape. If you can't, you've found your starting point.
The template is free. No consultant required to use it.
The bottom line
Compliance is a floor, not a ceiling. Hitting 95% PM completion means you're doing the work. It doesn't mean the work is right.
The question worth asking your team this week: when did we last validate that our PM tasks still make sense for the current failure modes and operating context?
If the honest answer is "we haven't," that's your starting point.
Shane Scriven is the Managing Director of SAS Asset Management. SAS-AM helps asset-intensive organisations measure what matters, model what's coming, and make the right call.
