Poll Results: Skills Top the List as the Biggest Barrier to AI in Asset Management

Skills topped our LinkedIn poll as the #1 barrier to AI adoption in asset management at 50%, ahead of data quality and leadership buy-in. Here's what the results mean, and three practical steps to close the gap.

We asked. You answered. And the results paint a picture that should shape how organisations approach AI adoption in 2026.

Last week we ran a simple poll on LinkedIn: "What's holding back AI in your organisation?" Four options, one clear winner.

Skills took out the top spot at 50%, with Data Quality and Leadership Buy-In tied at 25% each. Budget Constraints? Zero votes.

Let's unpack what that means and, more importantly, what to do about it.

Poll results chart: "What's Holding Back AI in Your Organisation?" (LinkedIn, February 2026)

The Skills Gap Is Real, But It's Not What You Think

When people talk about an AI skills gap, the assumption is usually about hiring. More data scientists, more machine learning engineers, more Python developers. And yes, technical talent matters.

But the more critical gap sits in the middle: asset managers, reliability engineers, and maintenance planners who understand their domain deeply but don't yet know how to frame problems for AI, evaluate model outputs, or integrate AI recommendations into existing decision-making processes.

This isn't about turning every asset manager into a coder. It's about building enough AI literacy that teams can collaborate effectively with technical specialists and, critically, know when an AI output doesn't pass the smell test.

What we've found is that the organisations making real progress aren't the ones with the biggest data science teams. They're the ones where domain experts and data specialists actually talk to each other with a shared vocabulary.

Budget Got Zero Votes. That's Significant.

The fact that budget constraints received no votes at all is worth pausing on. It suggests that for most organisations, the appetite to invest in AI is there. The money is available, or at least accessible. What's missing is the confidence that the investment will land well.

That confidence comes from capability. When teams don't have the skills to scope, implement, and sustain AI initiatives, every dollar spent carries more risk. So in a way, the skills gap and the budget question are connected. Organisations aren't budget-constrained; they're confidence-constrained.

Data Quality: The Slow Burn

At 25%, data quality is clearly still a concern, and rightly so. No amount of algorithmic sophistication compensates for inconsistent asset registers, missing maintenance histories, or condition data scattered across disconnected systems.

The honest answer is that data quality is rarely a problem you solve before starting AI work. It's a problem you solve through AI work. The process of preparing data for a specific use case, whether that's predicting pump failures or optimising inspection schedules, forces exactly the kind of data clean-up that generic "data quality programmes" struggle to achieve.

Pro tip: pick a bounded, well-defined use case and let the AI project itself drive data improvement. You'll make more progress in three months than a standalone data governance initiative makes in a year.
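
To make that concrete, here is a minimal sketch of what use-case-driven data profiling can look like. It assumes hypothetical CSV exports of an asset register and work order history (the file names and column names are illustrative, not from any particular CMMS), and simply measures the gaps a pump failure prediction use case would force you to close.

```python
import pandas as pd

# Illustrative exports; file and column names are hypothetical
assets = pd.read_csv("asset_register.csv")    # e.g. asset_id, asset_type, install_date
work_orders = pd.read_csv("work_orders.csv")  # e.g. asset_id, completed_date, failure_code

# Scope the use case: pumps only
pumps = assets[assets["asset_type"].str.lower() == "pump"]

# 1. Which pump records are missing the fields a failure model would need?
missing_install = pumps["install_date"].isna().mean()
print(f"Pumps missing install date: {missing_install:.0%}")

# 2. Which work orders can't be linked back to any asset at all?
orphaned = ~work_orders["asset_id"].isin(assets["asset_id"])
print(f"Work orders with no matching asset: {orphaned.mean():.0%}")

# 3. How many pump work orders have no failure code recorded?
pump_wos = work_orders[work_orders["asset_id"].isin(pumps["asset_id"])]
uncoded = pump_wos["failure_code"].isna().mean()
print(f"Pump work orders missing a failure code: {uncoded:.0%}")
```

Each of those three numbers is a concrete, scoped data-quality task with an obvious owner, which is exactly what a generic governance programme struggles to produce.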

Leadership Buy-In: The Enabler

Also at 25%, leadership buy-in remains a factor. In practice, this often shows up not as outright resistance but as passive hesitation: leaders who say the right things about innovation but don't allocate time, remove blockers, or accept that early AI efforts will be messy.

The good news is that leadership buy-in tends to follow results. A single well-executed pilot that delivers measurable value, even modest value, shifts the conversation faster than any strategy deck. The challenge is getting that first win without full leadership support, which circles back to having skilled people who can deliver with limited resources.

What To Do About It: Three Practical Steps

If skills are the primary barrier in your organisation, here's where to start.

First, invest in AI literacy across your asset management team. This doesn't mean sending everyone to a machine learning bootcamp. It means structured exposure to what AI can do in an asset management context: real examples, real limitations, real outcomes. A half-day workshop with your own data is worth more than a week of generic training.

Second, create cross-functional pairs. Match reliability engineers or maintenance planners with data analysts on a specific problem. The domain expert brings context and judgment. The analyst brings tools and technique. Neither is effective alone. Together, they produce results that are both technically sound and operationally relevant.

Third, learn by doing. Pick a real problem, not a sandbox exercise, and work through it end to end. Whether that's building a simple failure prediction model or automating condition report analysis, applied learning builds capability far faster than theoretical training.
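
For a sense of what that first end-to-end exercise might look like, here is a minimal sketch of a failure prediction baseline using scikit-learn. The feature table, file name, and column names are hypothetical; the point is the shape of the exercise, not the algorithm.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical training table: one row per asset, with a 0/1 label for
# whether it failed within the following 12 months
df = pd.read_csv("pump_features.csv")
features = ["age_years", "run_hours", "wo_count_last_year", "last_vibration_reading"]
X, y = df[features], df["failed_within_12_months"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# AUC answers the practical question: does the model rank at-risk
# pumps ahead of healthy ones better than a coin flip?
probs = model.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, probs):.2f}")
```

Working through even this small loop together forces the questions that matter: what counts as a failure, which features the team actually trusts, and how far in advance a warning needs to arrive to be useful.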

What's Coming Next

The skills question clearly struck a nerve, so we're dedicating focused content to it over the coming weeks. Expect a practical framework for building AI capability in asset management teams, covering everything from role-specific literacy pathways to structuring your first AI pilot for maximum learning.

In the meantime, if you're grappling with any of these barriers, our AI Readiness Assessment is a good place to start. It maps where your organisation sits across data, skills, governance, and technology, and identifies the specific gaps worth tackling first.
