The AI Strategy No One Can Measure (Yet)
Accountability, Signals and the Confidence Gap.
March, 2026
Executive Brief
Many organisations say they are investing in AI strategy, yet struggle to explain how success will actually be measured. The challenge isn’t simply immature metrics. AI is changing decision-making, capability, and organisational behaviour in ways traditional strategy frameworks struggle to capture. This piece explores why many executives feel progress is real, yet difficult to describe using conventional measurement language.
Ripple Insight
AI strategy often feels successful before organisations have a language to measure what is actually changing.
What’s really going on
Over the past year, I’ve observed a familiar moment play out across different organisations and leadership conversations. Someone shares progress on AI pilots, adoption, capability building – and the discussion feels positive right up until one question lands:
“So… how do we know it’s working?”
The energy shifts. Leaders sense there’s value emerging, yet struggle to describe it in a way that feels defensible. The conversation often drifts toward what to measure next quarter, rather than what the organisation has actually learned so far.
This isn’t because executives don’t believe AI matters. In many of the organisations I’m seeing, the opposite is true. Budgets exist. Tools are in use. Expectations are rising. The discomfort comes from a quieter tension: many organisations don’t yet have a language that fits the kind of change AI is creating.
Traditional measurement frameworks now feel blunt to many of the executives I’m speaking with. Benefits show up unevenly – faster decisions here, fewer escalations there, subtle risk reduction somewhere else. In one recent discussion, the CFO of a large, complex organisation put it plainly: many of AI’s benefits are tangible in practice but hard to isolate in reporting. Better forecasts, faster decisions, stronger customer engagement – undeniably valuable, but difficult to quantify cleanly.
Accountability is spread across digital, IT, product, data, risk and operations. Everyone is involved, but ownership remains fuzzy. In board discussions, directors often admit they can’t confidently say where AI is already operating across the organisation, let alone how its impact is being assessed. Industry surveys now point to the same pattern: lots of AI activity, much less measurable impact.
What I hear privately from CFOs, CIOs and board members isn’t resistance. It’s unease. A sense that something important is happening – but that we’re struggling to explain it clearly and confidently enough to sustain momentum.
This is where AI strategy can quietly lose its footing. Not because the technology fails, but because the organisation can’t convincingly articulate what success looks like, who owns it, or how confident it truly feels in the outcomes.
How we’ll know it’s working
When organisations try to resolve this tension, they tend to fall into one of two camps.
The first is to double down on classic ROI logic: cost savings, headcount reduction, margin uplift. Where AI directly automates work, this can be effective. But many AI initiatives don’t replace labour so much as reshape it. They reduce friction, compress cycles, surface insight earlier, or help people make better calls under pressure. Those effects matter commercially, but they don’t always land cleanly in a benefits realisation model.
The second camp moves in the opposite direction. Measurement becomes softer, more narrative-driven. Teams talk about being “more productive” or “feeling faster,” but without shared signals, confidence erodes. Executives sense progress but struggle to defend it when budgets tighten or scrutiny increases.
The organisations doing more interesting work are taking a different approach. Rather than forcing AI into existing metrics, they’re paying attention to behavioural signals.
How long does it take for insight to turn into action?
How often are decisions reversed?
Where do people trust AI-assisted recommendations?
And where do they instinctively override them?
These questions show up in very practical moments: fewer rework loops, earlier escalation of edge cases, more consistent decisions across teams.
In other words, AI value often appears first as a shift in how the organisation behaves – long before it crystallises into neat financial outcomes.
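For teams that want to make signals like these concrete, they can often be derived from an ordinary decision log. Below is a minimal sketch – the record fields and thresholds are hypothetical illustrations, not a standard; any real log will differ by organisation:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    # Hypothetical fields for illustration; real logs will vary.
    ai_recommended: bool   # an AI-assisted recommendation existed
    followed: bool         # the team acted on that recommendation
    reversed: bool         # the decision was later overturned

def behavioural_signals(log: list[DecisionRecord]) -> dict:
    """Summarise override and reversal rates from a decision log."""
    assisted = [r for r in log if r.ai_recommended]
    overridden = [r for r in assisted if not r.followed]
    reversed_later = [r for r in log if r.reversed]
    return {
        # How often people instinctively override AI recommendations.
        "override_rate": len(overridden) / len(assisted) if assisted else 0.0,
        # How often decisions are reversed after the fact.
        "reversal_rate": len(reversed_later) / len(log) if log else 0.0,
    }
```

Tracked quarter over quarter, movement in these two rates says more about trust and decision quality than adoption counts do.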
Some emerging measurement frameworks are starting to reflect this. Instead of focusing solely on cost take-out, they look at decision cycle time – how quickly intelligence flows from signal to choice and how much human friction sits in between. It’s not a perfect metric, but it captures something traditional dashboards miss: whether AI is actually changing how decisions are made, not just how work is executed.
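As a rough illustration of how decision cycle time might be computed in practice – the event structure and timestamps here are hypothetical, not drawn from any particular framework:

```python
from datetime import datetime
from statistics import median

def decision_cycle_times(events: list[dict]) -> list[float]:
    """Hours from a signal surfacing to a decision being taken.

    Each event pairs a 'signal_at' timestamp (when the insight appeared)
    with a 'decided_at' timestamp (when someone acted on it).
    """
    return [
        (e["decided_at"] - e["signal_at"]).total_seconds() / 3600
        for e in events
    ]

# A falling median over successive quarters suggests intelligence is
# flowing from signal to choice faster; a rising one suggests friction.
```

It is a blunt instrument on its own, but watching its trend alongside override and reversal rates gives leaders a behavioural view that static dashboards miss.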
This doesn’t mean abandoning rigour. It means recognising that, for a period, rigour looks different. The risk many organisations face right now isn’t AI failure so much as measurement lag – applying yesterday’s scorecards to tomorrow’s work.
When it didn’t land
By now, pilot fatigue is a familiar term. Most leaders have lived through it: impressive proofs of concept, polished demos, and enthusiastic early adopters, followed by a quiet stall. It’s not just anecdotal; multiple studies now suggest that most AI pilots struggle to scale into production or deliver meaningful business value, even when the underlying technology works.
What’s often missed is why this keeps happening.
In many cases, the pilot didn’t fail technically. It failed narratively. No one ever defined what “working” meant beyond viability. Teams celebrated the prototype, but couldn’t explain what decision, cost, risk or experience had actually changed as a result.
Sometimes pilots never move into production because success was framed too vaguely to justify further investment. In other cases, the organisation simply moves on to the next experiment, shelving the previous one without extracting the learning. Over time, this creates a strange paradox: lots of AI activity paired with growing executive scepticism.
There’s also a human layer to these misses that doesn’t always get surfaced. People are already using AI extensively, even when official programs lag behind. When sanctioned initiatives feel slow, clumsy or overly constrained, work simply routes around them.
In one large insurance organisation, a sanctioned GenAI pilot struggled to prove impact in production, while teams quietly relied on personal AI tools to speed up claims processing. Leadership could sense the productivity lift but had no safe way to acknowledge, measure or scale it.
The failure here isn’t experimentation. It’s the absence of a shared language for confidence, ownership and learning.
The quiet pattern underneath
What sits beneath many of these stories is a deeper shift in how AI is actually entering organisations.
This is often described as Shadow AI, but it’s not just a replay of Shadow IT. Unlike SaaS sprawl, AI systems learn, adapt and persist. Outputs influence future inputs. Decisions leave traces. The productivity gains are real – but so are the governance, quality and accountability risks.
Recent enterprise surveys suggest this unsanctioned use is now the norm rather than the exception.
The problem isn’t simply that AI is appearing outside formal programs. It’s that organisations rarely have a way to acknowledge or measure what’s happening once it does.
Unofficial tools improve productivity, but the learning stays local. Leadership senses the gains but can’t quantify them. Risk teams worry about exposure but struggle to see the full picture. Meanwhile, dashboards continue reporting pilot progress that may no longer reflect where the real work is happening.
Over time, this creates a peculiar form of organisational theatre: activity looks high, adoption looks healthy, but leaders still can’t answer the simple question of what AI is actually changing.
It’s metric theatre – dashboards that look busy but don’t help leaders decide what to do next.
How we’ll know it’s working (for real)
The most grounded leaders I know are starting to frame AI progress less as a destination and more as a signal-reading exercise.
They look for evidence that decisions are being made earlier, with fewer reversals.
They pay attention to whether teams can explain why an AI-assisted recommendation was followed – or ignored.
They notice whether confidence in outcomes is rising, or quietly eroding when something goes wrong.
They also accept that some benefits will remain indirect for longer than finance teams are comfortable with, but they insist on clear narrative discipline.
If you can’t explain, in plain language, how AI is changing the way work gets done and why that matters, the value probably isn’t real yet.
One small move I’ve seen work is deceptively simple: change the question in executive reviews. Instead of asking, “What did this AI initiative deliver?” leaders ask, “What decisions did this change – and how do we know those decisions were better?”
That shift forces teams to surface learning, confidence levels and unintended consequences, not just outputs. It also creates space to talk honestly about uncertainty without framing it as failure.
In practice, this often shows up through lightweight governance rituals rather than grand programs. In one approach, a small cross-functional group – AI lead, risk or compliance, product, security and a business sponsor – meets briefly during a pilot to review live behaviour, not slideware. The focus isn’t approval; it’s shared understanding.
Over time, habits like this build a common language for value that goes beyond adoption counts and static dashboards.
Right now, many organisations are moving quickly, with a lingering sense that they’ll only fully understand what they’ve built after the fact. That’s not negligence – it’s the nature of the moment.
The leaders who will do well aren’t waiting for perfect metrics to appear. They’re naming the discomfort early, staying honest about what they can and can’t yet measure, and building confidence through disciplined movement rather than false certainty.
AI value rarely arrives with a single headline result. It accumulates through better decisions, fewer surprises, and a growing ability to explain, with credibility, why the organisation is doing what it’s doing.
That’s the strategy most organisations are still searching for.
*Executive note:*
For leaders who prefer a distilled, board-ready version, I’ve prepared a one-page Executive Briefing Note that captures the core argument of this piece.
Why subscribe?
If this perspective resonates, subscribing ensures each new edition arrives directly in your inbox.
The Ripple Effect is written for leaders navigating digital transformation, AI, and organisational change in complex organisations.
One thoughtful insight at a time.
No hype. No trends lists. Just carefully observed leadership patterns.
- Stuart
(With occasional help from Springsteen, my Border Collie, who reminds me that clarity comes from movement 🐾)
Connect
LinkedIn – follow for leadership reflections ↗
Ripple Strategic – my advisory work ↗