The AI Strategy No One Sees:
Culture, Leadership Nerves and Quiet Experiments
January 2026
Executive Brief
Many organisations talk about AI strategy as if it is primarily a technology decision. In practice, the harder challenge is organisational honesty: helping people understand how work will change and supporting them to adapt. This piece explores why the most important part of AI strategy is often the part leaders struggle to say out loud.
Ripple Insight
AI strategy becomes real the moment leaders are honest with their people about how work will change — and help them navigate that transition.
What’s really going on
I’m in a senior leadership meeting and I’ve put AI on the table. I start where most executives are comfortable: practical examples where AI is already lifting productivity by working alongside humans - summarising, searching and clustering information so that people can focus on judgment, conversations and decisions.
Then I shift to the harder part: what this actually feels like for our people, and why that matters commercially.
AI is arriving fast. In the rush, I’m seeing organisations forget basic human-centred principles and the simple truth that culture still determines whether transformation sticks. Developers I speak to worry about AI-assisted “vibe coding” and what it means for their careers. Senior content strategists and writers see their roles squeezed almost overnight. Contact centre staff and translators quietly tell me they assume the writing is on the wall. In some cases, you can hear people starting to disengage - which is exactly how organisations lose the capability they need to execute any serious AI agenda.
So I make the case that if we’re serious about AI, culture and talent can’t be treated as an HR side topic. Who we include, how we talk about changing roles, and whether we invest in real upskilling will determine whether AI delivers value - or simply accelerates attrition.
The response is predictable. I’m reminded that this is “just like when the internet arrived”; that it created more jobs than it destroyed; that people will adapt. When I push on culture, I hear slogans about “critical thinking” rather than a clear commitment to structured upskilling, role redesign and leadership accountability. I leave the room thinking that if we keep treating AI as a historical analogy instead of a change in how work actually gets done, we’ll burn the very people we need to make it work.
When it didn’t land
Two moments still stick with me when I think about how AI can quietly go wrong.
The first was a big-tech partnership discussion. An early-stage AI chatbot was being pushed hard while still effectively in beta. The experience was immature, governance hadn’t been fully worked through, and the commercial model relied on usage-based pricing that could scale costs quickly if adoption succeeded. We risked trading contact-centre costs for a different, less predictable line item - the opposite of what boards usually expect.
What worried me most was the sequencing. Core web and portal foundations still needed work, yet we were trying to layer a sophisticated AI experience on top. The desire to jump on the new thing was strong; the roadmap discipline to line it up behind the fundamentals was not. That was a miss.
The second was cultural. AI demands a level of deliberate, skills-based change that most organisations aren’t used to designing. Instead, I often see culture handled through reassurance and analogy: “this will all net out in the end.” Generic learning programs stand in for tailored support for roles already changing. The hard work - being explicit about how roles will evolve, where uncertainty remains, and what the organisation is willing to commit to - gets deferred. It’s not malicious, but it leaves a gap between what leaders say and what people on the ground need to hear.
Human first, AI alongside
Some roles will go and not come back. Pretending otherwise helps no one. The leadership opportunity is to deliberately refocus the talent you already have into roles where humans and AI together create more value than either could alone.
From a CX and digital perspective, the pattern is already clear. AI is strong at research synthesis, first-pass drafting and option generation. Teams can use it to cluster insights, generate alternatives and prototype faster, while humans bring judgment, taste and context.
Content roles illustrate this well. Treating generative AI as a wholesale replacement for content teams is a fast way to damage trust and brand. Elevating strong writers into content strategist roles - designing campaigns, knowledge bases and integrated web-and-chat experiences - and then using AI to extend and adapt their work respects the craft while lifting impact.
The same logic applies to design and engineering. AI can accelerate interface exploration, image manipulation and accessibility checks, but accountability for inclusive, on-brand design remains human. On the engineering side, AI-assisted coding and greater reuse make sense, as long as experienced engineers remain responsible for architecture, privacy, security and performance. AI can generate code; only humans can own the trade-offs.
Contact centres follow a similar pattern. Moving simpler requests into well-designed self-service is rational. But the value of human agents increases when they focus on complex, emotionally loaded or high-risk interactions. AI summarisation, translation and knowledge suggestions can support them without erasing their role.
All of this has implications for how teams are structured. Increasingly, effective organisations are organising around products and journeys rather than functions, with capability leads looking after craft and cross-functional teams owning outcomes. In those models, AI becomes an amplifier for teams built around value, not an add-on to yesterday’s org chart.
Psychological safety underpins all of this - not as a feel-good initiative, but as execution infrastructure.
Leaders need to be explicit about where roles will change, where risk exists, and where opportunity lies. That means mapping skills, being honest about uncertainty, and creating safe conditions for experimentation. If people don’t feel safe admitting what isn’t working, the risk to the business goes up, not down.
People and programs showing promise
In high-governance environments, I’ve seen focused pilots land well when they’re deliberately modest and well-sequenced. They combine internal teams, targeted external expertise and existing platforms, delivering tangible benefits while building trust for scale.
One example used AI as a thinking partner for compliance and governance rather than as a customer-facing feature. Key internal documents were combined with external standards - WCAG 2.2, Victorian accessibility policy, federal discrimination law and sector guidance - to rebuild obligations from first principles. AI helped cross-check coverage, surface gaps and make the material searchable in plain language. That allowed a concise, credible picture to be put in front of executives, followed by a roadmap aligned to existing work rather than a standalone program.
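For the technically inclined, here is a minimal sketch of the coverage cross-check at the heart of that pilot. Everything in it is illustrative: the obligation labels, the toy bag-of-words similarity and the threshold are stand-ins for whatever approved embedding model, search service and corpus your organisation actually uses.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Obligation:
    source: str  # e.g. "WCAG 2.2" or "sector guidance" (illustrative labels)
    text: str    # the obligation, restated in plain language

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real pilot would use an approved
    # embedding model or enterprise search service instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def coverage_gaps(obligations: list[Obligation],
                  internal_docs: list[str],
                  threshold: float = 0.3) -> list[Obligation]:
    # Flag obligations with no sufficiently similar internal document.
    # The output is a review list for humans, not a verdict.
    doc_vectors = [embed(d) for d in internal_docs]
    return [ob for ob in obligations
            if max((cosine(embed(ob.text), dv) for dv in doc_vectors),
                   default=0.0) < threshold]
```

The point isn’t the similarity maths; it’s the shape: external obligations on one side, internal evidence on the other, and a human-readable gap list in between for people to interpret.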
Another pilot augmented contact centre work instead of replacing it. Calls were transcribed, summarised and categorised automatically, but nothing flowed into the CRM without human review. Agents received tailored retraining, understood exactly where AI supported their work, and remained accountable for what was recorded. The result was better data and reduced wrap-up time without eroding trust. It also created a foundation for future capabilities, including multilingual support.
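Again, a minimal sketch for the technically curious. The function names and the CRM hook below are hypothetical placeholders, but the shape is the point: AI drafts, the agent reviews and approves, and nothing is written to the system of record without that sign-off.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CallRecord:
    transcript: str            # from your approved transcription service
    ai_summary: str            # AI-drafted, never final on its own
    ai_category: str
    agent_approved: bool = False
    agent_edits: Optional[str] = None  # the agent's corrected summary, if any

def finalise(record: CallRecord,
             crm_write: Callable[[str, str], None]) -> None:
    # The gate that made the pilot trustworthy: nothing reaches the CRM
    # without explicit human sign-off. crm_write stands in for whatever
    # CRM integration you actually use.
    if not record.agent_approved:
        raise PermissionError("AI output is pending human review")
    crm_write(record.agent_edits or record.ai_summary, record.ai_category)

# Illustrative usage: the approval flag is set only by the reviewing agent.
record = CallRecord(transcript="...",
                    ai_summary="Customer reported a billing error.",
                    ai_category="billing")
record.agent_approved = True
finalise(record, crm_write=lambda summary, cat: print(f"[{cat}] {summary}"))
```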
The quiet pattern underneath
1. Whose mental models dominate
AI strategy often starts as a customer and experience-led conversation, then quietly tilts into a technology roadmap once major platforms enter the room. The strongest work I’ve seen is where CX and technology genuinely co-lead: tech brings realism and sequencing; CX brings outcomes and lived experience. When either lens dominates for too long, quality suffers.
2. How talent is understood
Right now, confidence often outweighs scar tissue. Tech-savvy generalists can dominate AI conversations, while experienced engineers and architects - people who have delivered multiple waves of change - risk being treated as background infrastructure. The organisations that win will pair the energy of AI enthusiasts with the judgment of experienced practitioners and actively protect deep work, not just loud work.
3. What governance is for
Good governance isn’t about slowing things down; it’s about creating a runway. Clear risk tiers, decision rights and human-in-the-loop expectations allow teams to move faster with confidence. In uncertain territory, governance shifts the posture from “no, unless” to “yes, if” - which is exactly what AI work needs.
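To make that concrete, here’s one illustrative way risk tiers and decision rights can be written down so teams can self-serve the “yes, if”. The tier names, examples and approval rules are assumptions for the sketch, not a standard.

```python
# Illustrative risk tiers - adapt the names, examples and rules to your
# own governance framework; nothing here is prescriptive.
RISK_TIERS = {
    "low": {
        "example": "internal drafting and summarisation aids",
        "approval": "team lead",
        "human_in_loop": "periodic spot checks",
    },
    "medium": {
        "example": "agent-assist suggestions shown to staff",
        "approval": "product owner plus risk",
        "human_in_loop": "human review before anything is sent",
    },
    "high": {
        "example": "customer-facing automation",
        "approval": "executive sponsor plus legal",
        "human_in_loop": "mandatory sign-off on every output",
    },
}

def posture(tier: str) -> str:
    # Turns "no, unless" into "yes, if": the conditions are explicit up front.
    rules = RISK_TIERS[tier]
    return (f"Yes, if: approval from {rules['approval']}; "
            f"{rules['human_in_loop']}.")
```

Written down like this, governance reads as a runway rather than a gate: teams know in advance which tier they sit in and exactly what “yes” requires.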
A little ripple worth trying
If you want to cut through the AI noise without launching a program, try this. Pick your approved chat tool and use it as a shared radar for eight weeks. Block 30 minutes a week. Bring together a small cross-functional trio - for example, one person each from technology, CX or operations, and risk or legal.
Each week, their job is simple: produce one page titled What we saw, what we think, what we’ll test next. Use the tool to scan customer pain points, emerging AI use cases and regulatory constraints. Don’t build business cases. Curate patterns and identify one or two low-risk experiments that fit inside existing guardrails.
The value isn’t the prompts; it’s the cadence. Over time, you build a muscle for asking better questions and turning AI from abstract hype into a grounded, governance-friendly conversation about value. It’s deliberately small and slightly dull - and that’s the point. In 2026, what most boards want isn’t another AI initiative. It’s leaders who can keep AI on the agenda without turning the organisation into a science project.
*Executive note for leaders:*
If you prefer a distilled, board-ready version, I’ve prepared a one-page Executive Briefing Note that captures the core argument of this piece.
Why subscribe?
If this perspective resonates, subscribing ensures each new edition arrives directly in your inbox.
The Ripple Effect is written for leaders navigating digital transformation, AI, and organisational change in complex organisations.
One thoughtful insight at a time.
No hype. No trends lists. Just carefully observed leadership patterns.
- Stuart
(With occasional help from Springsteen, my Border Collie, who reminds me that clarity comes from movement 🐾)
Connect
LinkedIn – follow for leadership reflections ↗
Ripple Strategic – my advisory work ↗