In many organisations today, artificial intelligence is discussed with urgency. Leaders feel pressure to “do something with AI” to remain competitive, modern, and relevant. Vendors promise efficiency and superior decision-making. Dashboards look impressive. Pilot projects are launched with enthusiasm. Yet months later, many organisations quietly discover that very little has changed. Contemporary research points to a sobering reality: AI initiatives fail not because the technology is weak, but because organisations are not ready for what the technology demands. Culture, not code, is the decisive factor.
This insight is supported by recent research published in Administrative Sciences. In a 2024 study, Obrain Murire examined how artificial intelligence reshapes everyday work practices inside organisations. The study shows that AI does far more than automate tasks. It subtly alters how authority is exercised, how decisions are justified, and how employees define expertise. In organisations that encouraged learning, questioning, and experimentation, AI tools were gradually absorbed into decision-making routines. In more rigid, hierarchical, or fear-driven environments, the same tools were resisted, ignored, or used superficially.
These academic findings echo practitioner insights shared in the pages of Harvard Business Review. For example, in the article “Readiness Reimagined: How to Build a Change-Seeking Culture”, Jeff Pacheco argues that most transformation efforts fail not because employees resist change, but because leaders underestimate how much cultural reinforcement change requires. It shows that organisations succeed when leaders deliberately cultivate environments where adaptation is expected, experimentation is normal, and learning is rewarded.
A similar warning appears in research-informed commentary from MIT Sloan Management Review. In “Why AI Demands a New Breed of Leaders”, Hoque, Davenport and Nelson argue that AI fundamentally changes how organisations create value and therefore requires a different leadership mindset. They show that traditional command-and-control leadership struggles in AI-enabled environments, where insight can emerge from unexpected places and decision-making must blend human judgement with data-driven input. They also highlight real-life consequences, such as Zillow (the web-based real-estate marketplace) losing hundreds of millions of dollars and suffering a steep drop in its share price due to flawed AI adoption.
Taken together, the academic evidence and practitioner perspectives converge on a powerful conclusion: AI exposes existing cultural weaknesses rather than fixing them. In many organisations, employees quietly fear that AI will make their roles redundant. Managers worry that algorithmic recommendations will undermine their judgement. Departments resist sharing data because information has long been a source of power. Leaders announce ambitious AI goals without explaining how decisions will change or what success looks like. Under these conditions, AI tools may be installed, but they are rarely trusted, fully used, or acted upon.
A familiar pattern follows. An organisation introduces an AI-driven analytics tool intended to improve decision quality. The system produces insights that contradict long-held assumptions. Junior staff hesitate to raise questions because challenging data feels risky. Senior leaders hesitate to act because they cannot fully explain how the system arrived at its recommendations. Meetings become longer rather than shorter. Decisions slow down instead of accelerating. Eventually, the tool is dismissed as “too complex” or “not practical”.
The issue here is not technical sophistication; it is psychological safety. When employees feel safe to question outputs, test assumptions, and admit uncertainty, AI becomes a learning partner. When fear dominates, AI becomes an expensive ornament. Organisations that succeed with AI treat it as a prompt for dialogue rather than a final answer. Organisations looking to deploy AI more aggressively therefore need to evaluate their organisational and team cultures first. The right conclusion is not a flat “Our culture is not yet ready, so we should abandon AI.” It should be more nuanced: “These are the cultural gaps that exist, and these are the urgent steps we must take to change that culture so that we can capture the full value that AI and other new technologies offer.”
Cultural readiness for AI, therefore, requires a shift in leadership behaviour. Leaders must signal that learning matters more than perfection, that questioning data is acceptable, and that early mistakes are part of capability building. They must clarify how human judgement and algorithmic insight will work together, rather than allowing ambiguity to fuel resistance. Importantly, cultural readiness does not mean abandoning discipline or accountability. It means redefining them. Accountability shifts from defending decisions to explaining them. Discipline shifts from rigid compliance to thoughtful experimentation. Leaders remain responsible for outcomes, but they are no longer expected to have all the answers in advance.
The central lesson from contemporary research is simple but profound: AI readiness is not about infrastructure or software maturity. It is about mindset, trust, and leadership clarity. Organisations that rush into AI without addressing these foundations often end up disappointed. Those that invest first in cultural readiness unlock far greater value from the same technologies. For executives navigating the AI conversation, the most important question may not be “Which tool should we deploy?” but rather, “Is our culture ready to learn from what the tool will reveal?”
Omagbitse Barrow is a strategy and organisational effectiveness consultant and Chief Executive of the Abuja-based Learning Impact NG.