Across boardrooms and government offices, the excitement around artificial intelligence and predictive analytics is palpable.
Algorithms now promise to tell us which customers are likely to churn, which loan applicants are risky, or which maintenance issues deserve priority. Despite the billions spent on analytics tools, many organisations quietly admit that performance barely improves after the rollout. Why? Because the real challenge isn’t the algorithm itself; it’s how humans use (or ignore) its advice.
A new study published in the Strategic Management Journal by Hyunjin Kim, Edward L. Glaeser, Andrew Hillis, Scott Duke Kominers, and Michael Luca, titled ‘Decision Authority and the Returns to Algorithms’, explores this problem in depth.
The researchers examined how much of the promised value of data-driven systems disappears when frontline professionals retain the authority to overrule them. Their evidence offers a powerful reminder that the gains from technology depend as much on governance and culture as on code.
The study analysed a large-scale pilot in a city inspections department that used two predictive models – one simple and one sophisticated – to rank restaurants by their likely risk of violations. On paper, both algorithms were far superior to the old manual system.
They predicted risky locations more accurately and should have helped inspectors catch more serious problems. But when the rankings were put into practice, the city saw almost no improvement in actual outcomes. The reason? Inspectors frequently ignored the rankings, choosing instead to visit familiar or convenient sites or to balance workloads geographically.
Those human overrides effectively erased the model’s predictive advantage.
This pattern of excellent predictions but weak results occurs far beyond public-sector inspections. Banks, hospitals, logistics companies, and retailers all experience it.
Employees and managers trust their experience more than the system, or they work toward a different goal altogether. A credit officer might bypass a model’s rejection of a borderline applicant to hit a monthly loan-volume target. A maintenance team might skip a flagged site to save travel time. These choices may be understandable, but collectively they dilute the value of analytics investments.
The lesson for leaders is clear: deploying an algorithm is not just a software project; it’s a change in decision rights. Managers must clarify when staff are expected to follow the model and when professional judgement may override it. A simple policy such as ‘follow the model unless one of these documented exceptions applies’ can make a huge difference. Equally important is transparency—every override should be recorded with a brief reason. This allows leaders to audit patterns, identify legitimate local insights, and detect biases or habits that undermine performance.
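To make that concrete, here is a minimal sketch, in Python, of what such an override log could look like; the exception codes and names (ALLOWED_EXCEPTIONS, OverrideRecord, log_override) are illustrative assumptions, not anything prescribed by the study.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical set of documented exceptions under which staff may override the model.
ALLOWED_EXCEPTIONS = {
    "site_inaccessible",
    "duplicate_recent_visit",
    "local_knowledge_of_imminent_risk",
}

@dataclass
class OverrideRecord:
    """One logged override: who deviated from the model, on which case, and why."""
    case_id: str
    staff_id: str
    model_recommendation: str
    actual_decision: str
    exception_code: str
    note: str = ""
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_override(record: OverrideRecord, log: list) -> None:
    """Record the override; flag it for audit if it does not cite a documented exception."""
    if record.exception_code not in ALLOWED_EXCEPTIONS:
        record.note = ("UNDOCUMENTED EXCEPTION: " + record.note).strip()
    log.append(record)
```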
Aligning incentives also matters. If employees are rewarded for speed or convenience rather than accuracy or risk reduction, they will naturally game the system. Performance metrics should match the outcomes the algorithm was designed to improve. For instance, an inspection unit measured on the number of visits completed per week may resist a model that recommends far-apart sites; one measured on actual risk reduction will embrace it.
The study also emphasises the value of feedback loops. When staff can see whether their overrides outperformed or underperformed the algorithm, both human and machine learning improve. Over time, this builds trust in the system and sharpens professional intuition. Conversely, when overrides are never reviewed, the same costly patterns repeat indefinitely.
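A minimal sketch of that kind of review, assuming each logged decision carries a flag for whether the model was followed and a score for the observed outcome (both field names are hypothetical), might look like this:

```python
def review_overrides(records: list) -> dict:
    """Compare average outcomes when the model was followed versus overridden.

    Each record is assumed to be a dict with:
      - 'followed_model': True if the recommendation was followed
      - 'outcome_score': a number measuring the result (e.g. serious violations found)
    """
    followed = [r["outcome_score"] for r in records if r["followed_model"]]
    overridden = [r["outcome_score"] for r in records if not r["followed_model"]]

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return {
        "n_followed": len(followed),
        "n_overridden": len(overridden),
        "avg_outcome_followed": mean(followed),
        "avg_outcome_overridden": mean(overridden),
    }
```

Reviewed regularly, a summary like this shows whether overrides are adding genuine local insight or quietly eroding the model's advantage.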
An everyday observation from ride-share experiences helps illustrate this dynamic. In Nigeria, drivers often confer with their passengers before deviating from the app’s suggested route, using local knowledge of traffic or road conditions to find quicker paths. These thoughtful overrides, guided by experience and consent, can lead to better trips. In contrast, drivers in countries such as the United States typically follow the app’s directions exactly, trusting the system for fairness and consistency. Both approaches make sense in their contexts, but they reveal how culture, trust, and institutional expectations shape the balance between human discretion and algorithmic authority. Organisations face a similar choice every day: when to trust the model and when to trust the expert.
For Nigerian companies investing heavily in digital transformation, this insight is particularly timely. Many banks, telecoms, and energy firms now use predictive analytics for credit scoring, fraud detection, and maintenance scheduling. But the technology will only pay off if decision-making processes evolve too. Without clear governance, human overrides and conflicting incentives can cancel out years of data science work. The returns to algorithms come not from the math itself, but from disciplined execution.
Looking ahead, the most promising frontier is explainable AI. When algorithms can clearly show why they made a recommendation—by revealing the factors that drove a score or ranking—users are more likely to trust and follow it. Combining this with structured override logs could finally bridge the gap between prediction accuracy and operational performance.
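As a rough illustration (not the method used in the study), here is a sketch of the simplest form of such an explanation, using a linear risk score in which each factor’s contribution is just its weight multiplied by its value; the weights and factor names are invented for the example.

```python
# Hypothetical weights for a simple linear risk score; a real system would learn these from data.
WEIGHTS = {
    "past_violations": 2.0,
    "days_since_last_inspection": 0.05,
    "prior_complaints": 1.5,
}

def explain_score(features: dict) -> list:
    """Return each factor's contribution to the score, largest first."""
    contributions = [
        (name, WEIGHTS[name] * value)
        for name, value in features.items()
        if name in WEIGHTS
    ]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)

# Example: 3 past violations, 200 days since the last inspection, 2 prior complaints.
print(explain_score({"past_violations": 3, "days_since_last_inspection": 200, "prior_complaints": 2}))
# -> [('days_since_last_inspection', 10.0), ('past_violations', 6.0), ('prior_complaints', 3.0)]
```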
The bottom line is that better predictions do not automatically lead to better decisions. Human judgement remains indispensable, but it must be integrated systematically, not haphazardly. Organisations that define when to follow the model, require transparency for exceptions, and align incentives with true objectives will finally realise the value their algorithms promise. In the age of AI, leadership is still about accountability, not automation.
Omagbitse Barrow is the Chief Executive of Learning Impact, an Abuja-based strategy and management consulting firm.


