When the world’s largest accounting body announces a return to in-person examinations in order to combat AI-enabled cheating, it may believe it is defending standards. In reality, it is signalling something far more revealing: institutional anxiety in the face of irreversible technological change.
The Association of Chartered Certified Accountants (ACCA) has cited the rapid escalation of AI-assisted misconduct (students photographing exam questions and outsourcing answers to generative models) as justification for abandoning remote testing. Similar retrenchments are underway across universities and professional services firms, many of which are quietly reverting to pen-and-paper assessments and physical interviews.
This response is understandable. It is also profoundly shortsighted.
History offers a cautionary parallel. When electronic calculators first entered classrooms in the 1970s, educators resisted fiercely. They argued that machines would erode numeracy, weaken discipline, and reward intellectual laziness. Calculators were banned from examinations. None of this preserved mathematical excellence. Instead, the eventual embrace of calculators liberated human cognition from mechanical repetition and enabled deeper conceptual reasoning.
Artificial intelligence represents the same moment, only magnified exponentially.
What is rarely acknowledged in today’s professional debates is that resistance to AI is not driven solely by concerns over ethics or integrity. It is also driven by fear: fear of economic displacement and professional dilution. AI does not merely accelerate tasks; it compresses time, lowers costs, and undermines long-standing fee structures that depend on labour-intensive processes.
Nowhere is this more evident than in accountancy. For decades, the preparation of annual financial statements justified months of human effort, large teams, and substantial billing. Today, AI systems can ingest entire ledgers, reconcile accounts, identify anomalies, and generate draft financial statements in a fraction of the time and at a fraction of the cost. The implication is uncomfortable but unavoidable: when intelligence scales and marginal cost collapses, professional scarcity disappears.
It is therefore reasonable to ask whether the retreat from AI-enabled examinations is less about safeguarding competence and more about preserving relevance. When professional bodies position AI primarily as a threat, they risk revealing an unspoken concern: that gatekeeping authority built for a pre-AI world is losing its potency.
While some professions retrench, others are advancing at extraordinary speed.
In biomedical science, AI has transformed drug discovery by solving problems that resisted decades of human effort. Protein-structure prediction, once one of biology’s most intractable challenges, is now performed at scale, accelerating the development of treatments for cancer, neurodegenerative diseases, and rare genetic disorders. Research cycles that once took years are now compressed into months.
In strategic reasoning, AI has redefined the boundaries of expertise. When an AI system defeated the world’s best players at the game of Go, it did not simply outperform humans; it introduced strategies that had never been conceived. Elite players now study machine-generated moves not as curiosities, but as sources of insight.
The same pattern is visible across development and public policy. In agriculture, AI-driven climate and soil analytics have enabled smallholder farmers to dramatically increase yields by optimising planting schedules and resource use, often via simple mobile interfaces. In logistics and humanitarian response, AI-powered optimisation systems now deliver vaccines, food aid, and emergency supplies faster and at lower cost, saving lives where inefficiency once proved fatal.
These are not speculative futures. They are present realities.
It was therefore unsurprising when The Economist, in its June 26, 2025 essay “Who Needs Accenture in the Age of AI?”, questioned whether traditional consulting models can survive unchanged. Though framed around a single firm, the argument extends across the professional services spectrum. If AI can analyse data, model risk, generate scenarios, and draft recommendations at scale, what remains distinctive about professions that sell process rather than judgment?
Accounting now stands squarely in this dilemma.
Professional bodies may still regulate examinations, credentials, and codes of conduct. But they do not regulate technological inevitability. The next generation entering the workforce will be AI natives, individuals for whom intelligent systems are not disruptive novelties but everyday tools. They will not respect professions that define integrity as technological abstinence. They will demand relevance, fluency, and ethical leadership in an AI-augmented environment.
This does not imply the erosion of standards. It implies their reinvention. The future accountant will not compete with machines on speed or calculation. Instead, value will migrate to areas where accountability cannot be automated: judgment, interpretation, governance, and ethical oversight.
Professions that recognise this shift early will evolve. Those that mistake control for competence will linger, temporarily protected by legacy structures, before fading into obscurity.
The choice is not whether AI will reshape accountancy. It already has. The choice is whether the profession will lead that transformation, or merely react to it, one examination rule at a time.
About the Author
Sonny Iroche is an Oxford-trained Artificial Intelligence scholar and Executive Chairman of GenAI Learning Concepts Ltd, a pan-African AI consulting and training firm. He is a member of Nigeria’s National Artificial Intelligence Strategy Committee and UNESCO’s Technical Working Group on AI Readiness Assessment. A former Senior Academic Fellow at the University of Oxford’s African Studies Centre, he advises boards and public institutions on AI governance, ethics, and workforce transformation.