In November 2022, OpenAI’s public release of ChatGPT ignited what is arguably the fastest diffusion of a general-purpose technology in modern history. Almost overnight, generative artificial intelligence (AI) moved from research labs into classrooms, offices, studios, boardrooms, and public debate. Predictably, it also triggered anxiety. Sections of the academic community and segments of the workforce began to denounce AI’s use in studies and at work, often disregarding its extraordinary gains in scale, speed, and accessibility.
This is not new.
In fact, history has seen this movie before, most notably in the fierce resistance to calculators in schools from the 1960s through the 1980s. As someone who has wholeheartedly embraced AI (so much so that at GenAI Learning Concepts Ltd we unapologetically use the mantra “AI or Die”), I find the parallels instructive, even sobering. They tell us less about technology and more about how societies react to disruption.
The calculator wars: A forgotten panic
From the mid-1960s through the 1970s, and peaking dramatically in 1986, teachers across parts of the United States and Europe protested the introduction of calculators into classrooms. Outside the National Council of Teachers of Mathematics (NCTM) annual meeting that year, demonstrators marched with placards bearing slogans such as “UNITE, OBJECT, REJECT”.
Their fears were stark and sincere. Calculators, they argued, would:
• Destroy mental arithmetic
• Weaken critical thinking
• Make students intellectually lazy
• Produce a generation unable to “think independently”
Some educators went further, organising what were called “calculator dumps”, theatrically discarding calculators to symbolise resistance. The underlying belief was simple: if students did not manually master arithmetic, learning itself would collapse.
Fast-forward to today, and replace calculators with large language models such as ChatGPT, Copilot, Gemini, or Claude. The rhetoric is eerily familiar.
AI in academia: The new moral panic
Today’s critics warn that AI will:
• Destroy original thinking
• Encourage plagiarism
• Replace human judgement
• Erode academic integrity
Students using AI tools are accused of “cheating”, scholars of “outsourcing cognition”, and institutions of “lowering standards”. Some universities have rushed to ban AI tools outright, often before understanding them.
But history teaches us something crucial: technologies that automate lower-level tasks do not eliminate thinking; they elevate it. Calculators did not end mathematics. They freed learners from repetitive computation and allowed deeper focus on modelling, problem-solving, and abstraction. Engineers did not become worse; they became better. Science accelerated. Finance scaled. Space travel became possible.
AI is following the same trajectory, only at a much grander scale.
From arithmetic to intelligence amplification
Artificial intelligence, first formally named at the Dartmouth Conference in the summer of 1956, has long promised to augment human capability. What generative AI has done since 2022 is to democratise that augmentation.
Consider the breakthroughs already achieved:
• AlphaFold, which solved the 50-year-old protein-folding problem and is accelerating drug discovery and biomedical research
• AlphaGo, which defeated the world’s best human players by inventing strategies never seen before
• Robotics and computer vision, enabling safer manufacturing, precision agriculture, and medical surgery
• Natural language systems, lowering barriers to education, research, and creativity across languages and cultures
These are not toys. They are civilisational tools.
Just as calculators extended numerical ability, AI extends cognitive reach. It allows a student in Umuahia to access world-class tutoring, a researcher in Lagos to analyse datasets once reserved for supercomputers, and a small business to operate with the sophistication of a multinational.
Opposing this outright is not prudence; it is historical amnesia.
Ethics, responsibility, and the limits of naïveté
That said, blind techno-optimism is as dangerous as fear-driven rejection. AI, like every powerful technology before it, carries risks. Generative AI can be misused to produce deepfakes, automate fraud, amplify disinformation, and power malicious cyberattacks. These are real threats, not hypothetical ones.
Deepfake videos can undermine trust in elections. Synthetic voices can facilitate financial scams. AI-assisted malware can scale attacks beyond traditional defences. In cybersecurity, attackers increasingly use AI for reconnaissance, social engineering, and adaptive intrusion.
But here is the critical point: the answer to malicious AI is not less AI; it is better AI, governed responsibly.
Guarding against the dark side
Institutions, whether academic, corporate, or governmental, must respond with maturity, not panic. Practical safeguards include:
1. AI governance frameworks
Clear policies defining acceptable use, accountability, transparency, and human oversight.
2. AI-enhanced cybersecurity
Using AI defensively, to detect anomalies, predict threats, and respond in real time, just as criminals attempt to use it offensively.
3. Digital literacy and ethics education
Teaching how to use AI responsibly, not pretending it does not exist. Students should learn prompt literacy, verification skills, and ethical reasoning.
4. Assessment redesign
Moving away from rote memorisation toward applied reasoning, project-based evaluation, and oral defence, much as mathematics education evolved post-calculator.
5. Regulatory collaboration
Aligning institutions with emerging AI standards, data protection laws, and sector-specific regulations.
These are governance challenges, not reasons for rejection.
“AI or Die”: A provocation, not a threat
When I say “AI or Die”, it is not a slogan of intimidation; it is a statement of economic and intellectual reality. Nations, institutions, and individuals that refuse to adapt will simply be outpaced. The choice is not whether AI will shape the future; it is already doing so. The choice is whether we shape AI ethically or allow others to do so without us.
The calculator protesters did not stop using calculators. They delayed adaptation and, in some cases, disadvantaged their students. Today, no serious educator would argue that banning calculators was the right decision. Instead, we learnt how to integrate them wisely.
AI deserves the same treatment, only faster, because the stakes are higher.
In summary: Choosing the right side of history
Every transformative technology triggers fear before acceptance: the printing press, electricity, computers, the internet, calculators, and now AI. Each time, society ultimately recognises that progress does not eliminate human intelligence; it redefines where human value lies.
Artificial intelligence will not replace thinking. It will replace unthinking. It will not destroy learning. It will destroy obsolete methods of assessment. It will not eliminate ethics. It will force us to take ethics more seriously than ever before.
History is offering us a familiar lesson. We can march with placards shouting “UNITE, OBJECT, REJECT”, or we can do what educators eventually did with calculators: ADAPT, GOVERN and ADVANCE.
I know which side I am on.
Sonny Iroche is the CEO of GenAI Learning Concepts Ltd in Nigeria, specialising in artificial intelligence research and application. Iroche is a postgraduate alumnus of the University of Oxford’s Artificial Intelligence for Business programme. He was a Senior Academic Fellow at the African Studies Center of the University of Oxford for the 2022-23 academic year. Additionally, he is a member of both the Technical Working Group for UNESCO’s AI Readiness Assessment Methodology and the Nigeria National AI Strategy Committee.