Artificial intelligence has shifted from a speculative future technology to a force shaping everyday life. It fuels productivity tools, powers predictive systems, and enables new business models across sectors from finance to healthcare. Yet as the technology accelerates, so do questions about its societal impact. One of the most pressing debates in tech policy today is not whether AI should be regulated, but how it should be regulated, and, crucially, whether regulation should target AI itself or the way it is used.
Nigeria is not standing still in the face of rapid AI adoption. Lawmakers and regulators have introduced proposals to create a dedicated legal framework for AI governance, including a bill that would establish a National Artificial Intelligence Council, require registration and licensing for AI developers and deployers, and impose risk-based ethical standards on AI systems deployed in the economy. The proposed legislation, expected to be finalised by early 2026, focuses on transparency, fairness, accountability and risk assessments for high-impact applications, while granting regulators authority to demand information, issue enforcement directives and suspend unsafe systems, making it one of the first comprehensive AI regulatory efforts on the African continent.
This approach recognises an important reality: artificial intelligence is not a single product or industry, but a general-purpose technology. Like electricity or the internet before it, AI’s basic mechanisms are neutral; the risks and rewards stem from how it is applied. Attempts to regulate the technology itself, treating AI as an inherently harmful entity, risk stifling innovation without meaningfully protecting the public.
Instead, the focus on regulating use allows policymakers to concentrate resources where they matter most: in contexts where AI’s impact touches safety, privacy, economic fairness or democratic processes.
Transparency is not merely a bureaucratic box-tick. When AI systems affect eligibility for services, pricing decisions, or legal outcomes, individuals and businesses should be able to understand, interrogate and, where necessary, challenge those decisions. Regulatory guidance emphasises that explainability enhances trust and enables rights to contest outcomes — a critical feature as AI penetrates sectors where the stakes are high.
At the same time, there are genuine risks tied directly to the behaviour of AI systems. Security vulnerabilities, biases entrenched in training data, and the potential misuse of generative models for harmful purposes are not merely theoretical threats. These risks have prompted debate over whether developers should disclose the data sets and intellectual property used in training, allow independent audits, and adopt explicit consent frameworks for users. Accordingly, parts of the AI Bill propose measures that would require businesses developing or using AI to provide independent auditing access and clear user consent models, steps that target the deployment of AI rather than the underlying algorithms themselves.
Yet even here, a nuanced perspective is needed. Regulation that is too rigid or broad can create barriers to entry for smaller firms, disadvantage innovators and ultimately entrench the power of large incumbents. The UK government’s AI Opportunities Action Plan reflects this balance, acknowledging the importance of safety and assurance while explicitly prioritising innovation and growth. It commits to supporting safe AI infrastructure and training, without imposing heavy compliance burdens that could suppress emerging firms.
The alternative, unfettered development, is equally problematic. Without clear standards, generative models can be used for malicious purposes with increasing sophistication. Phishing campaigns, automated fraud, and deepfake impersonations are already evolving beyond the capacity of most small businesses and consumers to mitigate. In contexts where enterprises have limited cybersecurity capacity, the absence of regulatory guardrails amplifies risk.
For emerging markets, the stakes take on additional dimensions. Informal sectors, like the millions of SMEs operating across Africa, are already highly exposed to operational risks due to weak documentation, limited payment verification processes, and minimal regulatory oversight. As AI becomes embedded in business tools from automated invoicing to customer engagement, deploying these systems without appropriate governance structures could inadvertently widen inequality or expose vulnerable enterprises to new threats. This is not reason to oppose AI, but to champion regulatory approaches that enable secure, inclusive adoption.
Regulation also has a normative function. It signals to markets what standards are expected, reducing uncertainty for investors, consumers, and innovators alike. When firms know that transparency, fairness and accountability are not optional, they internalise responsible practices that can drive long-term trust, a vital currency in digital ecosystems. Conversely, absence of regulation can erode trust, leaving users exposed and slowing overall adoption.
The real policy pivot, then, is not between “regulate AI” and “let AI flourish.” It is between indiscriminate regulation that treats all AI as a hazard and targeted governance that manages risk where it matters while safeguarding innovation. A focus on use, with rules calibrated to context, impact and risk level, enables this.
Emerging policy frameworks like those seen in the UK illustrate this balance. They avoid overly broad restrictions on technology itself while embedding principles that protect citizens and businesses from harm. They promote explainability and accountability without smothering creative and commercial application. This risk-based approach acknowledges that AI’s benefits in healthcare, commerce, logistics and beyond will continue to grow only if people trust the systems that underpin them.
Artificial intelligence is not going away. Its evolution will only accelerate. The challenge for policymakers and for technology leaders is not to put the genie back in the bottle, but to ensure that its power is harnessed safely and fairly.
Regulation, in this sense, does not mean containment. It means responsible governance of use, a strategy that protects society’s interests while allowing innovation to thrive.