Artificial Intelligence (AI) has pushed information technology to a new level and, in doing so, changed the landscape of cybersecurity. Corporate organisations are increasingly integrating AI into their infrastructure to improve operational efficiency, make informed data-driven decisions, and enhance customer experience. Governments must likewise recognise the implications of AI for national security. This article explores why AI must be treated as a core strategic concern and how policymakers in developing regions can act decisively.
AI is not only transforming how we live and work in today’s interconnected world but also reshaping the foundations of national security. From border surveillance and cyber defence to misinformation campaigns and autonomous warfare, the stakes have never been higher. Unfortunately, AI policy remains underdeveloped and fragmented in many developing nations, and there is an urgent need to treat AI as a national security imperative that demands top-level policy attention.
The dual-use nature of AI
AI’s potential to strengthen security across critical sectors cannot be overstated. For instance, AI-driven predictive analytics can anticipate insurgent activity or cyberattacks before they materialise. Properly deployed, AI-powered surveillance systems can analyse video feeds in real time and alert designated security personnel as suspicious behaviour or anomalies occur. Furthermore, AI tools can monitor online platforms to detect coordinated disinformation campaigns before they gain traction.
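To make the predictive-analytics idea concrete, the short sketch below shows one common pattern behind such systems: an unsupervised anomaly detector trained on routine activity that flags outliers for human review. It is a minimal illustration in Python using scikit-learn’s IsolationForest; the event features (bytes sent, failed logins, hour of day), the simulated data, and the tuning values are assumptions for demonstration only, not a description of any deployed national system.

    # Minimal anomaly-detection sketch (illustrative assumptions throughout).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical features per network event: bytes sent, failed logins, hour of day.
    normal_events = rng.normal(loc=[500.0, 1.0, 12.0],
                               scale=[100.0, 1.0, 4.0],
                               size=(1000, 3))

    # Train on routine activity only; the contamination rate is an assumed tuning value.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

    new_events = np.array([
        [520.0, 0.0, 13.0],   # looks routine
        [9000.0, 40.0, 3.0],  # large transfer, many failed logins, 3 a.m.
    ])
    for event, label in zip(new_events, model.predict(new_events)):
        if label == -1:  # IsolationForest returns -1 for anomalies, 1 for inliers
            print("Flag for security review:", event)

In practice, a detector like this is only one component of a larger pipeline: alert triage, false-positive management, and human oversight determine whether it actually improves security rather than overwhelming analysts.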
However, these same technologies can be exploited by adversaries. AI tools can be weaponised in various ways, such as deepfakes that impersonate individuals: a nation’s leaders or top military officials could be mimicked convincingly, causing confusion or panic. Autonomous drone technology designed for surveillance or combat could likewise be hijacked by malicious actors. Much of this misuse becomes possible where there are no clear policies, governance structures, or ethical oversight for AI technologies, leaving a nation reactive rather than proactive in the face of these threats.
Building institutional capacity
In most developing nations, digital infrastructure is evolving faster than the regulatory frameworks that govern it. Where national AI strategies do exist, they tend to focus on economic opportunity, education, or innovation; these priorities matter, but they must be complemented by a robust national security perspective.
Building institutional resilience requires deliberate investments in human capital, infrastructure, and inter-agency coordination. Security agencies must be trained to understand AI’s capabilities and limitations. Policymakers must develop a shared language across defence, digital, legal, and civil society sectors to evaluate the implications of AI-driven security systems. Furthermore, government data must be protected with robust cybersecurity protocols to prevent manipulation or compromise.
Recommendations for developing nations
To address the emerging frontier of AI and national security, the following actions are critical:
Governments should articulate a comprehensive framework that integrates AI into national security planning, covering defence, intelligence, cybersecurity, and infrastructure protection.
Additionally, there is a pressing need for investment in local AI talent and capabilities, because absolute dependence on foreign-made AI solutions can introduce backdoors, surveillance risks, and national vulnerabilities. Building local capacity ensures that systems can be tailored to national priorities and cultural contexts while safeguarding sovereignty.
Policy action must also include legal frameworks that define acceptable uses of AI in national security. These frameworks must clarify salient issues such as the boundaries of AI surveillance, compliance with international human rights standards, and who is held accountable when AI systems malfunction or are used unethically.
Furthermore, the role of international cooperation cannot be overlooked, as cybersecurity threats and AI misuse transcend borders. Regional and continental alliances should therefore place AI security on their agendas, enabling member states to share intelligence, harmonise standards, and conduct joint training to build collective resilience.
Finally, proactive legislation, ethical safeguards, technical investment, and institutional readiness will determine whether developing nations are prepared for the AI age, and policymakers, lawmakers, civil society, and the private sector all have roles to play.
Conclusion
The time to act is now. AI is not just a tool of innovation but a vector of power, influence, and risk. For developing nations like Nigeria, integrating AI into national security policy is no longer optional; it is essential.
About the author
Nathaniel Akande is a renowned information analyst with over 8 years of experience in threat intelligence, incident response, vulnerability management, quality assurance, governance, risk, and compliance (GRC). He holds an MBA and an M.Sc. in Cybersecurity and is a Certified ISO 27001 Lead Implementer. Adept at implementing data governance, identity and access management, and aligning operations with standards like GDPR, ISO 27001, and NIST, Nathaniel has led enterprise projects involving data governance and AI risk management.