
The rise of autonomous intelligence and the need for adaptive cybersecurity

BusinessDay

The world of artificial intelligence is on the brink of its most significant evolution yet. Beyond the reactive systems we’ve known, a new paradigm is emerging: agentic AI. These are not merely sophisticated tools; they are autonomous systems capable of perceiving, reasoning, planning, acting, and learning to achieve complex goals with minimal human intervention. As a cybersecurity professional specialising in Generative AI Governance, Risk, and Compliance (GenAI GRC), I see this as both an unprecedented opportunity and a critical new frontier for risk management.

To truly understand agentic AI, we need to differentiate it from its predecessors. Traditional AI functions on predefined instructions, reacting to direct inputs. Generative AI, while impressive at producing new content, still largely depends on specific prompts. Agentic AI, by contrast, is proactive: it takes initiative based on its understanding of the environment and its objectives, combining content creation with autonomy and goal-oriented behaviour. These systems operate through a continuous cycle, often called PRAPA: Perception (gathering data), Reasoning (interpreting information), Planning (developing strategies), Action (executing those plans), and, crucially, Adaptation (refining future behaviour through continuous learning). This dynamic feedback loop means agentic AI's behaviour, and therefore its risk profile, is constantly evolving, making static GRC frameworks insufficient.
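For readers who want to see the shape of that cycle concretely, here is a minimal, purely illustrative Python sketch of a PRAPA-style loop. The class, method names, and placeholder logic are assumptions made for illustration, not taken from any particular framework.

```python
# Illustrative sketch of a PRAPA-style agent loop (all names are hypothetical).
from dataclasses import dataclass, field


@dataclass
class PRAPAAgent:
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        """Perception: gather raw observations from the environment."""
        return {"goal": self.goal, "observations": environment}

    def reason(self, perception: dict) -> str:
        """Reasoning: interpret observations in light of the goal (placeholder logic)."""
        return f"assess how {perception['observations']} affects goal '{self.goal}'"

    def plan(self, assessment: str) -> list:
        """Planning: develop an ordered list of candidate actions."""
        return [f"step derived from: {assessment}"]

    def act(self, plan: list) -> list:
        """Action: execute each planned step and collect outcomes."""
        return [f"executed {step}" for step in plan]

    def adapt(self, outcomes: list) -> None:
        """Adaptation: feed outcomes back into memory so future cycles change."""
        self.memory.extend(outcomes)

    def run_cycle(self, environment: dict) -> None:
        perception = self.perceive(environment)
        assessment = self.reason(perception)
        plan = self.plan(assessment)
        outcomes = self.act(plan)
        self.adapt(outcomes)


agent = PRAPAAgent(goal="triage suspicious logins")
agent.run_cycle({"alert": "unusual login from new device"})
print(agent.memory)  # the agent's internal state evolves with every cycle
```

Each pass through run_cycle feeds its outcomes back into memory, which is precisely why the behaviour and risk profile of such a system drift over time rather than staying fixed.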

Agentic AI is set to revolutionise sectors from autonomous vehicles and financial services to healthcare and cybersecurity itself. Its capacity to operate at "machine speed", executing millions of operations continuously, amplifies the potential impact of any errors, biases, or malicious actions: a single mistake could cascade quickly through vital systems and cause extensive damage. The true power of agentic AI lies in its ability to coordinate various AI models and external tools. It uses "backend tool calling" to gather real-time information, optimise workflows, and automate tasks by interacting with APIs and databases. While Large Language Models (LLMs) often form the core, agentic AI provides the essential capability for LLMs to act. For complex problems, multi-agent systems (groups of specialised agents working together) are frequently employed. This extensive interconnectedness, while enabling powerful problem-solving, also widens the attack surface, creating a "supply chain" risk where security is only as strong as the weakest link.
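To make "backend tool calling" more tangible, the sketch below shows one common pattern: the model proposes a tool call as structured JSON, and the agent dispatches it only to a function it has explicitly registered. The tool names (get_threat_intel, update_firewall) and their logic are hypothetical stand-ins, not real APIs.

```python
# Illustrative sketch of backend tool calling: a model's requested action is
# mapped to a registered tool. All tool names and behaviour are hypothetical.
import json


def get_threat_intel(indicator: str) -> dict:
    """Stand-in for a real threat-intelligence API call."""
    return {"indicator": indicator, "verdict": "suspicious"}


def update_firewall(rule: str) -> dict:
    """Stand-in for a real firewall-management API call."""
    return {"rule": rule, "status": "applied"}


TOOLS = {"get_threat_intel": get_threat_intel, "update_firewall": update_firewall}


def dispatch(tool_call_json: str) -> dict:
    """Validate and execute a tool call requested by the model."""
    call = json.loads(tool_call_json)
    tool = TOOLS.get(call["name"])
    if tool is None:
        # Refuse anything the agent has not been explicitly granted.
        raise ValueError(f"unknown tool: {call['name']}")
    return tool(**call["arguments"])


# In practice the JSON below would come from the model's reasoning step.
print(dispatch('{"name": "get_threat_intel", "arguments": {"indicator": "198.51.100.7"}}'))
```

Confining the agent to an explicit registry such as TOOLS is one simple way to keep the expanding attack surface described above at least enumerable and auditable.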

Agentic AI presents a new set of risks that go beyond traditional cybersecurity threats, mainly because of these systems' inherent autonomy, continuous learning, and adaptability. They are non-deterministic: they learn, evolve, and act on dynamic inputs, which makes them fundamentally unpredictable. These risks can manifest as internal agentic risks, such as misconfigured or compromised agentic AI tools leading to intellectual property theft or privacy breaches, and as external agentic risks, where malicious actors leverage adaptive AI to continuously develop attacks, learn from each attempt, operate without human input, and potentially bypass traditional security measures entirely. Specific vulnerabilities include memory poisoning, tool misuse, cascading hallucinations, intent breaking, and misaligned behaviours.

Beyond the technical, agentic AI fundamentally disrupts traditional risk frameworks. When highly autonomous AI systems make independent decisions, the critical question of "who's on the hook" (the developer, the manufacturer, the user, or the AI itself) becomes paramount. This leads to "moral crumple zones", where humans are left bearing the blame for failures of complex AI systems they didn't truly control. This isn't just a technical glitch; it's a profound legal and ethical challenge that can erode public trust and create significant liability. Addressing it requires clear legal frameworks, robust transparency, and tiered regulatory approaches. Furthermore, the speed at which adversarial AI can launch highly targeted attacks that evolve in real time opens a significant gap with human-paced defence mechanisms: traditional patch cycles and response protocols are simply too slow. Closing that gap requires a fundamental shift towards automated, adaptive, and real-time GRC capabilities that can match the velocity of AI-driven threats, elevating GRC beyond mere compliance to genuine cyber resilience.

To navigate these complexities, organisations need a governance, risk, and compliance framework as dynamic and adaptive as the AI systems themselves. The “Adaptive GRC Shield” is a conceptual model designed to proactively manage the unique risks of agentic AI by moving beyond static controls to a continuous, living system. This model comprises five core components. First, proactive risk sensing involves continuous, real-time monitoring and predictive analytics to identify emerging risks before they materialise. Second, dynamic control mechanisms are needed to implement adaptive controls that can adjust in real time based on the AI’s behaviour and identified risks. Third, explainable decision pathways (XDP) ensure transparency and auditability of autonomous decisions, mandating immutable, cryptographically signed logs for every decision point and models that provide clear explanations. Fourth, continuous assurance and compliance shift from periodic checks to “always-on” assurance by automating control testing and providing real-time dashboards and alerts, ensuring adherence to evolving regulations like the EU AI Act. Fifth, human-centric oversight and collaboration position humans as essential partners and final arbiters, establishing AI ethics boards and developing human-AI collaboration frameworks where human judgement retains the final say.
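As a rough illustration of the third component, explainable decision pathways, the sketch below shows how an agent's decisions might be written to an append-only, hash-chained log with a signature over each entry. It is a simplified, assumption-laden example: a production system would use asymmetric signatures, managed keys, and tamper-evident storage rather than a single shared HMAC key held in code.

```python
# Simplified sketch of an append-only, signed decision log in the spirit of
# "explainable decision pathways". HMAC and hash chaining stand in for the
# stronger cryptographic signing a real deployment would use.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"replace-with-managed-key"  # hypothetical key, normally held in a KMS


class DecisionLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, agent_id: str, decision: str, rationale: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "agent_id": agent_id,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self.last_hash,  # links this entry to the one before it
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        # Sign the entry, then advance the chain so later entries depend on this one.
        entry["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        self.last_hash = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry


log = DecisionLog()
log.record("agent-7", "quarantine host 10.0.0.12", "matched ransomware beacon pattern")
print(json.dumps(log.entries[-1], indent=2))
```

Because each entry records the hash of its predecessor, altering or deleting an earlier decision breaks the chain on re-verification, which is the kind of immutable, auditable trail the model calls for.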

Adetunji Oludele Adebayo [Cybersecurity Professional, GenAI GRC Lead]
