Miracle Agholor is a UK-based technology professional, researcher, and emerging thought leader at the intersection of AI, agentic systems, and data sovereignty. He has authored a book on AI and agentic intelligence, alongside research papers and articles exploring the societal impact of intelligent systems. Miracle combines research expertise with practical execution, having worked in UK public institutions and founded Vision Labs in Nigeria and VisionMinds LTD in the UK. His work focuses on building scalable technologies that drive social and economic value, particularly for Africa and other underrepresented regions, while promoting ethical and responsible AI adoption. In this interview with CHISOM MICHAEL, he discusses AI’s impact on society, the need for human accountability in autonomous systems, the importance of data sovereignty, and the responsibility of technologists to ensure technology benefits people fairly.
What first raised your concern about who controls intelligent systems and how that control is exercised?
I began thinking about this issue while watching AI systems evolve from supporting decisions to influencing outcomes in ways that people are not even aware of. While working as the Engineering Product Manager for 4T5NG, a new startup emerging from Lite-Intel, I witnessed firsthand how product development teams struggled with the responsible use of AI. Global estimates indicate that AI could add trillions of dollars to the world economy over the coming decade, but these gains are not evenly distributed.
Where should the boundary sit between human judgement and autonomous decision-making?
The boundary sits where consequences become impossible to reverse. Systems can optimise and analyse freely, but when their decisions affect opportunity, security, or rights, human accountability is necessary. In most OECD nations, AI already mediates access to employment and services; at 4T5NG, human accountability in our product deployments was essential.
When institutions deploy agentic systems, what failure tends to appear before anyone notices it?
The first failure is normally a loss of context: the metrics show normal function while the system drifts away from what is actually required. In my experience with public institutions in the UK and startups in Africa, invisible drift precedes visible damage. International assessments of digital public services often report successful uptake even as trust is being undermined.
How has working across UK public institutions and African innovation spaces changed your view of authority in digital systems?
I discovered that authority is established by relevance and impact, rather than by technology per se. In the UK, systems build trust through institutional legitimacy. In African innovation ecosystems, trust is established by results. Africa has the youngest population in the world, yet harnesses only a fraction of its digital economic value. Working across these two ecosystems has taught me that authority in AI is about alignment with what people actually want.
In practical terms, how does data sovereignty affect an individual rather than a government?
Data sovereignty determines who ultimately captures the value of the data people create. Patients, consumers, and citizens generate data worth billions each year, yet they rarely enjoy that value directly. Without sovereignty, they also have no recourse when the data held about them is inaccurate.
What does a society give up when it relies on technologies designed without its context in mind?
A society risks losing self-determination and trust in its institutions when systems are imported or mismatched to its context. Such systems can worsen existing inequalities while forcing society to adapt to them, rather than adapting the systems to society. Data from the World Bank and the AfDB show that when digital systems ignore local realities, adoption rates can rise even as the benefit to society declines.
Why do discussions about AI often exclude the people most affected by its outcomes?
Because decision-makers emphasise efficiency and scalability, whereas affected individuals experience the implications firsthand: job loss, service denial, or unequal distribution. Employment and labour statistics from the UK, the US, and African economies suggest that low-visibility jobs bear the brunt of automation, yet the concerns and needs of the people in them are rarely considered.
How do you decide whether a problem deserves to be solved with technology at all?
Whether a problem deserves to be solved with technology at all shouldn’t be overemphasised; technology has brought positive transformation to many institutions, both government and private.
The better question is: do my solutions address root problems, or just speed up symptoms? At 4T5NG, my team analysed whether adding AI improved outcomes or merely optimised problematic processes. Administrative overhead now accounts for a large share of healthcare expenditure, and AI can ease that burden and improve quality only if it is done in a way that is accountable.
What did leading youth-driven technology initiatives teach you about influence without formal power?
While serving as the NYSC Youth President for Ogun State, I initiated IT awareness and digital sensitisation campaigns. I came to realise that power and influence come through shared purpose and trust, rather than through position or authority. Building teams and achieving shared goals taught me that leadership is about harnessing collective energy.
As intelligent systems shape access to work, capital, and services, what obligation do technologists carry?
Technologists must think about consequences and build in accountability. Intelligent systems increasingly mediate opportunity, and design decisions create winners and losers. In my own practice, bridging innovation ecosystems in the UK and Africa, I see how important it is for technologists to take responsibility for using AI for the public good, reducing inequity, and aligning with societal needs. Technologists have a duty to forecast future consequences, not just to optimise for performance. With systems increasingly influencing access to labour, capital, and services, designers face ethical choices, and ignoring them is not a neutral act.


