Artificial Intelligence (AI) is now shaping how people work, learn, shop, and even make health and financial decisions.
Yet as the world rushes to embrace AI, one big question remains: who benefits, and who gets left behind?
In my recent research, published in October 2025 by IEEE, my team and I explored how AI systems can unintentionally disadvantage regions like Africa due to the poor representation of Global South data and contexts.
I also spoke about this challenge on BBC Radio, highlighting how fairness in AI is not just a technical issue but a social one that affects opportunity, equality, and trust.
What “AI fairness” really means
AI fairness simply means that an AI system should treat people and groups fairly, without bias or discrimination. If an AI tool is trained mostly on data from Europe or North America, it might not understand or serve users in Africa correctly.
For example, facial recognition systems have been shown to misidentify darker skin tones more often than lighter ones. Voice assistants may struggle with African accents.
Even food recognition models can fail to correctly identify Nigerian dishes, a challenge we studied in our research.
These might seem like small errors, but when AI is used in hiring, healthcare, or banking, unfairness can have serious consequences.
The representation gap
Most AI systems are built using data from the Global North, where research funding and infrastructure are concentrated. This means that billions of people from the Global South – Africa, Latin America, and South Asia – are under-represented in the datasets that power AI models.
When our data, culture, and realities are missing, the technology does not truly “see” us. It makes decisions based on someone else’s world, not ours.
This lack of inclusion can quietly widen inequality, even when the creators of the technology don’t intend it.
Why this matters for Africa
As Nigeria and other African countries expand the use of AI in education, healthcare, and governance, we must ensure these systems work for our people, not against them.
Without fairness and proper representation, we risk building systems that ignore local languages, misinterpret local data, and fail to capture the diversity of African life.
This can affect everything from loan approvals to job applications.
How we can fix this
To make AI fair and inclusive, we need to:
Build local datasets – so our realities are reflected in global AI systems.
Support African researchers and engineers – who understand local contexts and challenges.
Encourage collaboration – between governments, universities, and startups to set fairness standards.
Raise awareness – so that everyday people understand how AI impacts them and can demand accountability.
Our research focused on creating inclusive datasets and testing how bias appears in computer vision models. By adding more African data and measuring fairness across cultures, we can build AI systems that are more reliable and globally representative.
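To make "measuring fairness across cultures" concrete, here is a minimal sketch of one common approach: computing a model's accuracy separately for each group and reporting the gap between the best- and worst-served groups. The dish names, group labels, and numbers below are purely illustrative assumptions, not data from the study.

```python
def group_accuracy(predictions, labels, groups):
    """Compute accuracy separately for each group label."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def accuracy_gap(per_group):
    """Largest difference in accuracy between any two groups."""
    values = list(per_group.values())
    return max(values) - min(values)

# Toy evaluation: a food-recognition model tested on dishes
# from two regions (hypothetical labels "NG" and "EU").
preds  = ["jollof", "pizza", "pizza", "jollof", "pizza", "egusi"]
labels = ["jollof", "pizza", "pizza", "egusi",  "pizza", "egusi"]
groups = ["NG",     "EU",    "EU",    "NG",     "EU",    "NG"]

per_group = group_accuracy(preds, labels, groups)
print(per_group)             # NG: 2/3 correct, EU: 3/3 correct
print(accuracy_gap(per_group))
```

A large gap signals that the model serves one group markedly worse than another, which is exactly the kind of disparity that grows when one region's data dominates the training set.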
A shared responsibility
Fair AI is not just a technical goal; it's an ethical and economic one. When systems are fair, people trust them. When they're trusted, they scale better across regions.
The Global South has a unique opportunity: to lead by example in building AI systems that are diverse, transparent, and locally relevant, rather than simply importing technologies designed elsewhere.
Conclusion
The rise of AI gives us a choice: to copy the inequalities of the past or to design a fairer digital future.
If we want AI to work for everyone, then everyone’s data, voice, and perspective must count.
The future of AI fairness depends on it, and the Global South must be at the table, not on the menu.