In the wake of global conversations about the impact of artificial intelligence on education, new research from Nigerian AI expert Jamiu Idowu is raising fresh questions about how and when algorithms should predict student outcomes.
Idowu, lead author of a study published by Elsevier in Computers and Education: Artificial Intelligence, written with co-authors Adriano Koshiyama and Philip Treleaven, is calling attention to a fundamental but often overlooked question: “When should we even start making predictions about students in the first place?”
This simple question led the team to propose the Optimal Time Index (OTI) – a novel scoring mechanism that helps determine the best moment in a school term or academic cycle to use AI for student outcome prediction.
The study analyzed fully anonymized data covering more than 30,000 students at the Open University, UK, testing AI models that used institutional data, assessment records, and virtual learning environment (VLE) logs to predict whether students were likely to pass or were at risk of failing or withdrawing. What stood out wasn’t just the accuracy of the models; it was how early or late these predictions were made.
“If you predict too early, you might not have enough data to be accurate. If you predict too late, you might miss your window to actually intervene and help a struggling student. The Optimal Time Index is our way of balancing that tension,” said Idowu, who holds a Master’s with Distinction in Artificial Intelligence from University College London and previously led the HP IDEA program in Nigeria.
The OTI is a practical indicator that combines three key elements:
● Timeliness (how early in the course a prediction is made),
● Performance (accuracy of the model), and
● Opportunity cost (ensuring that interventions and resources are allocated efficiently).
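To make the idea concrete, the trade-off the OTI captures can be sketched as a simple score that rewards both early predictions and accurate ones. This is a minimal illustration, not the formula from the paper: the weights, the linear combination, and the toy accuracy numbers below are all assumptions, chosen only to show why a mid-course day can beat both very early and very late predictions.

```python
# Illustrative sketch of an OTI-style score (NOT the paper's actual formula).
# Timeliness here also stands in for opportunity cost: the later a prediction
# arrives, the less time remains to intervene and allocate support resources.

def oti_score(day, course_length, accuracy,
              w_timeliness=0.5, w_performance=0.5):
    """Combine timeliness and model accuracy into a single [0, 1] score."""
    timeliness = 1 - day / course_length  # 1.0 at day 0, 0.0 at course end
    return w_timeliness * timeliness + w_performance * accuracy

# Hypothetical accuracies that improve as more student data accumulates:
candidate_days = {30: 0.62, 60: 0.81, 120: 0.85, 200: 0.88}

best_day = max(candidate_days,
               key=lambda d: oti_score(d, 255, candidate_days[d]))
print(best_day)  # with these toy numbers, day 60 wins
```

With these illustrative numbers, day 30 loses because the model is still too inaccurate, while days 120 and 200 lose because the small accuracy gains no longer justify the shrinking intervention window, mirroring the tension Idowu describes.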
Idowu and his team found that, surprisingly, Day 60 of a typical 255-day course hit the sweet spot. “That’s the point where we saw a big jump in model performance, but also left enough time in the term to actually act on the insight,” he said.
Asked whether algorithmic fairness compromises model performance, Idowu was clear: “We showed they don’t have to. You can pursue both fairness and accuracy of the algorithms without compromising one for the other.”
For full access to the research, see: https://doi.org/10.1016/j.caeai.2024.100267