Schneider Electric is urging Nigerian data centre operators, cloud providers and policymakers to overhaul existing infrastructure as artificial intelligence accelerates demand for high-density, energy-intensive computing.
The company says the rapid adoption of generative AI across banking, telecoms, healthcare and manufacturing is exposing limitations in legacy facilities that were never designed for GPU-heavy AI training or large-scale inferencing.
AI is driving “one of the most significant shifts the global IT industry has ever seen,” Schneider Electric said in new guidance released this week. While global debate often centres on the energy footprint of AI model training, the company argues that the long-term pressure on Nigeria will come from inferencing, the real-time execution of AI models used in fraud detection, diagnostics and retail analytics.
“Training and inferencing are fundamentally different workloads, and each places different demands on infrastructure,” the company said. AI training, which relies on clusters of high-performance GPU servers, can push rack densities beyond 100 kilowatts — levels that require liquid cooling, high-capacity electrical backbones and advanced thermal management systems.
Inferencing, once considered less intensive, is quickly catching up as models grow more complex. Schneider Electric says some Nigerian enterprises are already deploying workloads that demand 40–80 kW per rack, and that demand will escalate as banks, hospitals and logistics providers embed AI deeper into operations.
By 2030, the company projects that half of Nigeria’s new data centre builds will need to support 40–80 kW per rack, while a quarter will exceed 100 kW to power large-scale training clusters. The shift, it says, will force the industry to rethink cooling, electrical architectures, network design and workload orchestration.
Many Nigerian firms begin their AI journey in public cloud services, which offer scalability and access to global GPU infrastructure. But sustained inferencing at scale requires local capacity, especially in regulated sectors, Schneider Electric noted. Banks and healthcare providers increasingly require low-latency, locally controlled environments, driving demand for high-density colocation and on-premises deployments. The company warns that a rack operating at 20 kW today “may need to double capacity within two years,” making modular systems essential.
Growth in edge computing is also reshaping the landscape as smart retail, telecommunications and mobility applications move AI closer to users. These compact sites face constraints on space, cooling and power, intensifying the need for rugged, efficient and high-performance designs capable of operating in challenging environments.
Schneider Electric is also pushing for operators to prioritise automation and software as density rises. The company says digital systems — including data center infrastructure management (DCIM), electrical power monitoring (EPMS), and building management software — are becoming essential to manage real-time performance, prevent failures and optimise energy use.
“Software is no longer a background tool for data centers in Nigeria,” said Ajibola Akindele, Schneider Electric’s country president for West Africa. “It is the intelligence that allows operators to anticipate changes in demand, optimise energy use, and ensure resilient performance even in the face of power constraints.”
Akindele said Nigeria’s AI future will be shaped by three trends: more power-dense multimodal models, the rise of low-latency on-site inferencing, and rapid expansion of AI-as-a-Service offerings. He urged operators to build for density rather than sheer scale, arguing that flexibility, modularity and efficiency will be crucial as the country enters a new phase of intelligent computing.
“By integrating intelligence across power, cooling and monitoring systems, operators will be better positioned to support today’s AI workloads and the more complex applications to come,” he said.