When Oluwaseun Tolulope Adesanya joined Qualtrics to build enterprise AI products, she brought a profile rarely found in product management: formal legal training and years of professional legal practice in Nigeria, combined with advanced expertise in building large-scale intelligent systems. She initially worked on AI infrastructure for employee experience systems that generate recommendations affecting thousands of employees across global organisations.
Most recently, she was selected to join a newly formed three-person product management team pioneering agentic AI for Experience Agents, working at the frontier of autonomous systems that can take independent actions without human intervention. Her selection for this nascent team, which is tackling challenges with no established best practices, reflects the value of her unique combination of legal training and AI product expertise as the field moves from analytical tools to truly autonomous agents.
Adesanya operates at the forefront of enterprise AI innovation, where technical performance must be matched by accountability, governance, and real-world impact. Her legal background enables her to anticipate the regulatory risks, ethical failure points, and organisational consequences inherent in deploying AI systems at scale. In a rapidly evolving field where intelligent systems increasingly influence human decision-making, she is regarded as part of a small cohort of leaders shaping how enterprise AI is responsibly designed, deployed, and scaled.
Q: You started as a lawyer. What made you leave legal practice for technology?
Oluwaseun: I loved the analytical rigour of law, but I became frustrated with its limitations. Legal practice addresses individual disputes without driving systemic change. I kept thinking, what if, instead of interpreting regulations after problems occur, we could build systems that prevent those problems from arising? That question led me to business school at USC, where I studied entrepreneurship and innovation with the explicit goal of learning how to build solutions rather than just interpret rules.
During my master’s program, I founded two companies. Nomadine focused on travel management, and Workfficient addressed performance management and workplace bias. Both ventures taught me how to validate business assumptions through customer discovery and iterate based on feedback. More importantly, both revealed that I was genuinely interested in building technology products, not just running businesses.
Q: How does legal training apply to building AI products?
Oluwaseun: It’s more relevant than people realise, especially for the work I do now. I develop testing infrastructure for agentic AI systems—autonomous AI that makes independent decisions without human approval. A single failure could result in these systems making incorrect decisions, affecting thousands of employees across multiple organisations, potentially triggering multimillion-dollar losses.
Building safeguards for that requires exactly what lawyers do: anticipating how systems might fail before failures occur, designing protocols for edge cases that seem unlikely, and assuming people will use systems in ways you didn’t intend. Lawyers spend careers thinking about risk systematically. That’s precisely what you need when building testing frameworks for autonomous AI.
Legal training also teaches you to mediate between competing interests. As a product manager, I’m constantly balancing engineering constraints against customer needs, business objectives against technical feasibility, speed against quality. That’s identical to what lawyers do when finding solutions that satisfy multiple stakeholders with conflicting priorities.
Q: What does your current work involve?
Oluwaseun: I work on Testing and Launching for agentic AI initiatives. My responsibilities centre on three areas: developing pre-deployment testing frameworks that validate autonomous AI behaviour before it reaches clients, working on agent versioning systems that allow us to iterate safely with rollback capabilities, and contributing to knowledge base systems that ensure autonomous agents access accurate information when making decisions.
The challenge is that we’re building infrastructure for technology that doesn’t have established best practices yet. Agentic AI is fundamentally different from traditional machine learning because these systems take independent actions. The testing frameworks need to catch potential failures for scenarios we may not have imagined yet.
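To make the agent-versioning idea concrete, here is a minimal, hypothetical sketch of a version registry that only promotes a candidate agent configuration after pre-deployment checks pass, and can roll back to the previous known-good version. All names (`AgentRegistry`, `requires_audit_log`, the `audit_log` config key) are illustrative assumptions, not Qualtrics internals.

```python
from dataclasses import dataclass


@dataclass
class AgentVersion:
    """One deployable agent configuration (illustrative fields only)."""
    version: str
    config: dict


class AgentRegistry:
    """Tracks deployed agent versions; supports gated deploys and rollback."""

    def __init__(self):
        self._history: list[AgentVersion] = []

    @property
    def active(self) -> AgentVersion:
        if not self._history:
            raise RuntimeError("no deployed version")
        return self._history[-1]

    def deploy(self, candidate: AgentVersion, checks) -> bool:
        """Run every pre-deployment check; promote only if all pass."""
        if all(check(candidate) for check in checks):
            self._history.append(candidate)
            return True
        return False

    def rollback(self) -> AgentVersion:
        """Revert to the previous known-good version."""
        if len(self._history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._history.pop()
        return self.active


# Example check: reject any agent allowed to act without an audit log.
def requires_audit_log(v: AgentVersion) -> bool:
    return bool(v.config.get("audit_log", False))


registry = AgentRegistry()
registry.deploy(AgentVersion("1.0", {"audit_log": True}), [requires_audit_log])
ok = registry.deploy(AgentVersion("1.1", {"audit_log": False}), [requires_audit_log])
print(registry.active.version, ok)  # → 1.0 False
```

The point of the sketch is the gate-then-promote pattern: a failed check leaves the previously validated version active, so an unsafe candidate never becomes the decision-making agent.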
Q: You also teach entrepreneurship while working full-time in product management. How does that inform your work?
Oluwaseun: I write about business concepts through StartaSprout and serve as a Subject Matter Expert for entrepreneurship curriculum development. Teaching forces me to clarify my thinking. When you explain complex concepts to beginners, you have to break down assumptions and identify core principles.
That discipline translates directly into product work. I constantly communicate technical decisions to non-technical stakeholders and ensure cross-functional teams understand why certain infrastructure choices matter. The principles I teach about validating ideas through customer discovery also apply to building new AI capabilities. You’re testing assumptions and iterating based on real user needs, not just building technically impressive features.
Q: What does AI product management actually require that differs from traditional product roles?
Oluwaseun: As AI systems become more autonomous and face regulatory scrutiny, technical fluency alone isn’t sufficient. You need people who understand regulatory frameworks, can assess systematic risk, and think carefully about how technology intersects with human behaviour in institutional settings.
These skills often come from backgrounds like law, policy work, and organisational development, not necessarily computer science. CS programs teach algorithmic thinking and implementation, but they don’t typically address the regulatory and risk management challenges that determine whether enterprise AI actually succeeds at scale.
The gap will widen as AI advances. Companies building systems that make independent decisions affecting thousands of people need product managers who think like lawyers, understand compliance as regulators do, and approach risk like auditors. Technical implementation matters, but you also need frameworks to ensure systems function safely and responsibly.
Q: Are hiring practices catching up to this reality?
Oluwaseun: Not yet. Job descriptions still emphasise technical credentials that signal traditional paths: machine learning coursework, software engineering experience. Companies that continue prioritising those backgrounds over demonstrated ability to navigate regulatory complexity and assess systematic risk will struggle to build AI products that meet enterprise requirements.
I’m not saying technical skills don’t matter. But for certain problems, especially around governance, testing, and risk management for autonomous systems, you need people who’ve spent years thinking about how rules function in practice and how to design systems robust enough to handle unexpected scenarios. That’s what legal training provides.
Q: What advice would you give someone from a non-traditional background considering product management?
Oluwaseun: Your background isn’t a deficit to overcome. It might be exactly what the role needs. The key is understanding which product problems your specific skills solve better than traditional paths would. For me, that’s building infrastructure for high-stakes AI systems where systematic risk assessment and regulatory thinking matter as much as technical implementation.
You’ll also need to learn the technical foundations. I’m not coding these systems, but I need to understand how they work well enough to make informed product decisions. That’s learnable. What’s harder to teach is the judgment that comes from years in fields like law, where you develop frameworks for thinking about risk, compliance, and how systems interact with human behaviour.
The future of AI product management will require diverse backgrounds because the challenges are multidisciplinary. We need people who can bridge technical implementation, regulatory requirements, organisational dynamics, and ethical considerations. That’s rarely the expertise of one person, but it increasingly determines whether powerful technology can be deployed responsibly at scale.


