In recent years, Nigeria has witnessed a remarkable rise in digital innovation. From Lagos to Abuja, startups are reimagining finance, healthcare, education, and agriculture using the power of artificial intelligence. Multinational companies are integrating AI into their African operations. Governments are experimenting with data-driven public service delivery. Across the continent, digital identity systems, smart infrastructure projects, and algorithmic decision-making are steadily becoming part of daily life.
This is progress worth celebrating. Yet, beneath the surface of rapid adoption lies a critical question: What are we doing to ensure that these technologies, particularly AI, are used ethically, fairly, and securely? Without robust systems to govern the data that powers AI and to hold its outputs accountable, Nigeria and Africa at large risk automating inequality, weakening trust in institutions, and undermining hard-won digital rights.
A recent global report by PwC, “Ensuring Effective AI Utilisation: The Critical Role of Data Privacy, Data Governance and AI Governance,” offers both a warning and a roadmap. While the report draws primarily from Middle Eastern insights, its relevance to Nigeria and the wider African continent is profound. It articulates clearly that the transformative potential of AI cannot be divorced from the foundational need for strong governance, not just of the data that fuels AI models, but of the ethical and legal frameworks that must guide their deployment.
Nigeria, Africa’s most populous nation and largest economy, stands at the centre of this conversation. If we are to lead Africa’s digital future, we must also lead in the responsible governance of AI.
The Governance Imperative in an African Context
Across the world, AI is revolutionising decision-making at unprecedented speed. But in countries with fragile legal infrastructures and limited regulatory enforcement, including many in Africa, the risks associated with AI adoption can quickly outpace its benefits.
Nigeria’s digital economy is booming. The country accounts for over 30% of Africa’s tech startups and has attracted billions of dollars in venture capital in recent years. Lagos has emerged as a hub for AI-driven fintech, healthtech, agritech, and more. Startups use AI to assess loan eligibility, diagnose disease, optimise logistics, and personalise education. Government agencies are beginning to explore predictive analytics to manage urban development, public health crises, and service delivery.
Yet, as AI systems become more embedded in core infrastructure and services, the absence of robust governance is increasingly evident. For instance, many data-rich applications are built on weak or non-existent consent mechanisms. Personal data is often repurposed without users’ knowledge. The line between innovation and intrusion is becoming dangerously blurred. Worse still, many AI systems operating in Nigeria today rely on foreign datasets or Western-trained models that do not reflect our social, cultural, or economic realities, making them ill-suited for local contexts and prone to unfair outcomes.
This is not a theoretical risk. In other parts of the world, flawed algorithms have already denied individuals fair access to jobs, credit, healthcare, and justice. Africa cannot afford to inherit these mistakes. In a society where institutional trust is fragile and digital literacy is still evolving, any erosion of confidence in AI technologies could have lasting social consequences.
The Data Privacy Gap
At the heart of the AI conversation is data. Nigeria currently lacks a comprehensive, enforceable data protection law. While the Nigerian Data Protection Regulation (NDPR) issued in 2019 laid an important foundation, it remains limited in scope and enforcement. The Nigeria Data Protection Bureau (NDPB), created to oversee data protection efforts, has shown commendable intent, but it needs full legal backing, adequate funding, and political support to function effectively.
As AI systems increasingly rely on personal and biometric data, from voice recordings and medical histories to digital identities and financial transactions, Nigerians must be protected by clear, enforceable rights over their data. This includes the right to know how their data is being used, the right to refuse its use for automated decision-making, and the right to demand accountability when things go wrong.
PwC’s report outlines how failures in data privacy, such as lack of consent, purpose creep, or weak anonymisation, can quickly become ethical and legal liabilities. In Nigeria, where digital surveillance is on the rise and public sector databases are often porous, these concerns are not speculative. They are real, immediate, and growing.
To move forward, Nigeria must enact its pending Data Protection Bill, not just in name, but in function. The law must empower the NDPB to investigate breaches, penalise violations, and promote best practices. Most importantly, it must send a clear message: innovation is welcome, but not at the expense of people’s rights.
Fixing the Foundation: Data Governance for Reliable AI
AI is only as good as the data it learns from. If the data is incomplete, inaccurate, biased, or mismanaged, the outputs, however sophisticated, will be flawed. In a country like Nigeria, where data infrastructure is patchy and often fragmented, this poses a serious challenge.
Too often, AI projects are built on datasets that are outdated, poorly maintained, or simply irrelevant to the realities of the populations they seek to serve. Public and private organisations alike struggle with defining data ownership, managing metadata, and ensuring data quality across siloed systems. Without national standards for data architecture, definitions, and stewardship, we cannot expect AI systems to perform reliably or ethically.
Moreover, the informal nature of many sectors in Nigeria, from agriculture to transportation, means that data collection is inherently difficult. If AI systems are trained only on data from the formal economy, they risk excluding millions of people from services, credit, or public benefits.
To address this, Nigeria needs a coordinated, long-term strategy for data governance. This includes establishing common metadata standards, fostering public-private data stewardship agreements, training professionals in ethical data handling, and investing in digital infrastructure that can support secure and interoperable data systems. A National Data Governance Framework, aligned with regional goals under the African Union’s Digital Transformation Strategy, would be a meaningful step forward.
The Ethical Challenge: Governing the Algorithms Themselves
Perhaps the most complex piece of the puzzle is AI governance itself, that is, how we ensure that algorithms behave in ways that align with our national values, constitutional rights, and democratic aspirations.
At present, Nigeria has no legal or institutional framework for evaluating the ethics, fairness, or accountability of AI systems. There is no requirement for developers to conduct impact assessments, disclose model logic, or test for bias. There are no national standards for algorithmic transparency or explainability. As such, Nigerians have no meaningful recourse when AI systems get things wrong, as they inevitably will.
The PwC report stresses that without ethical guidelines, risk assessment mechanisms, and accountability frameworks, AI can make decisions that are legal but morally unacceptable. In a country where regulatory capacity is limited and access to justice can be slow, this is especially dangerous.
Nigeria must therefore consider creating an AI Governance Council, a multi-stakeholder body comprising regulators, technologists, ethicists, civil society, and academia. Such a council could establish voluntary codes of conduct, review high-risk AI applications, and help shape legislative frameworks grounded in Nigeria’s socio-political realities. It could also ensure that Nigeria’s voice is heard in global discussions on AI ethics and regulation.
A Pan-African Responsibility
While this article focuses on Nigeria, the issues raised here resonate across Africa. Countries like Kenya, Ghana, Rwanda, and South Africa are also grappling with similar challenges. What we need is a continental conversation, one that aligns innovation with justice, growth with accountability, and ambition with restraint.
The African Union has made progress by launching the African Union Convention on Cyber Security and Personal Data Protection and supporting initiatives like Smart Africa. But these efforts must be accelerated, coordinated, and localised.
Africa’s strength lies in its ability to leapfrog, not just technologically, but ethically. We have a chance to build AI systems that are inclusive, community-oriented, and transparently governed from the ground up. But this will only happen if countries like Nigeria take the lead, not just in coding solutions, but in governing them wisely.
Conclusion: Choosing Trust Over Speed
AI is not inherently good or bad. It is a mirror, reflecting the systems, values, and data we feed into it. In Nigeria, that mirror reveals a landscape full of promise, but also vulnerability. If we continue down the path of innovation without governance, we may build fast, but we will not build fairly, securely, or sustainably.
The time for leadership is now. Nigeria must treat data privacy, data governance, and AI ethics not as technical matters for developers and regulators alone, but as national priorities that cut across every sector of our economy and every level of our society.
In the coming years, how we govern AI will determine how deeply our people trust it. And in a world increasingly shaped by algorithms, trust may be the most valuable asset of all.
Opebiyi is a Data and AI Governance Specialist and AI Ethics Advocate. He works at the intersection of digital innovation, regulatory compliance, and public interest. He writes and speaks regularly on topics such as data protection, data/AI governance, and digital policy in emerging markets.