The government has taken a decisive step towards shaping South Africa’s digital future with the release of a draft national artificial intelligence policy — a framework that seeks to address the economic promise of AI while guarding against the social, ethical and constitutional risks that come with rapid technological change.
The draft policy document, released last week, states that the rapid advancement of AI technologies globally and their transformative potential creates an urgent need for South Africa to harness these innovations to remain competitive and relevant.
“The rapid evolution of AI technology presents unprecedented opportunities and unique challenges for South Africa,” it says.
The development of the policy is a strategic imperative to guide the responsible and ethical development, deployment and utilisation of AI across all sectors, the document says. It warns that, in the absence of a coherent national policy and co-ordinated strategies, “South Africa risks falling behind in leveraging AI to address its developmental challenges and improve the wellbeing of its citizens”.
At its core, the policy recognises AI as a general-purpose technology capable of transforming every corner of the economy — from classrooms and clinics to farms, factories and public administration. But it also acknowledges that AI cannot be deployed through a one-size-fits-all approach in a country still grappling with deep inequality, uneven digital infrastructure and skills shortages.
The government has identified education, health care and agriculture as critical priority sectors for early and impactful AI implementation, supported by AI-enabled public administration.
In education, AI is positioned as a tool to personalise learning, identify at-risk learners and improve system-wide planning — provided that deployment is accompanied by safeguards around data privacy and bias. The policy also places strong emphasis on integrating AI literacy across primary, secondary and tertiary education, aiming to build a pipeline of skills that can sustain long-term innovation.
In health care, AI’s potential to improve diagnostics, resource allocation and preventive care is highlighted, particularly in a system under severe strain.
Agriculture is presented as a sector in which AI-driven precision farming, climate risk modelling and market intelligence could boost productivity and food security, especially for small-scale and emerging farmers.
Across all sectors, the document stresses, AI deployment depends on robust digital infrastructure and widespread connectivity. It outlines targeted interventions to expand digital infrastructure, including the development of AI hubs and supercomputing facilities aimed at supporting research, start-ups and small enterprises.
While the national framework will set overarching priorities, norms and ethical standards, sector-specific working groups will be established to develop tailored implementation roadmaps for industries such as manufacturing, energy, infrastructure, transport, trade, agriculture, health care and education. These working groups, aligned to government clusters, will be tasked with translating policy intent into practical strategies, guidelines and budgets suited to the realities of each sector.
AI is projected to contribute about $19.9-trillion (R325-trillion) to the global economy by 2030, according to estimates cited in the document.
A key theme of the document is ethical governance. Provisions on fairness, transparency, accountability and bias mitigation feature prominently, reflecting concerns that poorly governed AI systems could entrench discrimination.
Experts at law firm Bowmans said that, at a high level, the draft policy positions AI as a tool to support inclusive economic growth, job creation, cost reduction and the development of Africa.
The firm said AI policy must be grounded in South Africa’s constitutional framework, human rights standards and socioeconomic context, including the need to address inequality and the digital divide. In this regard, the draft policy makes it clear that constitutional values and the public interest should guide the development, deployment and use of high-impact and high-risk AI systems.
“Businesses developing, deploying, procuring or relying on AI systems in South Africa should begin considering how their existing governance, compliance, data, risk and contracting frameworks may need to evolve in anticipation of a more structured AI regulatory environment,” the firm said.
Ahmore Burger-Smidt, director and regulatory head at Werksmans Attorneys, described the draft policy as an “ambitious and necessary step” towards positioning the country in the global AI economy.
“It shows clear alignment with leading international frameworks, particularly in its adoption of a risk-based approach and its emphasis on ethical AI principles,” she said. “However, while the vision is compelling, the policy often stops short of providing the level of detail businesses and institutions will need to operate with certainty.
“This is especially true when it comes to defining risk categories and setting enforceable compliance standards.”
Of greater concern, Burger-Smidt said, is the gap between principle and practice on issues such as privacy and data protection. Although the policy signals alignment with the Protection of Personal Information Act (Popia), it does not adequately address how core concepts such as purpose limitation and data minimisation will function in AI systems that rely on large, repurposed datasets.
“Similarly, the rights of individuals in the context of automated decision-making are underdeveloped, potentially placing South Africa behind more mature jurisdictions such as the EU and the UK.”
The draft policy also raises questions about implementation, she said.
It proposes the introduction of multiple new AI-focused institutions, including a national AI commission, an AI ethics board, an AI regulatory authority, an AI ombuds office, a national AI safety institute and an AI insurance superfund modelled on the Road Accident Fund, designed to compensate people harmed by AI-driven decisions.
“This could create fragmentation in an already complex regulatory landscape, without clear guidance on roles, co-ordination or resourcing,” said Burger-Smidt.
Business Times