The rapid rise of artificial intelligence has not only unlocked new realms of innovation but has also sparked an alarming era of digital deception. South Africa’s Minister of Health, Dr Aaron Motsoaledi, has sounded the alarm following a disturbing incident in which deep-fake technology was used to fabricate a video falsely implicating one of the country’s most respected scientists in Covid-19 misinformation.
The fake interview, which has been widely circulated on social media, shows Professor Salim Abdool Karim—Director of the Centre for the AIDS Programme of Research in South Africa (CAPRISA)—appearing to make unfounded claims about the Covid-19 vaccine causing harm and fatalities. The video, which also includes the likeness of SABC news anchor Oliver Dickson, was confirmed to be digitally manipulated. Dickson has stated he was never involved in such an interview.
“This is not just misinformation—it’s a calculated, AI-driven disinformation campaign designed to mislead the public, discredit life-saving vaccines, and promote fake health products,” said Dr Motsoaledi. “We are seeing a new level of deception, where advanced technologies are being weaponised for immoral profiteering.”
According to the Department of Health, the video is linked to individuals with business interests promoting a so-called heart remedy via mail order. The product has no scientific validation and is reportedly leaving consumers feeling worse, not better.
CAPRISA has issued a formal statement condemning the video and labelling it as “fake news,” clarifying that neither Professor Abdool Karim nor the organisation has any association with the content.
“Abdool Karim refutes in its entirety the contents of this latest fake video that is currently being circulated on social media sites and other communication applications,” the statement read.
The Department has joined forces with CAPRISA to curb the spread of the video and has issued public alerts on its platforms. It further urged citizens to trust only health information issued by the National Department of Health and the South African Health Products Regulatory Authority (SAHPRA).
“Minister Motsoaledi condemns in the strongest terms possible the fake news campaign by these charlatans who, for nefarious reasons, are determined to create confusion among the people for the sake of immoral profiteering,” the Department added.
In response to the incident, computer engineer and artificial intelligence specialist from Queensburgh, Jadene Tarryn Pillay, emphasised the urgent need for public awareness and ethical responsibility in the use of AI.
“AI is blurring the line between what’s real and what’s fabricated,” Pillay said. “Tools like deep-fakes and generative models can produce hyper-realistic videos, images, and audio that look authentic but are entirely false. This undermines traditional indicators of truth, like visual or audio evidence, which the public used to trust without question.”
Commenting on the faked interview with Professor Abdool Karim, Pillay noted: “When AI is used to fabricate such convincing content, it not only misleads the public but also erodes trust in reputable institutions and media outlets. This reshaping of reality demands a shift in how we engage with information. We now need to ask not just, ‘Did I see it?’ but, ‘Was it verified, and by whom?’”
She added that developers and users of AI have a shared duty to prevent harm: “Safeguards such as watermarking AI content and promoting transparency must become standard practice. Ethical responsibility is no longer optional—it’s essential.”
Pillay stressed the importance of inclusion and digital literacy, especially in under-resourced areas. “In developing countries, access to basic digital education is key. Workshops, community talks, and short videos in local languages can help people understand how AI affects their daily lives—from the news they consume to the services they access.”
She further explained: “People don’t need to be experts to ask important questions like, ‘Who made this tool? What is it being used for? Could it be causing harm?’ That kind of awareness is the first step towards meaningful engagement.”
Despite growing concerns, Pillay remains optimistic about AI’s potential when used ethically. “AI can improve lives—especially in vulnerable communities—by expanding access to education, detecting disease early, and supporting rural development. But it must be built with ethical intent, guided by diverse voices, and used to uplift, not exploit.”
On the fear of AI replacing jobs, she clarified: “AI is more likely to change jobs than eliminate them. It can take over repetitive tasks, giving people more time for creative and strategic work. With investment in up-skilling and education, AI can open doors rather than close them.”
She concluded: “To ensure AI serves us, we need clear rules, transparency, and public involvement. But more importantly, we must remove the fear. AI isn’t here to replace us—it’s here to work with us.”
In the wake of this scandal, the Department of Health has appealed to all South Africans to remain vigilant, to question the validity of online content, and to report misinformation. As AI-generated disinformation becomes more sophisticated, protecting the public from digital manipulation will require collaboration between policymakers, technologists, media platforms, and communities.