AI ethics and the African experience

Freely available and largely unregulated tools make it possible for anyone to generate false information and fake content in vast quantities.

ACCORDING to TechTarget, “artificial intelligence ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence (AI) technology. As AI has become integral to products and services, organisations are starting to develop AI codes of ethics”.

It is critical that Southern Africa, and Africa as a whole, observe the moral principles of AI and shun its use to propagate stereotypes, spread misinformation and disinformation, and engage in other unethical behaviour enabled by the wrong use of AI.

It may also be useful to unpack the two terms: misinformation refers to false or inaccurate information that is unintentionally spread, while disinformation refers to information that is intentionally spread to deceive or manipulate others.

Generative artificial intelligence adds a new dimension to the problem of disinformation: its tools allow anyone to quickly and easily create massive amounts of fake content.

Freely available and largely unregulated tools make it possible for anyone to generate false information and fake content in vast quantities.

These include imitating the voices of real people and creating photos and videos that are indistinguishable from real ones, said DW Akademie in an article titled Tackling Disinformation: A Learning Guide.

Another tool used to generate misinformation and disinformation is the chatbot.

While not all chatbots are built on artificial intelligence, modern ones increasingly use conversational AI techniques, such as natural language processing (NLP), to understand users and automate responses to them.

In practice, however, the difference between conversational AI and a chatbot can be very subtle. Generally, a chatbot focuses on automating human conversations, and modern ones do so in increasingly advanced ways.
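To illustrate the distinction, a simple scripted chatbot can be sketched in a few lines of Python. The keywords and replies below are invented for illustration; conversational AI systems replace this brittle keyword matching with NLP models that infer intent from free-form text.

```python
# Minimal rule-based chatbot: matches keywords against canned replies.
# Conversational AI replaces this brittle matching with NLP models
# that infer the user's intent from free-form text.

RULES = {
    "hello": "Hello! How can I help you today?",
    "price": "Our basic plan starts at $10 per month.",
    "bye": "Goodbye, thanks for chatting!",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand that."

print(reply("Hello there"))              # greeting rule fires
print(reply("What is the price?"))       # price rule fires
print(reply("Explain quantum physics"))  # no rule matches, fallback reply
```

Because such a bot only matches literal keywords, any rephrasing it has not been programmed for falls through to the fallback reply; this is the gap that conversational AI techniques are meant to close.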

Writing about disinformation in The New York Times under the title Disinformation Researchers Raise Alarms About AI, published on February 8, 2023 and updated June 20, 2023, Tiffany Hsu and Stuart Thompson asserted that: “ChatGPT is far more powerful and sophisticated.

“Supplied with questions loaded with disinformation, it can produce convincing, clean variations on the content en masse within seconds, without disclosing its sources. Microsoft and OpenAI introduced a new Bing search engine and web browser that can use chatbot technology to plan vacations, translate texts or conduct research”.

In the same article, Hsu and Thompson observed that: “Predecessors to ChatGPT, which was created by the San Francisco artificial intelligence company OpenAI, have been used for years to pepper online forums and social media platforms with (often grammatically suspect) comments and spam.

“Microsoft had to halt activity from its Tay chatbot within 24 hours of introducing it on Twitter in 2016 after trolls taught it to spew racist and xenophobic language”.

Furthermore, OpenAI researchers have long been nervous about chatbots falling into nefarious hands, writing in a 2019 paper of their concerns that the technology’s capabilities could aid in the malicious pursuit of “monetary gain, a particular political agenda, and/or a desire to create chaos or confusion”.

AI is a concept referring to computer algorithms that solve problems using techniques associated with human intelligence: logical reasoning, knowledge representation and recall, language processing, and pattern recognition.

AI is currently used to build a wide variety of applications from customer service chatbots to complex earthquake or crime prediction programmes.

Southern Africa and Africa need to respect the moral principles of AI and not use AI tools, such as ChatGPT, to peddle harmful information on social, political, economic, racial, ethnic, academic or any other matters.

AI in Southern Africa must be used as a tool to create credible narratives with the potential to change people’s lives for the better, rather than to engender confusion and chaos.

One evil of AI is algorithmic bias, a problem prevalent in the use of AI.

To demonstrate this algorithmic bias, we entered a prompt into Pikaso, an AI art generator from Freepik, asking it to generate images of African children playing outside. The images returned showed children playing barefoot in a remote area.

The houses in the vicinity had badly thatched roofs and were in a near-dilapidated state. By contrast, when the prompt was slightly altered to request an image of European children playing outside, the tool produced images of children in an urban setting, as illustrated by the types of buildings.

To this end, African and Southern African communities need to guard against the effects of algorithmic bias, given how AI is coded, especially on African experiences.

This algorithmic bias stems from an ideological bias that depicts the African situation as generally underdeveloped, as shown by the barefoot children playing outside and the surrounding miserable-looking thatched houses.
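The paired-prompt audit described above can be made a little more systematic. The Python sketch below assumes a human reviewer records descriptive tags for the images each prompt produced (image generation itself is tool-specific and omitted); all tags shown are invented examples. Tags that appear under only one prompt flag a possible bias.

```python
from collections import Counter

# Sketch of a paired-prompt bias audit. The image-generation step is
# tool-specific, so a human reviewer records descriptive tags for the
# images each prompt produced; the tags below are invented examples.

def tag_frequencies(tags_per_image):
    """Count how often each descriptive tag appears across a set of images."""
    counts = Counter()
    for tags in tags_per_image:
        counts.update(tags)
    return counts

# Reviewer-recorded tags for two near-identical prompts.
african_prompt_tags = [
    ["barefoot", "rural", "thatched roof"],
    ["barefoot", "rural", "dirt road"],
]
european_prompt_tags = [
    ["shoes", "urban", "playground"],
    ["shoes", "urban", "paved street"],
]

african = tag_frequencies(african_prompt_tags)
european = tag_frequencies(european_prompt_tags)

# Tags that appear under only one of the two prompts flag a possible bias.
skew = set(african) ^ set(european)
print(sorted(skew))
```

Running the same pair of prompts many times and comparing the tag counts makes the skew quantifiable, rather than resting on a single anecdotal generation.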

Africans need to work hard on engineering solutions to the problem of biased AI algorithms, as a strategy to develop algorithms whose code does not promote biases when information is requested.

Algorithms should be programmed in a way that gives a holistic picture of the life circumstances of any race, from children at play to the houses in the surrounding vicinity.

Africa surely has affluent settlements in some places and cannot be shown as having only badly thatched houses in a state of near dilapidation. It is also a fact that even in the highly developed settlements of the north, all that glitters is not gold, as they too have slums in some places.

The Insider Monkey blog observed that: “Most of the slum dwellers in Romania belong to the Roma population, with many of them burning trash for a living. Around 14.4% of the country’s urban population lives in slums, making it one of the countries with the largest slum population. In 2018, the urban population living in slums was 1,523,000 against the urban population of 10,515,554 in the same year.”

The challenge for Africa, and Southern Africa in particular, is that the “ethics of artificial intelligence” is a popular topic, yet Africa is usually not on the radar in academic discussions about AI ethics and AI policy, not even in “global” and intercultural approaches.

This “forgetting” is likely due to biases and stereotypes about Africa on the part of Western interlocutors.

To those who believe that Africa has little to do with high tech and innovation, a title such as “Responsible AI in Africa” sounds almost like an oxymoron, and at best comes across as a marginal topic. This was noted in the book chapter titled Responsible AI in Africa: Challenges and Opportunities by Chinasa Okolo, Kehinde Aruleba, and George Obaido.

  • Mabhachi is a freelance journalist and wireless technologies and dynamic spectrum access activist. Sibanda is a researcher and digital communications consultant.
