Why social science is the foundation of ethical AI

Cecilia Danesi, a professor and researcher in AI and civil law, discusses the changing face of AI and the importance of proper governance.

When you think about AI and its many capabilities, it can be difficult to identify an area of modern life that hasn’t in some way been affected by artificial intelligence. From school and work to politics and even healthcare, AI is clearly not a passing fad, which is why proper regulation is so important.

For Cecilia Danesi, a professor and researcher in AI and civil law at the University of Salamanca, stronger governance of AI is crucial if we are to advance communities and prevent social disparities from widening further.

“AI needs to be regulated and it’s important for countries to establish a common ground on ethical governance. The European Union has published the AI Act and this is a clear example of a balance among innovation and protection,” she said.

“Nowadays there is no doubt that we need AI regulation. The idea is to ensure that AI is going to improve our societies and it’s not going to increase the social gaps.”

AI and social science

Danesi earned her PhD in artificial intelligence from the University of Perugia, has authored a book on the impact of AI on society and directs a master’s degree on the ethical governance of AI. Yet when she first started out in the AI sphere, she explained, there was virtually no one working at the intersection of AI and the social sciences.

“For that reason, when I started, it was very difficult because there was no information and it was not clear what the relationship between AI and social sciences was. Nowadays, it is easier to identify why we need social science professionals working on AI,” she explained. 

She noted that this constant change is part of what makes AI such a fascinating and dynamic field, saying its continuous growth makes the work both challenging and enriching.

“For example, when I started to work with AI, generative AI was not at the centre of the debate. After the launch of ChatGPT in 2022, generative AI became the hot topic. Every day there are new frontiers to explore and study,” she said. 

There are many benefits to AI, such as minimising unsafe or mundane tasks, automating difficult work and enabling data-driven decision-making. However, as acknowledged by Danesi, AI can pose a danger to human beings, particularly through algorithmic bias and AI hallucinations. 

“Those risks can be especially dangerous for vulnerable and underrepresented groups. So for that reason, it’s very important to analyse and to work on this intersection between AI and human rights,” she said. 

Using the example of social media, she urged people to give greater consideration to how algorithms control the kind of content we are regularly exposed to, and how often certain groups of people, typically those from underrepresented backgrounds, are largely excluded.

“If we talk about stereotypes, we can easily see that beauty filters are a clear example of the reinforcement of stereotypes. The same happens with racial bias for example, normally social media algorithms do not show diversity, all the faces and bodies that we see are homogenous.”

It isn’t AI versus humanity

Danesi firmly believes that we cannot afford to eliminate the human element in our dealings with AI, saying that we have to guarantee its presence throughout the development of the technology.

As she sees it, the only way to eradicate AI bias and eliminate blindspots is to ensure that AI is exposed to a full spectrum of humanity. “When we are talking about human beings we are not talking about one race, one colour, or one religion, we are talking about diversity,” said Danesi.

For Danesi, education is key, and attention should be given to both the negative and positive effects of AI. She regards the section on AI literacy in the EU AI Act as a ‘diamond’, as it provides an important overview of literacy expectations regarding AI usage.

“Technical professionals, such as programmers and engineers, that are working on AI have no training in AI ethics. AI literacy is very important in both cases for the people who are working in AI, such as developers, programmers etc, but also for users,” she explained. 

Earlier this summer, Danesi spoke at the 2024 European Leadership Academy’s Summer School for Female Leadership, where she addressed 29 young women from the 27 EU member states, the Western Balkans and Ukraine, on the topic of AI ethics. 

“We have to realise that these women are the future, not just the future of AI. [They are] the future in a lot of disciplines, the future of our work,” she said, adding that it is crucial that they be given the knowledge to use advanced technology in a safe and ethical way. She emphasised that this ethical governance of AI is not exclusively beneficial for those working in STEM fields.

“AI is everywhere, so we can implement it in all the sectors and disciplines. Everybody is using AI.” 
