Risks, Realities, and Lessons for Businesses

In 2024, a year proclaimed the “Year of the Election,” voters in countries representing over half the world’s population headed to the polls. This massive electoral wave coincided with the rising prominence of Generative AI (GenAI), sparking debate about its potential impact on election integrity and public perception. Businesses, like political players, now face a landscape in which GenAI can be both a risk and an opportunity.

GenAI’s ability to produce highly sophisticated and convincing content at a fraction of the previous cost has raised fears that it could amplify misinformation. The dissemination of fake audio, images, and text could reshape how voters perceive candidates and parties. Businesses, too, face challenges in managing their reputations and navigating this new terrain of manipulated content.

The Explosion of GenAI in 2024

Conversations about GenAI across eight major social media platforms, online messaging forums, and blog sites surged by 452% in the first eight months of 2024 compared to the same period in 2023, according to data from Brandwatch. Many expected 2024 to be the year that deepfakes and other GenAI-driven misinformation would wreak havoc in global elections.

However, reality proved more nuanced than these initial concerns. While deepfake videos and images did gain some traction, the more conventional forms of AI-generated content, such as text and audio, appear to have posed the greater challenge: they were harder to detect, more believable, and cheaper to produce than deepfake images and videos.

The ‘Liar’s Dividend’ and the Challenge for Truth

One of the most significant concerns to emerge with GenAI is what has been dubbed the “Liar’s Dividend.” This refers to the growing difficulty of convincing people of the truth as belief in the widespread prevalence of fake content spreads.

It is a “Liar’s Dividend” because it allows people to lie about things that have really happened, explaining away evidence as fabricated content. Worryingly, in politically polarized countries like the United States, the Liar’s Dividend could make it even harder for politicians and their supporters to agree on basic facts.

For businesses, this phenomenon also poses serious risks. If a company faces accusations, even presenting real evidence to refute them might not be enough to convince the public that the claims are false. As people become more skeptical of all content, it becomes harder for companies to manage their reputations effectively.

What Have We Learned So Far?

Despite early concerns, 2024 has not yet seen the dramatic escalation of GenAI manipulation in elections that many feared. Several factors have contributed to this:

  • Public Awareness: The public’s ability to detect and call out GenAI-generated content has improved significantly. Regulators, fact-checking organizations, and mainstream media have been proactive in flagging misleading content, contributing to a reduction in its impact.
  • Regulatory Readiness: Many countries have introduced regulations to address the misuse of GenAI in elections. Media outlets and social media platforms have also adopted stricter policies to combat misinformation, reducing the spread of AI-manipulated content.
  • Quality Limitations: The production quality of some GenAI-generated content has fallen short of what many commentators feared, making it easier to identify and call out fake content before it can go viral.

However, there have still been notable instances of GenAI manipulation during the 2024 election cycle:

  • France: Deepfake videos of Marine Le Pen and her niece Marion Maréchal circulated on social media, leading to significant public debate before being revealed as fake.
  • India: GenAI-generated content was used to stir sectarian tensions and undermine the integrity of the electoral process.
  • United States: There were instances of GenAI being used to create fake audio clips mimicking Joe Biden and Kamala Harris, causing confusion among voters. One political consultant involved in a GenAI-based robocall scheme now faces criminal charges.

Exploiting Misinformation

For businesses, the lessons from political GenAI misuse are clear: the “Liar’s Dividend” is a real threat, and companies must be prepared to counter misinformation and protect their reputations. As more people become aware of how easily content can be manipulated, they may grow increasingly skeptical of what they see and hear, making it even harder to manage crises, respond to accusations, and protect brand credibility.

At the same time, proving a negative — something did not happen — has always been difficult. In a world where GenAI can be used to create false evidence, this challenge is magnified. Companies need to anticipate this by building robust crisis management plans and communication strategies.

Positive Uses of GenAI

While much of the discussion around GenAI focuses on its negative aspects, there are positive applications as well, especially in political campaigns, which offer lessons for businesses:

  • South Korea: AI avatars were used in political campaigns to engage younger voters, showcasing the technology’s potential for personalized and innovative voter interaction.
  • India: Deepfake videos of deceased politicians, authorized by their respective parties, were used to connect with voters across generations, demonstrating a creative way to use GenAI in a positive light.
  • Pakistan: The Pakistan Tehreek-e-Insaf (PTI) party, led by jailed former Prime Minister Imran Khan, effectively used an AI-generated victory speech after its surprisingly strong electoral performance. The video received millions of views and resonated with voters, demonstrating GenAI’s ability to amplify campaign messages in powerful ways.

Looking Ahead: GenAI’s Role in Crisis Management

For businesses, the key takeaway from the 2024 election cycle is the importance of planning for the risks posed by GenAI. While the technology has not yet fundamentally reshaped the information environment, its potential to do so remains. Companies must proactively address AI-generated misinformation and develop strategies to separate truth from falsehood.

At the same time, businesses should also explore the positive uses of GenAI to engage with their audiences in creative ways, much like political campaigns have done. As technology evolves, firms that are able to harness its potential while mitigating its risks will be better positioned to navigate the complexities of the modern information landscape.

Joshua Tucker is a Senior Geopolitical Risk Advisor at Kroll, leveraging over 20 years of experience in comparative politics with a focus on mass politics, including elections, voting, partisan attachment, public opinion formation, and political protest. He is a Professor of Politics at New York University (NYU), where he is also an affiliated Professor of Russian and Slavic Studies and Data Science, directs the Jordan Center for the Advanced Study of Russia, and co-directs the Center for Social Media Politics. His current research explores the intersection of social media and politics, covering topics such as partisan echo chambers, online hate speech, disinformation, false news, propaganda, the effects of social media on political knowledge and polarization, online networks and protest, the impact of social media algorithms, authoritarian regimes’ responses to online opposition, and Russian bots and trolls.

George Vlasto is the Head of Trust and Safety at Resolver, a Kroll business. Resolver works with some of the world’s leading social media companies, Generative AI model-makers and global businesses to identify and mitigate harmful content online. George leverages a 15-year career as a diplomat for the UK government, working in a range of locations around the world, to bring a global perspective to the subject of online harms. He has a deep knowledge of online and offline risk intelligence and extensive experience in bringing insight from these domains together to understand the real-world impact for businesses, online platforms and society.

This article appeared in Cybersecurity Law & Strategy, an ALM publication for privacy and security professionals, Chief Information Security Officers, Chief Information Officers, Chief Technology Officers, Corporate Counsel, Internet and Tech Practitioners, and In-House Counsel.
