As the volume of digital business rises year over year, the potential for AI-enhanced digital fraud increases with it, according to TeleSign.
A new TeleSign report highlights consumer concerns and uncertainty about how AI is being deployed, particularly regarding digital privacy, and emphasizes the need for ethical AI and ML use to combat fraud, hacking, and misinformation (aka “AI for good”). With a record number of voters heading to the polls in 2024, it also explores consumer attitudes about the potential misuse of AI that could undermine election confidence.
“The emergence of AI over the past year has brought the importance of trust in the digital world to the forefront,” said Christophe Van de Weyer, CEO of TeleSign. “As AI continues to advance and become more accessible, it is crucial that we prioritize fraud protection solutions powered by AI to protect the integrity of personal and institutional data—AI is the best defense against AI-enabled fraud attacks.”
Voters fear AI-generated content in elections
The rise of AI is magnifying the importance of trust in business. 87% of Americans believe brands are responsible for protecting users’ digital privacy. Yet when it comes to AI’s impact on digital privacy, there is a surprising level of ambivalence: 44% of US respondents believe AI/ML will make no difference to their susceptibility to digital fraud. This comes against a backdrop of rising account takeover attempts and other fraud attacks fueled by generative AI.
Younger people are also more likely (47%) than older people (39%) to trust companies that use AI or ML to protect them against fraud attacks.
In a year when more voters than ever before will head to the polls, representing about 49% of the world’s population, concern over how AI could affect confidence in elections is high: 72% of voters worldwide fear AI-generated content will undermine upcoming elections in their country.
In the US, which holds its presidential election this November, 45% of respondents report seeing an AI-generated political ad or message in the past year, while 17% have seen one within the past week.
74% of US respondents agree that they would question the outcome of an election held online. The global average is slightly lower, at 70%, making Americans the least likely to trust online election results.
Misinformation undermines trust in election outcomes
What’s more, 75% of US respondents believe misinformation has made election results inherently less trustworthy. In particular, 81% of Americans fear that misinformation from deepfakes and voice clones is negatively affecting the integrity of their elections. Fraud victims are more likely (21%) to believe they have been exposed to a deepfake or voice clone in the past year.
69% of respondents in the US do not believe they have recently been exposed to deepfake videos or voice clones. The global average rises to 72%.
With the rapid advancement of generative AI fueling alarming fraud trends such as an increase in account takeover attempts, it is essential that businesses use technology like AI for good to stop fraud attempts in their tracks.
Despite advances in detecting and removing deepfakes, distribution of AI-generated content via fake accounts remains a key challenge. A critical way for businesses to stop the spread of fake accounts and deepfakes is to implement secure protocols for proving users are real.
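To make that idea concrete, below is a minimal sketch of one common protocol for proving a user is real: one-time-passcode (OTP) phone verification. This is an illustrative example built on the Python standard library only; the `OtpVerifier` class, the `send_sms` stand-in, and the five-minute expiry are assumptions for the sketch, not TeleSign’s actual API or the report’s recommended implementation.

```python
# Illustrative OTP phone-verification sketch (stdlib only).
# Assumptions: OtpVerifier, send_sms, and the 5-minute expiry are
# hypothetical names/policies chosen for this example.
import hashlib
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed policy: codes expire after 5 minutes


def send_sms(phone_number: str, message: str) -> None:
    """Stand-in for a real SMS delivery provider."""
    print(f"SMS to {phone_number}: {message}")


class OtpVerifier:
    """Issues and checks short-lived, single-use passcodes per phone number."""

    def __init__(self) -> None:
        # phone number -> (sha256 hash of the code, expiry timestamp)
        self._pending: dict[str, tuple[str, float]] = {}

    def start_verification(self, phone_number: str) -> None:
        code = f"{secrets.randbelow(1_000_000):06d}"  # random 6-digit code
        digest = hashlib.sha256(code.encode()).hexdigest()
        self._pending[phone_number] = (digest, time.time() + CODE_TTL_SECONDS)
        send_sms(phone_number, f"Your verification code is {code}")

    def check_code(self, phone_number: str, submitted_code: str) -> bool:
        entry = self._pending.get(phone_number)
        if entry is None:
            return False
        digest, expires_at = entry
        if time.time() > expires_at:
            del self._pending[phone_number]  # expired codes are discarded
            return False
        submitted_digest = hashlib.sha256(submitted_code.encode()).hexdigest()
        if hmac.compare_digest(digest, submitted_digest):  # constant-time compare
            del self._pending[phone_number]  # codes are single-use
            return True
        return False


if __name__ == "__main__":
    verifier = OtpVerifier()
    verifier.start_verification("+15550100")
    # In a real flow the user would submit the code they received;
    # a wrong guess is rejected.
    print(verifier.check_code("+15550100", "000000"))  # almost certainly False
```

Storing only a hash of the code and enforcing expiry and single use are standard hygiene choices for this kind of flow; a production system would also rate-limit attempts per phone number.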