Santander Deploys Deepfakes To Raise Awareness Of AI Scam Risks, With Half Of Brits Unaware Or Confused By The Emerging Threat

Santander has teamed up with ‘finfluencer’ Mr Money Jar to deliver a stark warning against falling victim to AI deepfake scams, an up-and-coming tactic deployed by fraudsters.

As part of the initiative, Santander has created deepfake videos of Mr Money Jar and Santander fraud lead Chris Ainsley, to show just how realistic deepfakes already are, and how Brits can best protect themselves. The videos will be available on social media to raise awareness.

A deepfake is a video, sound, or image of a real person that has been digitally manipulated through artificial intelligence (AI), to convincingly misrepresent an individual or organisation. With deepfake generators and software widely available, fraudsters simply require authentic footage or audio of their intended victim – often found online or through social media – to create their deepfake.

New research from Santander shows over half of Brits (53%) have either not heard of the term deepfake or misunderstand what it means, with just 17% of people confident they could easily identify a deepfake video.

The data also shows that many have already come across a deepfake, with over a third of Brits (36%) having knowingly watched one. Many of these were seen on social media: 28% of respondents reported having seen a deepfake on Facebook, followed by 26% on X (formerly Twitter), 23% on TikTok, and 22% on Instagram.

Chris Ainsley, Head of Fraud Risk Management at Santander said: “Generative AI is developing at breakneck speed, and we know it’s ‘when’ rather than ‘if’ we start to see an influx of scams with deepfakes lurking behind them. We already know fraudsters flood social media with fake investment opportunities and bogus love interests, and unfortunately, it’s highly likely that deepfakes will begin to be used to create even more convincing scams of these types.

“More than ever, be on your guard and just because something might appear legitimate at first sight – doesn’t mean it is. Look out for those telltale signs and if something – or someone – appears too good to be true, it’s probably just that.”

Data published today also reveals:

  • Brits’ top concern when it comes to deepfake technology is that it will be used to steal people’s money (54%) – ahead of other concerns, such as manipulation in elections (46%) and the generation of fake biometric data (43%).
  • Four in five people (78%) expect fraudsters to use the technology, and six in ten (59%) say they are already more suspicious of things they see or hear because of deepfakes.

Online ‘finfluencer’ Timi Merriman-Johnson (@mrmoneyjar) said: “The rate at which generative AI is developing is equal parts fascinating and terrifying. It is already very difficult to spot the difference between deepfake videos and ‘real’ ones, and this technology will only get better from this point forward. This is why it’s very important for users to be aware of the ways in which fraudsters use technology like this to scam people.

“As I said in the video, if something sounds too good to be true, it probably is. People don’t tend to broadcast lucrative investment opportunities on the internet. If you are ever in doubt as to whether a company or individual is legitimate, you can always search for them on the Financial Conduct Authority Register.”
