Deepfake fraud has proven its potential to cost businesses millions of dollars. But the deepfake dread reverberating through tech circles goes beyond the pocketbook. With their ability to co-opt someone’s identity without consent, to deceive and to disrupt economic and political processes, deepfakes are troubling on an existential level. Put simply, they’re creepy.
And according to security leaders monitoring the deepfake landscape, they’re only going to get creepier. The technology is developing at exponential speed, and new identity fraud tactics are emerging just as quickly. Generative AI, which many executives had hoped might be a passing fad, has morphed into an increasingly common threat. And while AI-based deepfake detection tools are available, their vendors aren’t guaranteed to govern client data in ways that align with corporate policies.
IDology report shows widespread concern about generative AI
New data from IDology shines a light on the “industrial scale” of fraud being perpetrated with synthetic identities created using generative AI. A release touting the research says 45 percent of fintechs reported increased synthetic identity fraud in the last 12 months, and fully half are concerned that GenAI will create more convincing synthetic identities and better deepfakes.
Per the release, “GenAI has given criminals a path to work faster, scale attacks, and create more believable phishing scams and synthetic identities.” And it is only the beginning: businesses see generative AI-driven attacks as the dominant fraud trend over the next 3-5 years.
The response from IDology is a familiar rallying cry: use AI to fight AI.
“These numbers indicate a need for action,” says James Bruni, Managing Director at GBG IDology. “While Gen AI is being used to escalate fraud tactics, its ability to quickly scrutinize vast volumes of data can also be a boon for fintechs, allowing them to fast-track trusted identities and escalate those that are high-risk. The powerful combination of AI, human fraud expertise and cross-sector industry collaboration will help fintechs verify customers in real-time, authenticate their identities and monitor transactions across the enterprise and beyond to protect against difficult-to-detect types of fraud, such as synthetic identity fraud.”
FS-ISAC proposes deepfake threat taxonomy
The deepfake drum continues beating with the release of a new report from the Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry consortium dedicated to reducing cyber-risk in the global financial sector. Prepared by FS-ISAC’s Artificial Intelligence Risk Working Group, “Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks” outlines broad categories and “a common language of deepfake threats and controls to counter them.”
Like any industry, financial services brings its own specific context for deepfake fraud. One of the most feared new techniques is deepfake CEO video fraud or, more generally, “C-suite impersonation.” Customer biometrics are a target, and banks’ stores of them make the institutions a goldmine for fraudsters committing consumer fraud, often through voice authentication systems. Infrastructure can be attacked, and deepfake detection models themselves are often in the crosshairs.
The risks are various: destabilized markets, costly data breaches, humiliation leading to reputational damage.
The meat of FS-ISAC’s paper is its Deepfake Threat Taxonomy, which breaks down threats to organizations by category. “The FS-ISAC Deepfake Taxonomy covers two topics,” says the paper. “The six threats that financial services firms face from deepfakes” and “three primary attack vectors targeting the technologies that detect and prevent deepfakes.” Each defined category has a number of sub-categories, which together offer a broad view of the overall deepfake fraud ecosystem.
“Understanding the different types of threats posed by deepfakes and how they can be taxonomized clarifies the types of controls most suitable to defense,” the paper says. “Financial services institutions should perform a complete threat modeling for each of the threat categories.” A corresponding table of control mechanisms completes the mosaic.
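To make the recommended threat-modeling exercise concrete, here is a minimal sketch in Python of how a firm might encode a deepfake threat taxonomy and walk each category against its controls. The category, sub-category and control names below are illustrative placeholders, not the actual entries FS-ISAC defines; only “C-suite impersonation” and attacks on detection models are named in the article itself.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of a deepfake threat taxonomy for threat modeling.
# Names below are illustrative placeholders, NOT FS-ISAC's actual entries.

@dataclass
class ThreatCategory:
    name: str
    sub_categories: list[str]
    controls: list[str] = field(default_factory=list)

TAXONOMY = [
    ThreatCategory(
        name="C-suite impersonation",            # named in the article
        sub_categories=["executive video call fraud", "executive voice cloning"],
        controls=["out-of-band verification", "call-back procedures"],
    ),
    ThreatCategory(
        name="Attacks on detection models",      # attack vector named above
        sub_categories=["adversarial perturbation", "model evasion"],
        controls=["model hardening", "ensemble detection"],
    ),
]

def model_threats(taxonomy: list[ThreatCategory]) -> None:
    """Walk each threat category, as the paper recommends, listing its controls."""
    for category in taxonomy:
        print(f"Threat: {category.name}")
        for sub in category.sub_categories:
            print(f"  sub-category: {sub}")
        for control in category.controls:
            print(f"  control: {control}")

if __name__ == "__main__":
    model_threats(TAXONOMY)
```

A structure like this maps naturally onto the paper’s pairing of threat categories with a corresponding table of control mechanisms: each category carries its own sub-categories and the controls most suitable to its defense.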
The fight against deepfakes, says FS-ISAC, will need to be collaborative, vigilant and nimble. “While the threat posed by deepfakes to financial institutions is significant and evolving, a proactive, multi-faceted approach to security can substantially mitigate these risks. The path forward lies in the continuous improvement of detection technologies, coupled with robust security practices and comprehensive awareness programs.”
Advanced deepfake fraud soon to fool everyone’s moms
An article from Fortune.com solicits opinions on the deepfake threat from cyber chiefs at SoftBank, Mastercard and Anthropic – and the diagnosis is grim, suggesting we have entered an “AI cold war.”
“You’ve got the criminal entities moving very quickly, using AI to come up with new types of threats and methodologies to make money,” says Gary Hayslip, chief security officer at investment holding company SoftBank. “That, in turn, pushes back on us with the breaches and the incidents that we have, which pushes us to develop new technologies.”
“In a way it’s like a tidal wave,” Hayslip says of the rate at which new AI technologies are spilling into the market.
Fraud detection is also improving, but companies have concerns about what third-party AI vendors are allowed to do with data they collect. Hayslip says you “have to be a little paranoid” in assessing which tools and services get integrated into a company’s security ecosystem. Some products will bring an unacceptable risk, especially in highly regulated industries like healthcare.
Meanwhile, Alissa Abdullah, deputy CSO at Mastercard, says deepfake scams are getting better and more varied. She describes an emerging attack technique in which AI video and audio deepfakes pose as strangers claiming to represent a trusted brand, such as a help desk agent.
“They will call you and say, ‘we need to authenticate you into our system,’ and ask for $20 to remove the ‘fraud alert’ that was on my account,” Abdullah says. “No longer is it wanting $20 billion in Bitcoin, but $20 from 1000 people – small amounts that even people like my mother would be happy to say ‘let me just give it to you.’”