Compliance with anti-money laundering (AML) and Know Your Customer (KYC) laws and regulations is often a foremost concern for digital assets companies. Despite efforts to monitor and detect fraud, these companies face significant challenges as technological threats, including generative AI, continue to advance.
Gen AI can produce highly realistic deepfakes and false documentation, and it can nearly instantaneously weave together a compelling life story to support the fake information, for example, by seeding a person’s social media accounts with false posts. In one recent incident, scammers used deepfake technology to simulate a video conference that appeared to include a multinational financial company’s CFO and other executives, tricking a worker into transferring almost $26 million to the scammers.
In short, gen AI’s ability to produce compelling deepfake audio and visuals on demand can wreak havoc on existing governance systems designed to protect consumers.
Current KYC Mechanisms Are Insufficient With The Advancement Of Gen AI
AML and KYC programs are essential for financial institutions to verify the identities of their customers and ensure compliance with the laws designed to fight money laundering, fraud, and terrorism financing. However, many crypto companies have weak or porous KYC controls, leading to increased fraud risk. According to CoinDesk, crypto users lost almost $4 billion to “scams, rug pulls and hacks” in 2022 and around $2 billion in 2023.
Since digital assets companies generally do not have physical locations the way traditional financial institutions do, they need KYC methods adapted for a remote environment. Commonly used KYC verification methods include:
- Taking a selfie while holding a handwritten sign with the current date;
- Snapping a photo of the user’s driver’s license or other government ID; or
- Recording a live video answering security questions to confirm user identity and “liveness.”
However, gen AI can bypass these verification methods. For example, services like OnlyFake use AI to create fake IDs that have purportedly passed stringent KYC checks on major cryptocurrency exchanges like Binance and Coinbase. These fake IDs are generated using neural networks and can be purchased for as little as $15. The Deepfake Offensive Toolkit (dot) creates deepfakes for virtual camera injection, allowing users to swap their face with an AI-generated face to defeat identity verification. According to reporting from The Verge, financial institutions’ KYC identity verification tests, which usually require a user to look into their phone or laptop camera, are easily tricked by the deepfakes generated by dot.
Utilizing Gen AI In Combination With Blockchain Can Mitigate Fraud Enabled By Gen AI
Blockchain and AI are complementary technologies that can be effective for fraud detection and investigation, both independently and when combined.
Blockchain For Verification
Decentralization, immutability, and rule-based consensus are some of the core features of blockchain technology that make it useful for identity verification and fraud detection. For example, transactions written to the blockchain are immutable (i.e., data cannot be deleted or modified), which can prevent would-be fraudsters from altering transaction data. Furthermore, transactions written to public blockchains, like the Bitcoin blockchain, are fully searchable and transparent, making it difficult for fraudulent activities to go undetected. Blockchains are also distributed by nature, making it more difficult for a single entity or a small group of entities to make unauthorized changes to data on the blockchain. Finally, data on blockchains can be cryptographically hashed, generating a unique digital fingerprint that is nearly impossible to forge. This feature helps expose fraudulent transactions: if anyone tampers with data on the blockchain, the hash value changes, revealing the alteration.
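To make the tamper-evidence point concrete, the sketch below (illustrative Python only, not any particular chain’s implementation) hash-links a toy sequence of blocks with SHA-256; altering a past transaction invalidates every fingerprint that follows:

```python
# Minimal sketch of hash-linking: each block stores the SHA-256 hash of the
# previous block, so changing earlier data breaks every link after it.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialize deterministically, then hash. Real chains hash a canonical
    # binary encoding; JSON keeps this example readable.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(transactions: list[str]) -> list[dict]:
    chain = []
    prev_hash = "0" * 64  # genesis placeholder
    for i, tx in enumerate(transactions):
        block = {"index": i, "tx": tx, "prev_hash": prev_hash}
        prev_hash = block_hash(block)
        chain.append(block)
    return chain

def verify_chain(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False  # a link no longer matches: data was altered
        prev_hash = block_hash(block)
    return True

chain = build_chain(["alice->bob 5", "bob->carol 2"])
print(verify_chain(chain))         # True
chain[0]["tx"] = "alice->bob 500"  # tamper with a past transaction
print(verify_chain(chain))         # False: the fingerprint no longer matches
```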
AI For Detection
AI can enhance fraud detection by analyzing user behavior patterns and identifying anomalies in real time. Whereas blockchain technology is useful for auditing past transactions, AI can learn and adapt to potentially fraudulent behaviors as they emerge, flagging suspicious activities that deviate from normal usage. AI can quickly sift through mountains of data and identify subtle inconsistencies that often escape human detection. Machine learning models driving behavioral analysis can evaluate user interactions such as mouse movement patterns and typing style, adding an extra layer of identity verification on top of the blockchain. Pairing AI’s ability to proactively monitor for and detect fraud with blockchain’s ability to authenticate user identity and validate transactions makes for a powerful combination.
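As a rough illustration of this kind of behavioral anomaly detection, the sketch below trains an isolation forest on hypothetical per-session features (typing speed, mouse movement, and login hour are invented for the example); a production system would draw on far richer behavioral signals:

```python
# Sketch of AI-based behavioral anomaly detection: an IsolationForest learns
# what a user's "normal" sessions look like, then flags outliers for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history of a legitimate user's sessions; columns are hypothetical
# features: [keystrokes per minute, mouse travel (thousands of px), login hour]
normal_sessions = np.column_stack([
    rng.normal(220, 20, 500),  # typing speed
    rng.normal(35, 8, 500),    # mouse movement
    rng.normal(14, 2, 500),    # usually logs in mid-afternoon
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# A session consistent with past behavior, and one that is not (e.g., a
# scripted takeover: rapid-fire typing, almost no mouse use, 3 a.m. login).
typical = np.array([[215.0, 33.0, 15.0]])
suspicious = np.array([[400.0, 0.5, 3.0]])

print(model.predict(typical))     # expected [ 1]: matches past behavior
print(model.predict(suspicious))  # expected [-1]: flagged as anomalous
```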
There Is Urgency To Develop Solutions As Gen AI Advances
Crypto-related cybercrime is only growing as AI deepfakes become more believable and realistic. In the face of this growing threat, however, several start-ups have developed AI-centric blockchain tools to fight fraud and other illicit activities in the digital assets industry.
For example, BlockTrace and AnChain.AI are two companies leveraging the synergies of blockchain and AI technology to fight crypto-related crime. BlockTrace, whose mission is to help governments and private enterprises combat cryptocurrency-related financial crime, recently partnered with AnChain.AI, a company that uses AI capabilities to fight fraud, scams, and financial crimes in digital assets. Together, they will deliver solutions that allow national security agencies to use AI to investigate smart contracts, conduct intelligence on blockchain transactions, and provide cybersecurity insights to national security officials.
The industry is only at the cusp of fully realizing the potential of AI and blockchain to combat fraud enabled by AI, and many more developments are yet to come given the breakneck speed at which AI is advancing.