Bolstering your cyber defenses in the age of AI

Ryan Hittner is an audit and assurance principal at Big Four accounting firm Deloitte & Touche LLP. Views are the author’s own.

The rise of generative artificial intelligence has produced powerful, easily accessible, and scalable tools that fraudsters can exploit, creating a broad range of cybersecurity issues, from data leakage to malware, and opening the door to various forms of theft.

Earlier this year, CNN.com published an article about a deepfake scam that illustrates an increasingly alarming fraud problem, one that ought to concern any executive: a finance worker was duped into paying $25 million to a fraudster who impersonated the company’s CFO during an AI-enhanced video conference call.

This is an unnerving foreshadowing of what the threat landscape may become.

GenAI’s ability to create deepfakes that look and sound convincing is just one example of its potential usefulness to bad actors. The technology can also be exploited to boost email phishing scams, allowing a criminal to draft emails in the style and syntax of a trusted individual or source. Another malicious use case involves manipulating data or forging documents to support bogus transactions. And it’s relatively easy to combine several of these methods with GenAI, making fraud that much harder to prevent or detect.

In May, the FBI’s San Francisco division issued a warning about the “escalating threat” posed by cyber criminals who use AI to “conduct sophisticated phishing/social engineering attacks and voice/video cloning scams.”

To be sure, cybersecurity is a challenge that predates the development of GenAI. But the technology’s rapid evolution has only escalated the potential threats.

The persuasiveness of AI-enhanced threats, and the speed at which they are developing, may well put defending against them beyond the capabilities of many traditional risk management protocols. Deloitte’s Center for Financial Services predicts that GenAI could drive fraud losses to $40 billion in the U.S. by 2027, up from $12.3 billion in 2023.

Defensive measures

Many organizations already take cyber threats seriously, but the age of GenAI fraud is a game changer across the entire business landscape. Some defensive measures to consider include:

  • Learning the technology. Familiarize yourself with the basics of GenAI, including algorithms, data sources, trends, and techniques, to better understand the technology’s strengths. Staying informed about advances in AI, especially use cases relevant to your industry, may help you better understand potential risks and vulnerabilities and determine which capabilities you need to improve.
  • Knowing your GenAI vulnerabilities. It’s important to identify which of your security protocols are most likely to be compromised or misled by GenAI-fabricated content, whether voice, video, audio, documents, or something else. A risk identification process, including activities such as risk hackathons and brainstorming sessions, can help uncover these vulnerabilities. Areas where organizations may focus include bolstering access and approval processes with multiple levels of approval and multifactor authentication to verify the identities of personnel (see the sketch after this list), and strengthening processes for verifying documents received from third parties.
  • Conducting regular workforce training. Fraud schemes will likely continue to evolve, and your organizational understanding should aim to keep pace. Both employees and key stakeholders should know how to identify potential GenAI threats, as well as how to appropriately respond to a breach. Consider updating your security processes to include a greater variety of data when evaluating and validating documents, requests, or transactions.
  • Combining organizational expertise. Cyber threats are now complex enough that an effective defense often requires a multidisciplinary approach. Consider collaborating with other departments, such as IT and human resources, so you can comprehensively assess AI-enabled fraud risks and develop your own knowledge and skills in-house.
  • Sharing what you learn. Once bad actors discover a weakness anywhere, other organizations will likely be targeted soon. Sharing your findings can help protect more businesses.
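
To make the access-and-approval point above concrete, here is a short, purely illustrative sketch in Python. The Approval structure, role names, and dollar thresholds are assumptions chosen for the example, not controls prescribed in this article or by any particular product. The idea it shows: a payment-release check that requires more MFA-verified approvers, drawn from distinct roles, as the transaction amount grows.

```python
# Illustrative sketch only: a tiered payment-release gate that combines
# multiple approval levels with a multifactor-authentication requirement.
# Roles, thresholds, and field names are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Approval:
    approver_id: str
    role: str            # e.g. "manager", "controller", "cfo"
    mfa_verified: bool   # did this approver pass an out-of-band second factor?

def can_release_payment(amount: float, approvals: list[Approval]) -> bool:
    """Require more MFA-verified approvers, from distinct roles, as the amount grows."""
    required = 1 if amount < 10_000 else 2 if amount < 250_000 else 3
    verified = [a for a in approvals if a.mfa_verified]
    distinct_roles = {a.role for a in verified}
    return len(verified) >= required and len(distinct_roles) >= required

# Example: a $25 million transfer with only two MFA-verified approvers is blocked.
approvals = [
    Approval("u123", "manager", True),
    Approval("u456", "controller", True),
]
print(can_release_payment(25_000_000, approvals))  # False -- one approver short
```

The tiered threshold is the point: a $25 million transfer, like the one in the CNN-reported deepfake case, should never hinge on a single person’s judgment during a single video call.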

While no threat can be eliminated completely, the likelihood of being impacted by a GenAI-enabled fraud scheme can be reduced. It is as important as ever for organizations to think proactively about their defenses and risk management, and regularly reassess and update their protocols in response to rapidly evolving GenAI-enabled threats.
