Banks must be wary of AI security risks, regulator says

The New York State Department of Financial Services (NYDFS) issued new guidance Wednesday advising financial firms to assess and address cybersecurity risks arising from the use of artificial intelligence.

The guidance, which imposes no new rules or regulations, highlights four AI-related risks that financial sector workers need to be aware of: AI-enabled social engineering, AI-enhanced cyberattacks, theft of nonpublic information (NPI), and increased vulnerabilities stemming from supply chain dependencies.

The regulator warned of the use of deepfakes (AI-generated videos, photos or audio recordings), which are sometimes created in an attempt to “convince employees to divulge sensitive information about themselves and their employers.” “When deepfakes result in the sharing of credentials,” the guidance said, “threat actors are able to gain access to Information Systems containing Nonpublic Information.”

“AI-driven social engineering attacks have led to employees taking unauthorized actions, such as wiring substantial amounts of funds to fraudulent accounts,” the regulator cautioned.

The regulator also warned that the prevalence of AI has increased the speed and scale of cyberattacks and lowered the “barrier to entry” for unsophisticated bad actors seeking access to sensitive NPI. Financial firms hold large volumes of sensitive NPI, the NYDFS noted, making the sector particularly attractive to bad actors.

“I think it’s really about making sure there’s expertise in the institution, making sure they’re engaging with lots of stakeholders, so they understand the development of the technology,” NYDFS Superintendent Adrienne Harris said in an interview with the Wall Street Journal on the guidance’s release.

“It’s about making sure that you’ve got the right expertise in-house — or that you’re otherwise seeking it through external parties — to make sure your institution is equipped to deal with the risk presented,” she said.

Financial firms should have layers of cybersecurity controls that offer overlapping protections, so that if one layer fails, no area is left unprotected, the NYDFS wrote in the guidance. Those controls should include risk assessments and risk-based programs, policies, procedures and plans, along with proper management of third-party vendors and access controls such as multi-factor authentication.
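
To make the defense-in-depth idea concrete, here is a minimal sketch in Python of overlapping access checks in which every layer must pass independently before sensitive data is released. It is illustrative only: the function and field names are hypothetical and do not come from the NYDFS guidance.

```python
# Minimal defense-in-depth sketch: every layer must independently pass
# before nonpublic information (NPI) is released. All names here are
# hypothetical illustrations, not part of the NYDFS guidance.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_id: str
    password_ok: bool       # layer 1: credential check
    mfa_verified: bool      # layer 2: multi-factor authentication
    role_permits_npi: bool  # layer 3: role-based authorization


def grant_npi_access(req: AccessRequest) -> bool:
    """Overlapping controls: a stolen password alone (e.g. phished via a
    deepfake call) is not enough, because the MFA and authorization
    layers still have to pass."""
    return all([req.password_ok, req.mfa_verified, req.role_permits_npi])


if __name__ == "__main__":
    # Credentials were phished, but the MFA layer blocks the request.
    stolen = AccessRequest("analyst-7", password_ok=True,
                           mfa_verified=False, role_permits_npi=True)
    print(grant_npi_access(stolen))  # False
```

The point of the structure is that a single compromised layer, such as credentials surrendered to a deepfake caller, is not sufficient on its own to expose NPI.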

Cybersecurity training that educates all personnel on the risks of AI should also play a role in protecting NPI, NYDFS said in the guidance.

The guidance was issued in response to inquiries about how AI is changing cyber risk and how those risks can be mitigated. It comes a year after another regulator, Federal Reserve Vice Chair for Supervision Michael Barr, warned that the rapid expansion of AI had created a looming cybersecurity risk for the financial services industry.

Since then, a number of firms have fallen victim to cybersecurity breaches, though it’s unclear how many of them involved the use of AI.

“AI has improved the ability for businesses to enhance threat detection and incident response strategies, while concurrently creating new opportunities for cybercriminals to commit crimes at greater scale and speed,” Harris said in a prepared statement.

“New York will continue to ensure that as AI-enabled tools become more prolific, security standards remain rigorous to safeguard critical data, while allowing the flexibility needed to address diverse risk profiles in an ever-changing digital landscape,” she said.
