As cyberthreats grow, will generative AI come to the rescue?

Futurist Shelly Palmer marks Nov. 30, 2022, as the date when AI went from knowledge gathering to solution generation. Not coincidentally, that’s the day OpenAI released ChatGPT to the public. Palmer calls this the “C/G Boundary,” or Curation/Generation AI Boundary, which “marks the transition from a time when AI was predominantly used to curate and organize preexisting content to a period characterized by AI’s ability to autonomously generate content and solutions.”

Cyberthreats, too, have been on a steady climb since then. “AI in the hands of hackers and other bad actors has given those parties an expanded range of options to perpetrate harmful and costly scams and cyberattacks,” writes Stefanini Vice President Fabio Caversan.

There’s not much concrete proof that hackers are using AI-specific tools in their attacks. “Thus far, we haven’t seen generative AI used extensively in real-world attacks,” wrote SURGe founder Ryan Kovar and Splunk Chief Technology Officer Kirsty Paine in an October 2023 blog post. But attacks in the wild offer anecdotal evidence that generative AI is being tapped for its efficiency and fine-tuning in a growing variety of distributed denial-of-service attacks, ransomware campaigns and phishing scams. If one benefit of generative AI is that it automates repetitive tasks, it stands to reason that it gives hackers a leg up in crafting phishing emails in bulk that look less amateurish and near-identical to legitimate ones.
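To make the automation point concrete, here is a minimal, hypothetical Python sketch of the kind of loop that mass-produces polished, personalized email copy; it is the same pattern security teams use to generate authorized phishing-awareness simulations. The roster, prompt and model name are illustrative assumptions, not details from any reported attack.

    # Hypothetical sketch: bulk-generating *simulated* phishing emails for an
    # authorized security-awareness exercise. It illustrates how trivially an
    # LLM automates the repetitive work of personalized, polished email copy.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    employees = [("Ana", "finance"), ("Raj", "engineering")]  # toy roster

    for name, dept in employees:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[{
                "role": "user",
                "content": (
                    f"Write a short, professional email to {name} in {dept} "
                    "asking them to review an attached invoice. This is for "
                    "an authorized internal phishing-awareness simulation."
                ),
            }],
        )
        print(resp.choices[0].message.content)

Each message comes back personalized and grammatically clean, which is exactly what makes AI-assisted phishing harder to spot.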

The threats keep coming, creatively

Generative AI threats are getting even more creative. A few years ago, it would have taken a platoon of hackers to create a full complement of deepfakes for nefarious purposes, but video and audio AI tools have since emerged that mimic humans with unsettling realism. In one recent example, a still-unidentified group created deepfakes of an architecture firm’s CEO and other principals holding a corporate video meeting, convincing an employee to transfer $25 million to an external account.

AI is being used not only to tidy up grammar and typos but also to generate emails that seem to originate internally, complete with fake corporate email addresses, and then target swaths of employees. The FBI considers these newer, more sophisticated business email compromise attacks to be the “most financially damaging” of online crimes.
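One defense that follows directly from this tactic is flagging sender domains that closely resemble, but do not exactly match, the real corporate domain. The Python sketch below is a minimal illustration of that check, not any vendor’s product; the domain name and similarity threshold are assumptions.

    # Minimal sketch: flag sender domains that look like, but are not, the
    # corporate domain. Real mail gateways add SPF/DKIM/DMARC verification.
    from difflib import SequenceMatcher
    from email.utils import parseaddr

    CORPORATE_DOMAIN = "examplecorp.com"  # assumed internal domain

    def looks_spoofed(from_header: str, threshold: float = 0.85) -> bool:
        """Return True if the sender domain is a near-identical lookalike."""
        _, address = parseaddr(from_header)
        domain = address.rsplit("@", 1)[-1].lower()
        if domain == CORPORATE_DOMAIN:
            return False  # exact match; authentication checks still apply
        similarity = SequenceMatcher(None, domain, CORPORATE_DOMAIN).ratio()
        return similarity >= threshold

    print(looks_spoofed("CFO <cfo@examp1ecorp.com>"))  # True: a '1' swapped in for 'l'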

Fintechs are particularly susceptible to what are known as account takeover attacks, where monetary gain is the primary goal. Social engineering and outright purchases of compromised credentials on the dark web are among the methods for gaining entry. A recent Abnormal Security report says ATO attacks are on the rise, with 83% of survey participants saying their organization has experienced an incident.
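Defenses against account takeover typically watch for logins that break a user’s established pattern. Here is a schematic Python sketch of that idea; production systems layer in device fingerprints, geolocation and login-velocity checks, and every name and threshold below is an illustrative assumption.

    # Schematic sketch: flag the first login from an unfamiliar network and
    # device combination so it can trigger step-up authentication or an alert.
    from collections import defaultdict

    seen = defaultdict(set)  # user -> (network prefix, user agent) pairs seen before

    def is_suspicious(user: str, ip: str, user_agent: str) -> bool:
        """Return True the first time a user logs in from a new context."""
        fingerprint = (ip.rsplit(".", 1)[0], user_agent)  # coarse /24-style prefix
        if fingerprint in seen[user]:
            return False
        seen[user].add(fingerprint)
        return True

    print(is_suspicious("ana", "203.0.113.7", "Firefox"))  # True: first sighting
    print(is_suspicious("ana", "203.0.113.9", "Firefox"))  # False: same network, same browser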

Developers have no immunity to the creativity of attacks, either. GitHub is rife with AI-savvy hackers who use the very tools hosted in its repositories to build malware that infiltrates projects and spreads compromised payloads.

Generative AI scrambles to the rescue

While threats continue to grow in creativity, sophistication and attack vectors, cybersecurity companies are working with firms outside the security sector in a rising-tide-lifts-all-boats synergy. A recent Check Point survey shows 61% of chief information security officers are already deep into exploring AI-based solutions, and a number of companies are elbow-deep in generative AI-based security countermeasures:

  • Nvidia announced in March that its AI-based NIM runtime and security reference architecture can be used to create microservices tailored to specific tasks, including security. It also announced partnerships to embed its microservices into security offerings from Trend Micro, CrowdStrike and Microsoft, to name a few.
  • Microsoft is tackling cybersecurity on the data governance side, building AI-based security hooks into its Purview data governance service. It has also developed a security-focused version of its Copilot AI assistant, Copilot for Security.
  • Google’s Gemini Pro LLM is the centerpiece of the company’s new Threat Intelligence service, where it is used to reverse engineer malware at breakneck speed and to apply natural language processing to threat assessments and reporting.
  • Cisco has a slew of solutions as well, but one interesting offering takes the small language model route with the ThousandEyes platform, aimed at companies that don’t have massive fortunes to throw at security. ThousandEyes was AI-focused well before Cisco acquired the platform in 2020 and absorbed its tooling, and AI remains integral to its detection, remediation and optimization security cycle.

This is just the beginning. Expect more generative AI solutions in the cybersecurity pipeline focused on areas such as anomaly detection using generative models, malware detection sharpened by generative adversarial networks, and real and synthetic datasets built specifically for training LLMs and SLMs on cybersecurity problems.
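To ground the first of those ideas, here is a minimal Python sketch of generative-model-based anomaly detection: train an autoencoder to reconstruct “normal” traffic features, then treat high reconstruction error as a red flag. The feature count, architecture and training data are all illustrative assumptions.

    # Minimal sketch: an autoencoder learns to reconstruct "normal" feature
    # vectors; inputs it reconstructs poorly are candidate anomalies.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(20, 8), nn.ReLU(),  # compress 20 traffic features
        nn.Linear(8, 20),             # then reconstruct them
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    normal = torch.randn(1024, 20)  # stand-in for benign traffic features
    for _ in range(200):            # fit the model to "normal" behavior only
        opt.zero_grad()
        loss = loss_fn(model(normal), normal)
        loss.backward()
        opt.step()

    def anomaly_score(x: torch.Tensor) -> float:
        """High reconstruction error suggests traffic unlike the training data."""
        with torch.no_grad():
            return loss_fn(model(x), x).item()

    print(anomaly_score(normal[:1]))              # low: resembles training data
    print(anomaly_score(torch.randn(1, 20) * 5))  # higher: out of distribution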

Author: Rayne Chancer