Generative AI turns identity theft into an industrial-scale operation

A Bloomberg investigation shows how generative AI and autonomous agents are supercharging identity theft in the US, from Social Security number lookups on the darknet to deepfake driver’s licenses.

Bloomberg reporter Jennah Haque received a welcome package from the Ultimate Medical Academy in Tampa a few months ago, complete with an XL men’s polo shirt. She had never applied. Someone had submitted 13 college applications and multiple financial aid requests in her name, potentially unlocking more than $50,000 in student loans. The name, date of birth, address, and Social Security number all matched perfectly. The only giveaway: a high school in Alabama she had never set foot in.

Haque’s case is far from unique, as her own reporting reveals. Identity theft powered by generative AI has reached a new level, just as experts feared. Numbers from the Identity Theft Resource Center (ITRC) for 2025 show the highest number of data compromises since tracking began in 2005. Specialized AI tools and deepfakes are now a regular part of the playbook.

Michael Bruemmer, Vice President of Consumer Protection at Experian—one of the three major US credit bureaus—says 40 percent of the 5,000 data breaches his team handled for affected companies last year involved AI. For 2026, Experian expects agentic AI to become the primary driver.

Agentic systems chain multiple steps together automatically

The tools are already mature, according to Bloomberg. Tools like FraudGPT, a language model trained on breach data, can test hundreds of thousands of Social Security numbers in minutes until they find a valid combination tied to an account with little activity.

Some sub-agents scour the darknet for usable personal data, others simultaneously contact multiple banks under different identities, and still others automatically fill out complex government forms for credit applications. A US financial aid employee told Haque that the sheer volume of college applications submitted in such a short window would be nearly impossible without AI assistance.

Naureen Ali, US Head of Fraud at TransUnion, describes a common bust-out scheme: fraudsters first open small credit lines at local banks, then larger ones at institutional lenders. They submit deepfake driver’s licenses as physical ID verification, then max out the cards and accounts. Ali puts annual global fraud losses at more than $534 billion, though she doesn’t break out the AI-driven share.

Bruemmer sums it up simply: AI makes attacks faster, more sophisticated, and more visually convincing. Phishing emails are now nearly impossible for most recipients to spot. Tamas Kadar, CEO of fraud prevention firm SEON, says fraudsters can now build complete phishing websites without writing a single line of code.

The experts Bloomberg spoke with agree that the best defense against this wave is AI itself. TransUnion uses automated liveness checks to detect AI-generated selfies, while SEON analyzes transactions using proprietary risk scores. For individuals, the standard advice still applies: credit freezes, multi-factor authentication, passkeys, and avoiding public Wi-Fi without a VPN.

