Does your family have a safe word? Why experts say you need one now

It is a terrifying new level of fraud that turns your family’s voices against you, and experts in the Capital Region warn that artificial intelligence is making it easier than ever.

According to new data from the Federal Trade Commission, Americans reported losing $12.5 billion to fraud in 2024, with imposter scams ranking as the number one most-reported category.

The premise is simple but chilling: a parent receives a frantic call from a child claiming to be in jail, or a spouse begging for help after a car accident. The voice sounds real, sobbing and hyperventilating, matching the tone of the loved one perfectly.

However, according to cybersecurity experts at the University at Albany, that voice is likely a deepfake, created by AI using just seconds of audio found online or stolen over the phone.

Tim Fake is a visiting assistant professor of cybersecurity at UAlbany. He acknowledges the irony of his last name, given that he spent time with us demonstrating just how easy it is to manufacture a false reality.

Fake says the technology has evolved rapidly. While early voice cloning required hours of audio to train an algorithm, today’s tools need only a short clip.

“With one of the clips, you can actually hear a person’s breathing patterns,” Fake said. “So it actually is getting really good.”

Fake warns that scammers are now using a technique where they call victims and remain silent, waiting for the victim to say, “Hello?” or “Is anyone there?”

“They’ll record that snippet, then they’ll train the algorithm,” Fake said. That single word can sometimes capture enough vocal data to create a convincing clone.

While phone calls are one method, social media is an even bigger goldmine. George Berg, an associate professor of cybersecurity at UAlbany, warns that generative AI is automating the process of finding victims.

Instead of a human scammer manually searching for targets, AI bots can now scrape TikTok, Instagram, and Facebook profiles to find video clips of your children or family members.

“You can get the gen AI to go out, find a person with a lot of stuff on TikTok, scrape their voice, scrape their video… so you can automate it,” Berg said.

The primary weapon scammers use is not technology; it’s urgency. They simulate a crisis to make you panic, bypassing your critical thinking.

Professor Berg advises using a strategy called “Take Nine.”

“Before you do anything, before you click on a link… you literally take nine,” Berg said. “Anything can wait nine seconds.”

Taking those nine seconds gives you time to compose yourself and verify the story.

Experts say the most effective defense against AI voice cloning is low-tech: a conversation with your family. Berg suggests establishing a “safe word” or a specific family phrase that an AI bot wouldn’t know.

“Maybe a quote from a family movie that everyone loves,” Berg suggested. “Just something that if your grandson calls, you can verify that it’s him by the quote from Moana or something.”

If you receive a suspicious call from a loved one demanding money:

Hang up immediately.

Call the person back on their known, saved phone number.

Ask for the safe word. If the caller gets angry or can’t say it, it’s a scam.
