You’ve probably seen the word “deepfakes” in the news lately, but are you confident you would be able to spot the difference between real and artificial intelligence-generated content?
During the summer, a video of Vice President Kamala Harris saying that she was “the ultimate diversity hire” and “knew nothing about running the country” circulated on social media. Elon Musk, the owner of X, retweeted it. This was, in fact, a deepfake video.
By posting it, Musk seemingly ignored X’s own misinformation policies and shared it with his 193 million followers.
Although the Federal Communications Commission announced in February that AI-generated audio clips in robocalls are illegal, deepfakes on social media and in campaign advertisements are not yet subject to a federal ban.
A growing number of state legislatures have begun introducing bills to regulate deepfakes as concerns about the spread of misinformation and explicit content heighten on both sides of the aisle.
In September, with fewer than 50 days before the election, California Gov. Gavin Newsom signed three bills that target deepfakes directly — one of which took effect immediately.
AB 2839 bans individuals and groups “from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content.”
The ban applies from 120 days before an election until 60 days after it, aiming to reduce content that may spread misinformation as votes are being counted and certified.
“Signing AB 2839 into law is a significant step in continuing to protect the integrity of our democratic process. With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally altered content that can interfere with the election,” said Gail Pellerin, the chair of the Assembly Elections Committee.
According to Public Citizen, 25 states have now either enacted a law that addresses political deepfakes or have a bill awaiting the governor’s signature.
Do you know how to spot a deepfake?
According to cyber news reporter and cybersecurity expert Kerry Tomlinson, “a deepfake is a computer-created image or voice or video of a person, either a person who doesn’t exist but seems real, or a person who does exist, making them do or say something they never actually did or said.”
Tomlinson says there are several giveaways to identify a deepfake.
- Objects and parts of the face, such as earrings, teeth or glasses, may not be fully formed.
- Pay attention to the breathing. A deepfake speaker may take no breaths while speaking.
- Ask yourself: Is the message potentially harmful or manipulative?
- Can the information be verified?
Ultimately, Tomlinson encourages people to “learn about how attackers are using deepfakes. Learn about how politicians and political parties are using deepfakes. Read about it. It’s as simple as that.”