Dizzying Deepfakes and Personalized Propaganda: Welcome to the AI Election

I’ve been spending an inordinate amount of time lately staring at AI-created content. I’ve watched endless AI-generated videos of things like Will Smith and Donald Trump eating spaghetti together. I’ve seen AI-created photos of people giving TED Talks who aren’t actually real. I’ve read and listened to AI-produced stories that are going viral on TikTok. All of this content is generated by algorithms that grow more sophisticated by the day, sometimes even by the hour. And for me, consuming it is part professional curiosity, part morbid fascination, and partly just the beat I’ve signed up to cover so I can understand the future and how we’re all going to live in it.

I got a scary glimpse of that future recently when news broke that Trump had accused Kamala Harris of using AI to fake her Michigan rally crowd at the Detroit Metro Airport. When I looked at the picture of the massive crowd on my computer, my first instinct wasn’t skepticism but genuine uncertainty. Leaning forward to get closer to the screen, I really did wonder if the image had been made by an AI.

It didn’t take long for me to realize it wasn’t fake. Fact-checkers and news outlets confirmed the crowd of 15,000 at Detroit Metro Airport. But my initial doubt revealed a troubling side effect of our new, AI-saturated world: Once you start living in the land of AI, your brain starts to question everything—whether you like it or not. You begin to get a creeping suspicion that even the most straightforward images might be fake.

The 2024 election cycle has been a turning point in the use of AI for political manipulation. We witnessed an unsettling parade of digital deceptions not too dissimilar to the post-truth era we experienced in 2016—only this time more confusing, and often more terrifying. In January, for example, AI-generated robocalls using a deepfake of President Joe Biden’s voice targeted New Hampshire voters, falsely urging them to abstain from the Democratic primary—a chilling demonstration of AI’s potential to create convincing yet entirely fabricated audio content. Then there was the constant flood of AI content on social media platforms, including images of Taylor Swift endorsing Trump (she did not endorse him), videos of political rallies with digitally manipulated crowd sizes, and memes depicting political figures in fictional scenarios (like Trump holding a gun while wearing an orange sweater, or Biden doing the same in a wheelchair).

Digitally created content was everywhere: AI-generated audio clips circulating on TikTok that claimed Biden was threatening to attack Texas, an image of Harris at a Communist-style rally, and posts claiming that prominent figures had endorsed candidates they had not. At one point, the White House had to intervene to confirm that the Texas audio was fake. Meanwhile, The New York Times was recently forced to put out a statement saying it did not “publish an article legitimizing a false claim that Vice President Kamala Harris was a member of the Communist Party.”

Even attempts to harness AI for ostensibly legitimate campaign purposes have raised ethical concerns. A super PAC supporting Dean Phillips’s failed presidential bid created an AI-powered interactive bot, built on OpenAI’s technology, designed to engage voters. OpenAI ultimately suspended the creator’s account, citing its policy against using its tools for political campaigns, a move that underscored the complex ethical landscape surrounding AI’s role in political discourse.

As we process the implications of the first true “AI election,” it’s clear that we’re entering uncharted territory. The line between fact and fiction is blurring at an alarming rate. But this election cycle isn’t the worst-case scenario; it’s the harbinger of what’s to come. By the time we get to the 2026 midterms, AI will be so much more advanced that, in the hands of the right (or wrong) people, it’ll be able to generate hyper-realistic video content that could be used to create personalized political narratives tailored to each voter’s psychological profile, drawing on both your biggest fears and your deepest desires.

Indeed, the next wave of AI advancements is poised to reshape future elections in ways that might seem as surreal today as AI video did a decade ago. AI agents, autonomous programs capable of making decisions and interacting with humans in increasingly sophisticated ways, are expected to become the next iteration of this technology. And while they’ll start out innocuous enough—think: AI assistants managing your calendar and emails, AI travel agents that book trips for you and your family, or an AI therapist available 24 hours a day to help with mental health concerns—these agents will obviously (and quite quickly) be used in negative ways, especially during election cycles.

For instance, they might be used to target us individually based on our biomarkers. Sorry, I forgot to mention: AIs will soon have more information about us on a biological level, including our health and behavior. Why? you might ask. Because you’ll give it to them through the apps and programs you’ll engage with, or are already engaging with. When you ask an AI about a medication you’re on, solicit it for dinner recipe ideas, or ask questions about an illness, it learns all of that about you. And the more information these AIs get, the better they’ll understand voter preferences, with unprecedented accuracy. This means political campaigns could tailor messages not just to your voting history but to your physical reactions, measured by changes in heart rate or skin conductance through a camera (as MIT has been able to do in research labs), through your phone or TV as you consume media, or just through the things you type into your computer. (Don’t forget: Every time you write a prompt for an AI, it’s learning something about you.)

Not scared yet? Wait until you see how much more sophisticated deepfake technology becomes. We may soon face a reality where AI-generated videos are indistinguishable from genuine footage, allowing for the creation of synthetic political content that could sway even the most discerning voters. And if you think AI will be able to detect other AI, just look at what’s happened with text in the last year: When ChatGPT debuted in November 2022, AI-detection technology could distinguish what was made by an AI from what was made by a human being with 95% accuracy. But as AI models have grown more sophisticated, the accuracy of these detection tools has fallen to 39.5%. Soon that number will likely plummet to close to zero.

The nightmare scenario for the next election is one where all these technologies essentially meld together like the liquid-metal T-1000 in Terminator 2. We’ll be facing an electoral landscape where AI agents, armed with our biodata and psychological profiles, create hyper-personalized deepfake content in real time, targeted specifically at you. These shape-shifting digital Dementors could adapt their messaging on the fly, morphing from a trusted news anchor into your favorite celebrity, all while tailoring their words to your subconscious desires and fears. They’ll know when you’re most susceptible to persuasion based on the queries you type into your favorite AI—or, by then, speak to. They’ll also know this from your sleep patterns and, of course, your good ol’ browsing history. They’ll probably even be able to predict your voting behavior before you’ve made up your own mind.

We mere humans won’t stand a chance at distinguishing fact from fiction. We’ll be living in a perpetual state of uncertainty, where every piece of political information we encounter could be a carefully crafted illusion designed to manipulate our beliefs and behaviors. And—yes, there’s an and—this isn’t some distant dystopian future. It’s the world we are rapidly hurtling toward, and tens of millions of us have our foot on the gas. If you think you’ll be able to see through AI’s clever ways, take it from me: that fleeting inability to trust my own eyes when I questioned the photo of Kamala Harris’s rally was incredibly unnerving. Soon, that won’t be a momentary lapse—it’ll be our constant reality. And I can assure you, from personal experience, that living in a world where you can’t trust your own perception is as unsettling as it sounds.
