Trump’s AI paradox: Normalizing political deceit | Opinion

The op-ed below does not necessarily reflect the views of the University Daily Kansan and its members.

Earlier this month, OpenAI released Sora 2, an AI model that creates some of the most realistic fake videos I’ve ever seen. Watching it create short clips of people, perfectly capturing their face, voice and emotions, scared me because of the reality-warping powers of artificial intelligence. 

American lawmakers, however, seem to lack any level of concern for the negative implications of AI. There is not a single piece of comprehensive federal legislation regulating the development or use of AI models. However, this doesn’t mean our leaders are ignoring this technology. 

The issue that could prove to be even more dangerous than legislative silence is the way the Trump administration is normalizing the use of artificial intelligence in political messaging. What used to be taboo has, in typical Trump fashion, become one of the president’s favorite tools. 

In September, Trump posted a deepfake video on his Truth Social page of Senate Minority Leader Chuck Schumer and House Minority Leader Hakeem Jeffries. An artificial imitation of Schumer’s voice says, “If we give all these illegal aliens health care, we might be able to get them on our side so they can vote for us.”

In the video, Jeffries stands to the left wearing a mustache and a sombrero.

Although this specific example is difficult to take seriously, AI has been used to spread misinformation that is much more subtle and believable. For example, before the Democratic primary elections in 2024, voters in New Hampshire received calls in which an AI impersonation of Joe Biden’s voice told them not to vote in the election. 

Months later, Donald Trump posted AI pictures of Taylor Swift and her supporters wearing “Swifties for Trump” shirts, falsely implying that the singer was endorsing his campaign. 

Trump’s normalization of what can only be described as an advanced tool of deceit is incredibly dangerous for American politics. As we lose the ability to distinguish reality from artificial lies, one of our most important defenses against misinformation is evaluating the source of any given message. 

When trusted leaders like the president of the United States intentionally post fake content, it undermines our ability to count on well-known sources to post real information. This behavior also opens the door for any future official or political candidate to incorporate blatant AI falsehoods into their messaging. 

In a society where misinformation travels six times faster than the truth, such an ability poses a serious threat to democratic elections that depend on well-informed voters. As AI advances faster than our laws can keep pace, the threat only grows. 

Even months ago, it was easy to laugh at the thought of people being fooled by AI-generated slop. Only someone’s technologically illiterate grandparent would ever fall for a deepfake video riddled with fuzz and wonky physics. But that is no longer the case. Even the first version of Sora, released last year, which now looks crude compared to Sora 2, made videos that many Americans struggled to identify as fake. 

As important as it is, the surface-level danger of AI in politics has been clear for years. Hopefully, we all know that it can be used to create fake images, videos, audio and social media accounts to mislead voters. 

A new problem is emerging as AI becomes more capable and Americans become more skeptical of all content they encounter: real evidence can be falsely shrugged off as artificial. 

It’s already happened.

Last month, after a video emerged of someone throwing multiple items out of a White House window, Trump falsely dismissed the video as fake.

“If something happens that’s really bad, maybe I’ll just have to blame AI,” he said. 

This mindset is dangerous because it undermines political accountability. Trump knows his supporters will believe anything he says. Research shows that Americans will endorse misinformation that affirms their political beliefs even when they know it’s false. Trump can therefore do whatever he wants and blame it on AI if he ever gets called out.

This isn’t a problem exclusive to Trump; he’s just the one who opened Pandora’s box. Confirmation bias is incredibly powerful, and it will allow future politicians to maintain support even after being exposed for wrongdoing, simply by claiming the evidence is AI-generated. 

Trusting everything you see online will get you fooled by AI, but blanket skepticism will lead you to brush aside real information whenever you don’t want to believe it. 

If everything can be faked, then everything can be denied. Believe what you will, but politics will never be the same.

Mason Renner (masonrenner@ku.edu) is a freshman from Kansas City, MO, studying journalism with a minor in political science. His interests are politics, economics, and current events. He enjoys reading, writing, doing research, and sharing that passion with others.

This article was edited by Opinion Editor Arien Roman-Rojas. If the information in this article needs to be corrected, please contact arienroman@ku.edu. We want to hear from you!

The University Daily Kansan accepts Letters to the Editor as an open forum for individuals to voice their concerns, opinions and thoughts in our Opinion section. If you are interested in sharing a written piece, find more information about our guidelines here and send your article to editor@kansan.com.
