The personalization of social media has expanded the reach and power of misinformation.
The model was popularized by the surge of TikTok and its “For You” page — an infinite stream of content tailored to users’ interests, inferred from browsing histories, engagement and location — and platforms like Instagram, YouTube and X have since created their own versions. Instagram started showing recommended posts on its main feed in 2018, and in 2020 implemented Reels, a TikTok-esque short-form video feature. YouTube introduced Shorts, a similar concept, in the same year, and X added a “For You” tab of its own in early 2024.
These developments have transformed how users consume content on social media, said Sejin Paik, product manager at TrueMedia.org. “It doesn’t matter who you follow, you’re going to get content through the agency of what their system thinks,” she said.
False information exists alongside factual content in this digital environment, giving cover to deepfakes — hyper-realistic images or videos artificially manipulated to show someone doing or saying something they never did or said. In the run-up to the 2024 U.S. election, deepfake videos depicting speeches that were never given, pictures of Donald Trump’s Secret Service bodyguards smiling after he was shot in July, and screenshots of news articles pushing election misinformation appeared alongside legitimate news, blurring the lines between what is real and what is not.
As generative AI technologies develop and become easier to use and more accessible, gauging the authenticity of social media posts will only become more difficult. An AI detection tool created by TrueMedia intends to help by identifying signs of manipulated pictures and videos posted on social media.
Deepfakes and disinformation
Artificial intelligence expert Oren Etzioni founded TrueMedia in January 2024, motivated by concerns he had surrounding AI’s impact during an election year. A nonprofit organization of researchers, engineers and social scientists, TrueMedia aims to create technology that addresses societal problems — what Paik calls “sociotechnology.”
As these technologies have become publicly available, artificially generated content has proliferated as a tool for political manipulation, and journalists fear its impact will only grow as the technology improves.
The “For You” page model gives this more sophisticated misinformation a wider reach, said Paik. Posts gain traction by taking advantage of the algorithms that decide what is popular, regardless of the accounts behind them. Information presented in users’ feeds generally conforms to their interests and beliefs, and the content displayed — real or not — is personalized to farm likes and reshares that expand the networks they touch.
Deepfakes have tremendous potential in this environment. They can depict anything from Pope Francis in designer clothes to entire fake newscasts, and their use is increasing exponentially: over 500,000 deepfakes were shared in 2023. However prevalent the content already is, journalists say that the AI revolution is only beginning.
Detecting deepfakes
Journalists can use TrueMedia’s flagship deepfake detector to identify whether a video or image was created with AI.
The tool is simple: users submit a social media link to the detector, which runs the content through a series of AI detection models created by partner technology companies to determine the percentage likelihood that the content is artificially generated.
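The pipeline described above — submit a link, run the media through several partner detectors, and report a combined percentage likelihood — can be sketched roughly as follows. All names, scores and the averaging rule here are illustrative assumptions; the article does not describe TrueMedia's actual architecture or how it weights its partners' models.

```python
# Hypothetical sketch of an ensemble-style deepfake detection pipeline.
# Each "partner detector" is modeled as a function returning a probability
# (0.0-1.0) that the submitted media is AI-generated.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DetectorResult:
    name: str
    score: float  # probability the media is AI-generated, 0.0-1.0


def run_detectors(media_url: str,
                  detectors: List[Callable[[str], DetectorResult]]) -> List[DetectorResult]:
    """Run every partner detector against the submitted media link."""
    return [detect(media_url) for detect in detectors]


def aggregate(results: List[DetectorResult]) -> float:
    """Combine individual scores into one percentage likelihood.

    A simple average is used here for illustration; a real system might
    weight detectors by their track record on different media types.
    """
    return round(100 * sum(r.score for r in results) / len(results), 1)


# Stand-in detectors returning fixed scores, purely for demonstration.
def face_manipulation_model(url: str) -> DetectorResult:
    return DetectorResult("face-manipulation", 0.92)


def generation_artifact_model(url: str) -> DetectorResult:
    return DetectorResult("generation-artifacts", 0.78)


results = run_detectors("https://example.com/post/123",
                        [face_manipulation_model, generation_artifact_model])
likelihood = aggregate(results)
print(f"{likelihood}% likely AI-generated")  # averages the two stand-in scores
```

The design choice worth noting is the ensemble: no single detector catches every manipulation technique, so combining several models' scores, as the article says TrueMedia's partners' tools are combined, hedges against each model's blind spots.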
The tool is not able to detect all false content, Paik cautioned. For instance, it struggles to detect “cheapfakes” — misleading photos or videos created by humans using non-AI editing software. Misinformation spreaders have also started to create workarounds, like layering deepfakes over real media, to circumvent the detection process.
Ultimately, as the power of AI grows, so will the tools that detect AI-generated content. “We’re far away from being able to hit 100% of the time, but this is one of those very smart ways to get closer,” said Paik. “If people are creating AI deepfakes, we’re going to use AI to combat that.”
Pairing detection with journalism
As the flood of faux content inevitably continues on social media, journalists must not rely solely on detection to combat deepfakes, Paik urged — they must explore the misinformation’s sources, reasoning and impact.
For example, false AI-generated posts depicting flooded and destroyed communities proliferated on social media feeds after recent hurricanes in the U.S. Some who reposted such pictures and videos, including politicians, knew they were fake; even so, the posts evoked emotional responses and were used to push inaccurate claims about the government’s response to the catastrophes.
Most importantly, journalists must think about why these inaccurate posts trend, Paik said, and work to counteract those narratives beyond just fact-checking a video’s accuracy.
“Saying, ‘Oh, we detected something!’ isn’t good enough,” she said. “Journalists have the power to inform and educate the public. We need to do that.”
Amritha R Warrier & AI4Media / Better Images of AI / tic tac toe / Licensed by CC-BY 4.0.