More political deepfakes exist than you think, according to this AI expert

TrueMedia evaluating a piece of content. Image: TrueMedia.org

How prevalent are political deepfakes? Most relatively informed citizens can recall major instances of synthetic political content, such as the January robocall to New Hampshire voters that appeared to feature President Joe Biden but turned out to be an AI-synthesized imitation of his voice. Because there is no authoritative statistic on artificial intelligence (AI) deepfakes, and because only a high-profile few are widely reported on, a skeptic might conclude they're not common.

But, according to one AI scholar, it’s more likely that AI deepfakes are on the rise in advance of the US presidential election in November — you just don’t see many of them.

Also: 80% of people think deepfakes will impact elections. Here’s how to prepare

“I would take an even-odds bet of a thousand dollars that we are going to see an unprecedented set of these [deepfakes]” come November, “because it’s become so much easier to make them,” said Oren Etzioni, founder of the non-profit organization TrueMedia, in an interview with ZDNET last month.

“I would not encourage you to take that bet, because I have more information than you do,” he continued with a laugh. 

TrueMedia runs servers that combine multiple AI-based classifiers for a single purpose: determining whether an image is a deepfake. The organization is backed by Uber co-founder Garrett Camp’s charitable foundation, Camp.org.

When a user uploads an image, TrueMedia returns one of three labels: “uncertain,” in a yellow bubble; “highly suspicious,” in red; or “authentic,” in green, when the AI models are sufficiently confident it’s not a deepfake. You can see a demo and sign up for beta access to the program here.
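As a rough illustration of how such a three-way label could be derived from a detector's output, here is a minimal sketch in Python; the thresholds are hypothetical guesses, not TrueMedia's actual cutoffs.

```python
# Hypothetical sketch: mapping a detector's confidence score to
# TrueMedia-style labels. Thresholds below are illustrative only.

def label_from_confidence(p_fake: float) -> str:
    """Map a model's probability-of-fake estimate to a display label."""
    if p_fake >= 0.80:           # strong evidence of manipulation
        return "highly suspicious"   # red
    if p_fake <= 0.20:           # strong evidence the media is genuine
        return "authentic"           # green
    return "uncertain"               # yellow: the models hedge or disagree


print(label_from_confidence(0.92))  # -> highly suspicious
print(label_from_confidence(0.05))  # -> authentic
```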

Etzioni, a professor at the University of Washington, was also the founding chief executive of the Allen Institute for AI, which has done extensive work on detecting AI-generated material.

Oren Etzioni. Image: TrueMedia.org

Etzioni thinks it’s correct to say that there “isn’t an enormous amount” of deepfakes circulating publicly at the moment. However, he added that much of what’s actually out there probably goes unnoticed by the general public.

“Do you really know what’s happening in Telegram?” he pointed out, referring to the private messaging service.

Also: As AI agents spread, so do the risks, scholars say

The founder said TrueMedia is seeing evidence that deepfake creators are ramping up production for later this year, when election season intensifies. “We see trial runs, we see trial balloons, we see people setting things up,” he noted — and not just in the US. 

“This is the year that counts, because we are coming up against a series of elections, and the technology [of deepfakes] has gotten so prevalent,” he explained. “To me, it’s a matter of when, not if, there are attempts to disrupt elections, whether at the national level or at a particular polling station.”

To prepare, TrueMedia has assembled a mix of in-house capabilities and partner infrastructure. The organization runs its own algorithms on potential deepfakes while paying collaborating startups such as Reality Defender and Sensity to run their algorithms in parallel, so the groups can pool efforts and cross-check findings.
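A minimal sketch of that fan-out-and-cross-check pattern is below; the detector functions are placeholders, since the actual Reality Defender and Sensity integrations are vendor API calls whose details TrueMedia has not published.

```python
# Illustrative sketch of querying several detectors in parallel and
# flagging disagreements. All detector outputs here are stand-ins.
from concurrent.futures import ThreadPoolExecutor

def in_house_detector(image_bytes: bytes) -> float:
    return 0.91   # placeholder probability that the image is fake

def vendor_a_detector(image_bytes: bytes) -> float:
    return 0.88   # placeholder for an external vendor's verdict

def vendor_b_detector(image_bytes: bytes) -> float:
    return 0.35   # placeholder: this vendor disagrees

DETECTORS = [in_house_detector, vendor_a_detector, vendor_b_detector]

def cross_check(image_bytes: bytes) -> dict:
    """Query every detector concurrently and flag large disagreements."""
    with ThreadPoolExecutor(max_workers=len(DETECTORS)) as pool:
        scores = list(pool.map(lambda d: d(image_bytes), DETECTORS))
    spread = max(scores) - min(scores)
    return {
        "scores": scores,
        "mean": sum(scores) / len(scores),
        "needs_review": spread > 0.4,   # wide spread -> detectors disagree
    }

print(cross_check(b"..."))
```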

“It really requires a grassroots effort to fight this tsunami of disinformation that we’re seeing the beginnings of,” Etzioni said. “There’s no silver bullet, which means that there’s no single vendor or a model that gets it all.”  

Also: What are Content Credentials? 

To start, TrueMedia tunes a variety of open-source models. “We run classifiers that say yes or no” to each potential deepfake, Etzioni said. Run as an ensemble, the classifiers pool their answers, and the team is seeing over 90% accuracy.
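A toy example of pooling yes/no answers from several classifiers might look like the following; the votes are made up for illustration, since TrueMedia has not published its models or weights.

```python
# Minimal ensemble sketch: pool binary "is this a deepfake?" answers
# from several classifiers via majority vote. Votes are fabricated
# for illustration only.

def pool_votes(votes: list[bool]) -> tuple[bool, float]:
    """Return the majority verdict and the fraction of models that agree."""
    fakes = sum(votes)
    is_fake = fakes > len(votes) / 2
    agreement = max(fakes, len(votes) - fakes) / len(votes)
    return is_fake, agreement

votes = [True, True, True, False, True]   # one dissenting classifier
verdict, agreement = pool_votes(votes)
print(f"deepfake={verdict}, agreement={agreement:.0%}")  # deepfake=True, agreement=80%
```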

“Some of these classifiers have generative models embedded in them, but it’s not the case that we’re just running Transformers,” Etzioni continued, referring to the ubiquitous attention-based architecture that underlies models such as GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama.

TrueMedia is not, for the moment, offering source code for the models, nor publishing technical details or disclosing training data sets. “We are being circumspect about sources and methods, at least right now,” said Etzioni. “We’re a nonprofit, so we’re not attempting to create anything proprietary — the only thing is we’re in an unusual position because we are in an adversarial landscape.”

Etzioni expects further disclosure can happen in time. “We just need to figure out the appropriate structures,” he said.

Also: Google’s VLOGGER AI model can generate video avatars from images – what could go wrong? 

To support running all these models, TrueMedia has enlisted the help of startup OctoAI, which was founded by Etzioni’s friend and colleague at the University of Washington, Luis Ceze.

OctoAI, which cut its teeth improving AI performance across diverse computer chips and systems, runs a cloud service to smooth the work of training models and serving up predictions. Developers who want to run LLMs and the like can upload their model to the service, and OctoAI takes care of the rest.
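For illustration, querying a hosted inference endpoint from the client side generally looks something like the sketch below; the URL, payload shape, and response field are hypothetical placeholders, not OctoAI's actual API, which the article does not describe.

```python
# Hypothetical sketch of calling a hosted deepfake-detection endpoint.
# Endpoint, auth scheme, and response field are placeholders, NOT a
# real OctoAI interface.
import json
import os
import urllib.request

ENDPOINT = "https://example-inference-host.invalid/v1/deepfake-detector"  # placeholder
API_KEY = os.environ.get("INFERENCE_API_KEY", "demo-key")                 # placeholder

def score_image(image_url: str) -> float:
    """Ask the hosted model for a probability that the image is a deepfake."""
    payload = json.dumps({"image_url": image_url}).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["p_fake"]   # assumed response field

# score_image("https://example.com/suspect.jpg")  # left commented: the endpoint is fictional
```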

TrueMedia’s inference needs are “pretty complex, in the sense that we’re both accessing vendor APIs, but also running many of our own models, tuning them,” said Etzioni. “And we have to worry a lot about security because this is a place where you can be targeted by denial-of-service attacks.”
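One standard way to blunt that denial-of-service exposure is to rate-limit requests per client before they ever reach the expensive detection models. The token-bucket sketch below is a generic pattern, not a description of TrueMedia's actual defenses.

```python
# Generic token-bucket rate limiter: reject excess requests cheaply
# before they hit GPU-backed detection models.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec        # tokens replenished per second
        self.capacity = capacity        # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # over the limit: throttle this client

bucket = TokenBucket(rate_per_sec=2, capacity=5)
print([bucket.allow() for _ in range(8)])   # first few pass, the rest are throttled
```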

Also: Serving generative AI just got a lot easier with OctoAI 

OctoAI founder Luis Ceze. Image: OctoAI

TrueMedia and its collaborators are expecting a rising tide of deepfake queries. “Especially as we get closer to the elections, we expect the volume to be pretty high” for performing queries against the models, Ceze said in an interview with ZDNET. 

The coming increase means speed and scale are a concern. “We, as a society in general, want people using it, and want the media to use it and validate its images,” Ceze added. “We want people to not have to wait for too long or maybe lose patience.”

“The last thing we want is to crumble under a denial-of-service attack, or to crumble because we didn’t set up auto-scaling properly,” said Etzioni. According to him, TrueMedia already has thousands of users. He anticipates that, as the year rolls on, “we will have several of the leading media organizations worldwide using our tools, and we’ll easily have tens of thousands of users.”

So how will TrueMedia know if it is having an impact? 

“Will we prevent the election from being swayed?” Etzioni mused. “You know, you can use fancy words like ‘protect the integrity of the election process’; that’s too grandiose for me. I just want to have that tool be available.”

Also: All eyes on cyberdefense as elections enter the generative AI era

For Etzioni, the goal is to create transparency — and conversation — around deepfakes. TrueMedia’s content labels are public, so viewers can share, compare, and contest their findings. That extra check is important: even with more than 90% accuracy, Etzioni admits that TrueMedia’s ensemble isn’t perfect. “The way ChatGPT can’t avoid hallucinations, we can’t avoid making errors,” he said. 

Skeptics will question the kind of influence wielded by a non-profit with no disclosed source code, whose training data sets are not open to public scrutiny. Why should anyone trust the labels that TrueMedia is generating? 

“I think that’s a fair question,” said Etzioni. “We’re only six weeks old, we have more disclosure to do.” 

That said, Etzioni emphasized the openness of TrueMedia’s approach, as compared to other options. “We are an open book in terms of results, unlike a lot of the other tools out there that are available under the hood or for sale,” he said. 

TrueMedia expects to publish an analysis of the current state of deepfakes in the coming weeks. “We’re getting people to think more critically about what they see in media,” he said. 
