How the Global Engagement Center hopes to fight deepfakes abroad

As the Global Engagement Center fights for funding, a set of documents shows what staffers at the State Department’s outfit for combating foreign disinformation might hope to do in the fight against AI-modified or AI-generated misleading images.

The documents, obtained by FedScoop via a public records request, show that the GEC has custom-built artificial intelligence to detect these kinds of images, including an algorithm that one presentation says identifies “which accounts are using AI-synthesized images of people.” Still, the GEC has not found significant use of deepfakes in its research, one undated presentation notes. Another presentation, dated February 2023, noted that deepfakes weren’t a current concern, but should be revisited in the next six months or so. 

The State Department declined to comment on the documents. 

In the short term, the agency hoped to design a system for detecting photoshopped images, in addition to exploring the idea of a “meme detection” model to help understand a corpus of images it might be analyzing. Eventually, the GEC hopes to build a detector for images created with Stable Diffusion, a generative AI image model, as well as a detector for determining when AI has been used within a full-motion video, according to the documents.

A related proposal discussed building a model that would indicate the likelihood of a photo being photoshopped, while another floated the possibility of automating reverse image searches. 

“Synthetic images could be deployed in information campaigns with specific, targeted goals,” one presentation noted. “The main guardrails are developers’ self-imposed input.”

The documents also include studies of the use of deepfakes related to political discourse in Nicaragua, Russia, and Cuba, which FedScoop has made available here. 

Notably, the State Department recently announced an interagency task force focused on coordinating with other countries on AI-manufactured synthetic content. It’s not clear where that work — as well as the anti-deepfake work revealed by these documents — might go under a Trump administration. Some Republicans are eager to see the elimination of the GEC, which will lose funding at the end of the year without Congressional action.

The GEC reports, mostly from 2022 and 2023, seemed on par with the kind of work being done at the time, said Siwei Lyu, a professor at the University at Buffalo who studies deepfakes. However, the research would be “more or less obsolete by today’s standards, both in terms of academic research or commercial efforts,” he said. 

“The general methodology still works, i.e., developing deepfake detectors and apply them in a proactive approach,” Lyu told FedScoop. “The challenges are whether the detection methods can keep up with the advancement of deepfake generation and can scale up to handle the increasing number of incidents.”

Since the documents were created, the use of audio deepfakes has grown, Lyu noted. Diffusion-based image generation has also fixed issues seen in generative adversarial network-based models, he added. 

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies.

Previously she was a reporter at Vox’s tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications.

You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
