Dean Jackson, a Tech Policy Press fellow, and Zelly Martin, a researcher at the University of Texas at Austin Center for Media Engagement, are among the authors of a report titled Political Machines: Understanding the Role of AI in the U.S. 2024 Elections and Beyond, published this month. The report was the subject of a recent Tech Policy Press podcast that included the authors.
In May, OpenAI rolled out a new model capable of speaking to users naturally and in real time. But in politics, generative artificial intelligence’s (AI’s) greatest impact might come from its ability to listen.
We spent the first months of 2024 interviewing political professionals to find out what they think about generative AI and how they are using it for US elections. Some were skeptical that the new technology would be a game changer or anything more than an efficiency booster for overworked staff. Others were all-in, predicting a Cambrian explosion of political innovation.
What struck us, however, was how much coverage has focused on generative AI’s ability to create public-facing content, compared with its quieter ability to collect, analyze, and summarize voter insights for political use behind the scenes.
Our first hints of this emerged in conversations with Ilya Mouzykantskii, a vendor of generative AI solutions for politics, and Shamaine Daniels, a candidate for public office who used an interactive AI robo-caller named Ashley to reach voters. Bucking the trend of AI doomerism, both were excited about the “amazing” democratizing potential of generative AI. Unlike calls from a traditional human volunteer, every conversation Ashley has with a voter can be recorded and transcribed. Generative AI can then summarize those transcripts, distilling a volume of pages no one could read during the crunch of a campaign into a short list of key takeaways and action items.
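To make the pipeline concrete, here is a minimal sketch of the summarize step, assuming the OpenAI Python SDK. The model choice, prompt, and `summarize_calls` helper are our own illustrations, not a description of how Ashley actually works.

```python
# Minimal sketch of the transcript-summarization step behind a tool like Ashley.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable. `transcripts` is a hypothetical list of call transcripts
# produced by an earlier speech-to-text step; nothing here is drawn from
# Ashley's actual implementation.
from openai import OpenAI

client = OpenAI()

def summarize_calls(transcripts: list[str]) -> str:
    """Distill a batch of voter-call transcripts into takeaways."""
    combined = "\n\n---\n\n".join(transcripts)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You summarize voter phone calls for a campaign. "
                           "Return a short list of key takeaways and action items.",
            },
            {"role": "user", "content": combined},
        ],
    )
    return response.choices[0].message.content
```

Batching all transcripts into one prompt keeps the sketch short; a campaign with thousands of calls would more plausibly summarize in chunks and then summarize the summaries.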
Other interview participants told us that using LLMs to do a “first pass” at data analysis is already becoming common. The paid versions of ChatGPT and other tools allow analysts to upload spreadsheets and other files and query LLMs for insights about them. This could be especially useful for smaller campaigns, which often lack the resources, staff, and technology of their larger counterparts. Daniels told us that in 2023, it cost $15,000 to field a two-question poll. Ashley cost her campaign much less.
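The same “first pass” our interviewees described can also be scripted rather than run through a chat interface. The sketch below is a hypothetical illustration, assuming pandas and the same OpenAI SDK as above; the file name, column contents, and prompt are invented for the example.

```python
# Sketch of an LLM "first pass" over a spreadsheet of poll responses.
# `poll_responses.csv` is a hypothetical export; this is illustrative only.
import pandas as pd
from openai import OpenAI

client = OpenAI()

df = pd.read_csv("poll_responses.csv")
# Send a compact statistical profile rather than the raw file,
# keeping the prompt small.
profile = df.describe(include="all").to_string()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a statistical profile of a poll-response "
                f"spreadsheet:\n{profile}\n\n"
                "As a first pass, what patterns or anomalies should an "
                "analyst examine more closely?"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```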
Daniels said that Ashley had another advantage over traditional pollsters: she can ask open-ended questions and meaningfully analyze the answers, essentially conducting a focus group at scale. Used effectively, this could upend existing campaign dynamics. Campaigns might rely less on consultants advocating untested messaging strategies and instead hold a two-way conversation with the public. “Right now almost all conversations about campaigns and voters are about campaigns throwing out a statement and voters catching it,” Daniels told us. She predicted that if she had had this level of social listening technology in earlier cycles, her platform might have been very different.
Mouzykantskii, the vendor behind Ashley, extended this idea beyond campaigns to governance, inviting us to imagine “the ability to have conversations with your constituents en masse and have those conversations be natural and know that those ongoing conversations are heard and impact policymaking.”
Above, we’ve described some of the potential opportunities generative AI offers for data-driven politics. The dangers also merit discussion.
Daniels’ and Mouzykantskii’s optimism was tempered by concerns about the unregulated use of this technology, concerns shared by many of the political consultants we interviewed. Even Republican interviewees who professed a general wariness of government regulation often said they believe some kind of guardrails are necessary.
The most obvious scenario of political actors misusing a tool like Ashley looks something like the robocalls in New Hampshire that appropriated a synthetic version of US President Joe Biden’s voice to suppress votes in the Democratic primary. Those calls sounded like Biden but were not interactive like Ashley; the future could see misleading, fully interactive chatbots on the phone with prospective voters. To avoid allegations of misrepresentation, Ashley used a robotic voice and disclosed its artificial nature. But Mouzykantskii warned us that many AI chatbots already sound almost the same as a human, and that with near-term improvements “they will sound exactly the same.” If this technology goes mainstream in politics, most of the guardrails against AI-enabled deception will be normative and self-imposed. Most of our interview participants were skeptical that those guardrails would outlive the first real proof of concept for AI-fueled perfidy.
Generative AI’s ability to digitize what have traditionally been analog conversations with voters could provide another step forward in the datafication of politics—which extends back past Barack Obama’s campaign to Karl Rove and earlier strategists. Since the 2016 campaign and the 2018 Cambridge Analytica scandal that followed it, though, observers have focused on the risks of this trend. Generally, they argue that more data means more precise targeting of messages to voters, and thus more persuasive power. Generative AI for social listening provides another rich vein of data for consultants to mine.
It also pushes the trend down from the federal level and into smaller state and local races, which typically do not have the kind of rich data or sophisticated technology available to nationwide operations. (Anyone who has knocked doors for a city council or state legislative race has likely noticed the frequent inaccuracy of canvassing lists and other campaign resources.)
While some of our interview participants made bold predictions, most hedged their bets. Some cautioned that the political industry would adopt AI technologies more slowly than other sectors, because campaigns are ephemeral organizations without permanent staff or dedicated infrastructure for continuous training and learning. We are, in other words, still in the early days of generative AI’s political impact. Rather than rush to conclusions about the arguments above, we encourage readers to sit with several open questions.
First, social listening is not a new technology, nor is political micro-targeting. Academics are still debating the power and significance of these techniques, with studies offering mixed assessments. In interviews, Democratic consultants Mike Nellis and Roy Moskowitz told us that many of today’s social listening and micro-targeting tools are poor and prone to error. Generative AI could improve this situation, but improve it from what baseline?
Robocallers like Ashley also face a very low-tech, human problem: as Democratic consultant Taryn Rosenkranz reminded us, any political communication technique that becomes ubiquitous risks devolving into spam. How many people hang up on interactive robocallers, or don’t answer in the first place? Will that number differ meaningfully from human callers? And for those who do pick up the phone, how will they react to an AI robocaller? Some might find the technology novel at first, then tiresome if it becomes too common. Others might find the idea of talking to a robot unnerving or offensive. Time will tell; for now, Daniels told us, Ashley’s longest conversation with a voter lasted about three minutes.
Future studies should put science ahead of folk theories by looking more closely at how voters respond to these techniques. Researchers do not have to wait until after the 2024 election; in India, the era of AI campaigning is already here. Early observations suggest some potential for deception, but the largest use case appears to be a further meme-ification of politics, blending entertainment and parasocial relationships with national—or even deceased—politicians. “The Indian elections are a signal for how trusted relationships will be forged and destroyed in an era of hyper-realistic, hyper-personalized content, customized in regional languages, and distributed en masse,” wrote Vandinika Shukla for this site.
India’s experience echoes the predictions of Vincent Harris, a Republican consultant who worked on a chatbot version of Miami Mayor Francis Suarez. In an interview, Harris said:
I reject the term ‘false representation.’ It’s a different representation. It’s a digital representation… The media is trying to say this is misleading voters, but voters are not fools. Voters understand the difference between AI and the real Suarez… I can see instances where people are trying to do this from a negative perspective. But think of it from a positive perspective. This tech allows voters, young voters, the next generation, to engage with a digital representation of a candidate.
What India might preview, and what Harris predicts, is a wholesale reinvention of what it means to be “authentic” in politics and how voters relate to individual politicians. That would be a bigger deal than almost any deceptive deepfake.
Ultimately, the most vulnerable individuals likely to be affected by these trends are not voters; they are children. AI chatbots are already being piloted in classrooms. “Children are once again serving as beta testers for a new generation of digital tech, just as they did in the early days of social media,” writes Caroline Mimbs Nyce for The Atlantic. The risks from generative AI outputs are well documented, from hallucinatory responses to search queries to synthetic nonconsensual sexual imagery. Given the rapid normalization of surveillance in education technology, more attention should probably be paid to the inputs such systems collect from kids.
Similar observations apply to adults. Not every AI problem requires an AI-specific policy solution: a federal data privacy law that applied to campaigns and political action committees would go a long way toward regulating generative AI-enabled social listening, and could have been put in place long before that technology became widely accessible. The fake Biden robocalls in New Hampshire similarly invite low-tech responses to high-tech problems: the political consultant behind them is charged not with breaking any law against AI fakery but with violating laws against voter suppression.
In the long term, though, regulation can influence but not control how generative AI affects our culture. Those changes will inevitably flow downstream to affect politics. The nature of these coming shifts is still uncertain. They deserve significantly more attention.