Welcome to another edition of Neural Notes, where I look at some of the most interesting AI news of the week. In this edition: How AI was utilised during the US election, particularly on the big day itself as people searched for information on the outcome. And what could this mean for the Australian federal election in 2025?
Both sides of the aisle employed AI during this US election cycle – from using it for data analysis (such as voter sentiment) to microtargeted campaigning through message crafting and ad creation.
As predicted, it was also used for deepfakes and the spread of misinformation. Back in February, a slew of robocalls in New Hampshire included an audio deepfake of President Joe Biden in which the faux president discouraged people from voting. This prompted the FCC to ban robocalls with AI-generated voices, and the consultant who created the calls was indicted.
Perhaps one of the most notable uses was when Donald Trump shared AI-generated images of Taylor Swift and Swifties endorsing his campaign.
But what about access to real-time information?
In the two years since ChatGPT was released, the consistent sentiment has been that chatbots aren’t advanced enough to provide up-to-date information. On top of that, there’s the issue of hallucinations.
But that didn’t stop people from utilising popular generative AI platforms for Election Day information. And it certainly provided a revealing look at how these platforms performed when it came to real-time, accurate election information.
US Election Day: Winners, losers and neutral players
Winner: Perplexity AI
Perplexity AI, co-founded in 2022 by a former OpenAI researcher, has been making waves this year.
The platform acts as an AI-powered search engine that offers answers to users with source citations. While OpenAI has started doing something similar, Perplexity beat it to the punch and its sourcing tends to be more thorough.
From a platform perspective, it provides large, visual breakouts of its sources (for example, news articles) and also suggests relevant follow-up questions and “pro search” queries.
In January 2024 the startup raised US$73.6 million at a US$520 million valuation in a round that included participation from Jeff Bezos and Nvidia.
By April, it had raised an additional US$62.7 million and its valuation had ballooned to more than US$1 billion.
As of November, the company is said to be finalising yet another funding round, targeting a US$9 billion valuation.
Considering its focus on sourcing and “real-time” information, it’s not particularly surprising that it came out on top for Election Day information.
In the lead-up to the day the platform launched a dedicated Election Information Hub. This included real-time election insights, live maps powered by sources such as the Associated Press, and a swing-state tracker.
Perplexity’s well-designed, well-sourced and frequently updated hub earned it both the attention and trust of users.
Even post-election day, it returned more thorough results for basic questions such as “who won the 2024 election”.
Loser: Grok
Elon Musk’s AI chatbot, Grok, developed by his company xAI, was launched in November 2023 as a competitor to platforms like OpenAI’s ChatGPT.
Musk was one of the original founders of OpenAI but has since been vocal about his disapproval of the company’s shift towards a for-profit business model.
During the election Grok struggled to maintain neutrality and copped criticism for its handling of political content.
In its “fun mode”, Grok’s responses appeared to favour content supportive of Donald Trump. This bias was evident in Grok’s prioritisation of trending Trump-related posts. Musk was also openly supportive of President-elect Trump and has indicated that he may have a place in his administration.
In the lead-up to the election, the platform also came under fire for spreading misinformation about ballot deadlines. It was only after officials from New Mexico, Washington, Minnesota, Michigan and Pennsylvania sent a complaint to Musk that Grok was updated.
The fix took around 10 days, by which time the misinformation was estimated to have already been seen by millions of people.
Subsequently, when queried with election questions, the chatbot responded: “For accurate and up-to-date information about the 2024 US Elections, please visit Vote.gov.”
Neutral: OpenAI, Microsoft Copilot and Google Gemini
These platforms took a cautious approach, opting out of providing real-time election updates.
OpenAI’s ChatGPT directed users asking election questions to the likes of the Associated Press and vote.org – even after Donald Trump was declared the victor.
Google Gemini restricted responses to election questions and redirected users to traditional Google search results. Similarly, Microsoft Copilot steered clear of providing direct election updates and ferried users to external, trusted sources.
What role might AI play in Australia’s 2025 federal election?
The US election proved to be a testing ground for AI’s potential role in democracy, and also provided some valuable insight for Australia as we gear up for our own federal election in 2025.
With a smaller, more targeted voter base, Australia is unique when it comes to AI’s role in shaping election information.
A platform like Perplexity could be a valuable asset for real-time election information in Australia, especially if it sources reliable local data, such as from the Australian Electoral Commission.
But transparency, neutrality and trust are key here – and the latter in particular is hard to earn while generative AI remains so prone to mistakes, hallucinations and misinformation.
Considering the need for careful checks to ensure a platform isn’t spreading inaccurate or biased data, it’s fair to argue that a simple Google search is still the best approach for election information, despite the drawcard of having a plethora of information delivered to a single tab.
Redirecting election-related queries to trusted sources might be a smart way for generative AI platforms to ensure accuracy and avoid AI-driven errors. In this way, chatbots can support democratic processes without becoming a liability – and receiving public backlash.
The surge of AI-manipulated content in the US election also highlights a real risk for Australia. While our elections aren’t as big or spectacle-driven as those in the US, misinformation and deepfakes can still spread quickly.
There are also currently no laws against making deepfake videos in Australia.
Earlier this year, Senator David Pocock highlighted these risks by posting deepfakes of both Prime Minister Anthony Albanese and Opposition Leader Peter Dutton announcing a ban on gambling ads.
“That video is fake and there are currently no laws against making videos like that and I’m concerned we’re not seeing the urgency required to protect our democracy from generative AI,” Pocock said at the time.
The Australian Electoral Commission has also said that it expects AI-generated misinformation to have an impact on the federal election — and that it doesn’t have the resources to stop it.
We’re still a ways off from the federal election, and AI technology is moving so quickly that the landscape may look very different in just a few months’ time. But what will hopefully remain the same is the expectation that AI platforms prioritise trust, transparency and reliability if they are to play any part in the democratic process.