(The Center Square) — Maine lawmakers are considering a proposal that would require disclaimers when political operatives use artificial intelligence tools to create deceptive “deepfake” ads.
The proposal, which goes before the Legislature’s Committee on Veterans and Legal Affairs on Friday, wouldn’t ban the use of AI by political candidates and groups, but would require a disclosure statement saying the communication “contains audio, video and/or images that have been manipulated or altered.”
The bill’s primary sponsor, Rep. Amy Kuhn, D-Falmouth, said at least 24 states have enacted laws banning deepfake content in campaigns and elections. She said many of those laws have passed with “strong or even unanimous bipartisan support because they have the potential to impact all of us, and the institutions — like free and fair elections — that we all rely on.”
“Without regulation, deepfakes are likely to further exacerbate voter confusion and a loss of confidence in elections,” she wrote in testimony in support of the bill. “It is critical that the public maintains trust in our elections or else we risk losing their participation.”
Broadcasting companies have raised concerns about provisions of the bill, arguing that it would hold them responsible for content they don’t create, and they have asked lawmakers to amend the measure to reduce their liability.
“Providers like Comcast, who distribute video content and sell advertising, typically do not have any role in creating the content and advertising inventory that they sell, nor are they able to review the content and advertising and discern in every instance when artificial intelligence has been used,” Chris Hodgdon, Comcast’s vice president of government affairs in Maine, said in testimony.
A deepfake is a computer-generated manipulation of a person’s voice or likeness using machine learning to create visual and audio content that appears to be real. The technology was born out of the AI revolution and is being used to generate fake imagery for anything from “revenge porn” to political mudslinging.
Secretary of State Shenna Bellows, a Democrat, said her office hasn’t taken a position on the bill but noted that, with AI tools increasingly accessible to political campaigns and “rogue actors,” misinformation has been on the rise in recent election cycles.
“One frustration that we hear over and over from voters, especially as campaign advertising periods pick up, is an inability to tell what is true and what isn’t,” she said in recent testimony. “The increased availability of artificial intelligence to create realistic-seeming photos and videos quickly and cheaply will only supercharge these fears.”
In August, the Federal Election Commission voted to begin the process of regulating AI-generated deepfakes in political ads ahead of the 2024 election. The panel held a 60-day public comment period but has yet to adopt any new regulations.
The FEC has also been feuding with the Federal Communications Commission over the FCC’s efforts to require AI disclosures for on-air broadcast ads, arguing that the FCC lacks the jurisdiction to impose them. The FCC maintains that it does.
A 2024 report issued by the Congressional Research Service, a public policy research arm of Congress, warned that rogue states or foreign adversaries could also generate deepfakes to meddle in the upcoming presidential election.