Amid an ongoing legal dispute over the public’s right to hear audio recordings of an interview federal investigators conducted with President Joe Biden last year, the administration is pushing a novel theory to support its position that the tapes should be kept secret: fears that the recordings could be used to create an artificial intelligence (AI) deepfake. The Department of Justice (DOJ) made this argument in a May court filing, and while it is not the only rationale offered for withholding the audio, this line of thinking raises serious concerns about future efforts to shield government records from public disclosure in the age of AI.
The recordings in question relate to Special Counsel Robert Hur’s investigation into Biden’s potential mishandling of classified documents from his time as vice president. Hur ultimately declined to prosecute Biden and issued a final report in February that included a written transcript of the interview. The report’s commentary on Biden’s age and hazy memory during the interview predictably sparked additional interest in the audio recording, along with attempts by major media companies, congressional Republicans, and conservative groups to obtain a copy.
What followed were rejections by the Biden administration, issued in the form of executive privilege claims from the president and the DOJ arguing that the recording qualified for exemptions to the federal Freedom of Information Act (FOIA). Left unsaid in the legal filings are the clear political incentives for Biden to keep the presumably unflattering audio out of the hands of his Republican opponent for use in the 2024 presidential campaign.
For the most part, the administration relies on familiar arguments used by past presidents claiming executive privilege and in debates over public record requests. However, it also introduces a new and problematic justification: that information should qualify for a FOIA exemption based on its potential use to create, or bolster the effectiveness of, an AI deepfake that could harm personal privacy.
The administration is quick to point out that courts in the 1970s and 1990s prevented presidential audio recordings from being publicly released, in part over concerns that the audio could be manipulated. However, technology has changed dramatically over the past 50 years. Historically, manipulating audio recordings required access to the original source. With modern AI technology, convincing false audio and video can be generated with next to nothing as the original input, so withholding the audio has no practical effect on whether a deepfake gets created.
The DOJ concedes this point, noting that an AI audio deepfake of the interview could be generated easily, even without the recording, because both the transcripts and President Biden’s voice already exist in the public domain. Nevertheless, the DOJ claims that withholding the audio is justified—not because it will prevent the creation of a deepfake, but because public knowledge that the true recording is unavailable could make a deepfake version less believable.
The DOJ’s argument is a clear break from the approach federal and state officials have taken to reduce the impact of AI-generated deepfakes and disinformation, particularly as it relates to elections. The regulatory approach thus far has prioritized labeling AI-generated false information and—in a few cases—banning it outright. The DOJ takes this a significant step further, seeking to withhold admittedly true information on the basis that it could be used to make hypothetical false information more believable. As the DOJ put it in a court filing, “…if the audio recording is not released, the Department or others would be much better able to establish the illegitimacy of any malicious deepfake.”
The obvious problem with this approach is that it empowers the government to withhold any audio or video record, on any topic, on the grounds that its release could bolster the impact of a deepfake. Given the already broad FOIA exemptions, it is easy to imagine current or future administrations applying this rationale to prevent the release of other sensitive information under the guise of protecting the public from disinformation.
Overall, the DOJ’s attempt to shield the audio recording of President Biden from public disclosure is problematic and would set a harmful precedent. While there may be legitimate reasons that the recording qualifies for protection, the mere potential for it to be used to create or bolster the effectiveness of an AI deepfake falls well short of the threshold for withholding the truth from the American people.