Deepfake videos impersonating people have become far more convincing. Now they can bring the dead into courtrooms to testify before judges. Precisely that occurred in an Arizona court in April 2025.
An Arizona family used artificial intelligence to recreate the “live” video and voice of their deceased loved one, allowing him to deliver his own victim impact statement in court, according to ABC 15 TV.
Chris Pelkey, a 37-year-old devoutly religious Army veteran, was fatally shot in an apparent road-rage incident in November 2021. Chandler, Arizona, police say that following a traffic dispute, Pelkey exited his car at a stoplight. Gabriel Horcasitas, 54, shot and killed him as Pelkey approached.
Horcasitas was found guilty of manslaughter and endangerment in a 2023 trial. At the sentencing hearing, an AI-generated deepfake video was played in which Pelkey discussed the effects of his death on himself and others.
Consider: Why would someone want a deepfake video of a homicide victim to speak to the sentencing judge about how the killing deprived him of a life, and about its effects upon the surviving family and others? The family and the prosecutor would be seeking a harsher penalty. They’re not appealing to cold logic and intellectual reasoning. They’re trying to move the judge’s feelings and emotions by making the judge virtually face, and hear from, the very person who was killed.
That tactic worked in the Arizona case: no surprise – the judge sentenced Horcasitas to one year more than the prosecutors’ recommended 9.5-year prison term. But the tactic should never be allowed in court – ever.
High emotions lead to higher penalties
Research has shown that victim impact statements tend to move mock juries to impose more severe sentences. One study showed that when a crime victim was still alive and could testify to the crime’s impact upon him or her, mock juries tended to side with the victim and against the convicted criminal when deciding punishment.
In this case, the tactic amounted to a calculated emotional deception. Reportedly, the AI impersonation looked and sounded like Pelkey. His sister and brother-in-law fed video and audio recordings of him into an AI system to create an audiovisual incarnation that could be presented as speaking the thoughts and feelings Pelkey might have expressed were he still alive.
Emotional appeals are usually disfavored
All relevant and reliable evidence is admissible in a trial unless excluded by a rule. But a direct appeal to a jury’s or judge’s emotions is ordinarily not allowed when the court is determining guilt. Rule 403 of the Federal Rules of Evidence (FRE) authorizes a judge to
exclude relevant evidence if its probative value is substantially outweighed by a danger of one or more of the following: unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.
Indeed, judges instruct juries to decide the facts only from evidence in the trial. Jurors are typically directed not to be influenced by “any person’s race, color, religious beliefs, national ancestry, sexual orientation, gender identity, gender, or economic circumstances,” nor by “personal likes or dislikes, sympathy, prejudice, fear, public opinion, or biases.”
But notice: Victim impact statements encourage the judge or jury to impose a penalty based on their own personal reactions and feelings after hearing from the families and others. What gives?
How victim impact statements came to be accepted
The Supreme Court’s Payne v. Tennessee (1991) decision opened the way for states to decide whether victim impact statements (VIS) would be allowed in sentencing hearings, and what kind of information they could include. Today, most states allow victims’ families to make all manner of emotional appeals to the court to increase the convict’s penalty. FRE 403 and similar state rules restrict the content little, if at all.
Individual judges hearing specific cases can issue rulings that try to limit the broadest and most prejudicial victim impact testimony. But in practice, victims’ testimony cannot avoid being an emotional appeal. Ultimately, the Payne decision allows judges and juries to decide penalties based upon what they feel is fair. The outcomes need not be principled in any objective sense.
The American legal system aims to penalize convicted persons fairly in proportion to the type of crime and the harms caused. VIS work only toward increasing penalties. They do provide a deeper and broader sense of whom the crime harmed and how much. But does that always yield justice?
The AI deepfake in the Arizona case injected a computer-generated VIS into the sentencing. Does an AI VIS engender fairness, or merely stimulate sad and angry emotions?
Unequal penalties due to unequal concerns
Considering VIS information at sentencing creates an invisible injustice by treating convicted people vastly unequally. If a well-connected, perhaps wealthier and more widely appreciated person is murdered, the family can marshal plenty of VIS, and now an AI deepfake video as well, all aimed at angering the judge or jury and increasing the perpetrator’s punishment.
In stark contrast, the unknown stranger, homeless person or prostitute who is murdered likely has no available family, no VIS submissions and no access to deepfake video. The judge or jury is that much less likely to care about the deaths of these comparatively invisible human beings; their killers will likely not receive the enhanced penalties that flow from VIS-driven grief, outrage and anger.
Notably, the leading Supreme Court decision before Payne was Booth v. Maryland (1987). The Booth decision fully credited a murder victim’s family’s “grief and anger,” and recognized “that jurors generally are aware of these feelings.” But Booth saw the downside of unconstrained VIS:
[T]he formal presentation of this information by the State can serve no other purpose than to inflame the jury and divert it from deciding the case on the relevant evidence concerning the crime and the defendant. … any decision to impose the death sentence must be, and appear to be, based on reason rather than caprice or emotion.
Overruling Booth, the Payne precedent opened the courtroom to emotion more than ever before. Despite courts’ long-standing efforts to confine decisions to sober facts, VIS testimony and documents have made injecting strong emotions into sentencing decisions a common practice.
An AI deepfake isn’t a competent witness
AI-produced VIS testimony doesn’t come from a person and thus isn’t from a competent witness. Family members can testify about their feelings; the deepfake has no feelings. A judge or jury empathizing with a deepfake yields Twilight Zone results.
Bringing the crime victim back from the dead to deliver a personal VIS through a lifelike deepfake video ratchets the reign of emotion over reason in courtroom decisions higher still. More than ever, feelings become the key facts. I, for one, oppose the use of human-impersonation deepfakes anywhere, but especially in the courtroom, where another human being’s fate, liberty and life will dangle like a yo-yo in AI’s hands.