Digital Ghosts: How AI Deepfakes of the Deceased Are Challenging Consent, History, and the Future of Grief

The rapid rise of AI deepfakes now extends to the deceased, creating “digital ghosts” that reanimate historical figures and lost loved ones. This technological leap, while offering poignant connections to the past, raises profound ethical questions about consent beyond the grave, the potential for historical misinformation, and how society will navigate the complex future of grief and digital legacies.

The advent of generative artificial intelligence has brought with it an astonishing capability: the power to reanimate the past. From flickering black-and-white photos given new life to entirely fabricated video scenarios, AI deepfakes are creating “digital ghosts” of those who have died. This innovation, while fascinating, is igniting urgent conversations across tech communities about the moral implications, the integrity of history, and the very nature of human grief.

Companies like OpenAI have implemented guardrails in tools such as Sora for the likenesses of living individuals, who can control whether and how their likeness appears in AI-generated videos. However, protections around the digital presence of the deceased often appear to be an afterthought, leaving their families and historical records vulnerable to manipulation and disrespect. As these technologies become more accessible, the challenge is no longer a niche concern but a pervasive societal dilemma.

The Allure and Anxiety of Reanimating History

The desire to bring the past vividly to life is not new. From historical reenactments of the Civil War to filmmaker Peter Jackson’s painstaking restoration and colorization of World War I footage in “They Shall Not Grow Old,” humans have long sought to connect with previous generations. These efforts were typically expensive and time-consuming, demanding significant resources and careful curation.

Deepfake technology, however, democratizes this process. What once required a professional film crew or years of research can now be achieved with widely available tools, animating old photographs or crafting convincing fake videos from scratch. For instance, in 2021, the Israel Defense Forces collaborated with a synthetic video company to reanimate photos from the 1948 Arab-Israeli War, allowing young soldiers in old pictures to blink and smile, creating an uncanny, Harry Potter-esque encounter with history, as reported in The Conversation.

While such applications can foster empathy and an intergenerational connection—echoing the 18th-century statesman Edmund Burke’s view of society as a “partnership not only between those who are living, but between those who are living, those who are dead and those who are to be born”—they also carry immense risks. The most obvious is the wholesale fabrication of history. Imagined events can have real-world consequences: mythologized accounts of the 14th-century Battle of Kosovo have fueled anti-Muslim sentiment, and the second Gulf of Tonkin attack, which never actually occurred, helped escalate U.S. involvement in Vietnam.

Beyond intentional falsehoods, experts worry about the subtler consequences, such as the atrophying of imagination. If AI-generated depictions become the default “stand-ins” for historical events, might viewers gain a false impression of knowing “exactly what happened,” thus obviating the need for deeper historical inquiry? As Nir Eisikovits, director of UMass Boston’s Applied Ethics Center, observes, technology often makes life easier but also causes existing skills to deteriorate, potentially dulling our capacity for ordinary judgments and belief in human rights.

Grief Bots and the Digital Afterlife Industry

The reanimation of the deceased isn’t confined to historical figures; it’s rapidly extending to personal grief. Companies like Silicon Intelligence and Super Brain now offer “grief bots” or “dead bots” that use generative AI to sift through personal data—text, photos, audio, video—to create interactive digital replicas of loved ones who have passed away. This technology creates an “illusion that a dead person is still alive,” challenging our understanding of death itself, according to Katarzyna Nowaczyk-Basińska, a researcher at the University of Cambridge, in an interview with Science News.
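The internals of these commercial products are proprietary, but the pattern the article describes—sifting a person’s archived messages for relevant material and conditioning a generative model on it—can be sketched as a simple retrieval step. Everything below (the sample archive, the bag-of-words scoring, the persona prompt format) is a hypothetical illustration of that pattern, not any vendor’s actual pipeline.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def build_persona_context(archive, query, k=2):
    """Rank archived messages by cosine similarity to the query
    (over bag-of-words counts) and format the top k as context
    that a downstream language model would be prompted with."""
    q = Counter(tokenize(query))
    q_norm = math.sqrt(sum(v * v for v in q.values()))

    def cosine(msg):
        m = Counter(tokenize(msg))
        dot = sum(q[w] * m[w] for w in q)
        norm = q_norm * math.sqrt(sum(v * v for v in m.values()))
        return dot / norm if norm else 0.0

    ranked = sorted(archive, key=cosine, reverse=True)[:k]
    lines = ["You are imitating a person based on their own messages:"]
    lines += [f"- {m}" for m in ranked]
    lines.append(f"Reply to: {query}")
    return "\n".join(lines)

# Hypothetical message archive, for illustration only.
archive = [
    "Call me when you land, sweetheart.",
    "The garden needs watering on Sundays.",
    "My secret is a pinch of nutmeg in the sauce.",
]
prompt = build_persona_context(archive, "What was your pasta sauce trick?")
```

Even this toy version makes the ethical stakes concrete: the “replica” is only a statistical remix of whatever data was left behind, with no mechanism for the deceased to consent to, correct, or withdraw it.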

The concept, once a chilling plotline in the 2013 Black Mirror episode “Be Right Back,” is now a reality. Nowaczyk-Basińska and technology ethicist Tomasz Hollanek explored the risks of this “digital immortality” in their paper published in Philosophy & Technology. Their research delves into several troubling scenarios:

  • Harm to Children: Imagine a terminally ill parent leaving a grief bot for their eight-year-old child. Experts warn about the unknown psychological impact on children, who may not fully grasp the bot’s artificial nature.
  • Lack of Consent for Survivors: A person secretly commits to a 20-year subscription for a bot of themselves to comfort their adult children. After the funeral, the children receive invitations to interact with the bot. Do they have the right to decline this form of grieving?
  • Commercial Exploitation: Grief bots could become new avenues for “sneaky product placement,” as demonstrated by a scenario where a grandmother’s recipe bot recommends a food delivery service instead of providing the actual recipe. This encroaches upon the dignity of the deceased and disrespects their memory.

These examples highlight that without proper safeguards and ethical considerations, the burgeoning “digital afterlife industry” risks exploiting vulnerable individuals during their most difficult moments.

The Unsettling Reality: Manipulating Legacies and Spreading Misinformation

The reach of deepfake technology extends to manipulating the legacies of public figures and deceased celebrities. Zelda Williams, daughter of the late Robin Williams, has vocally opposed AI-generated videos of her father, calling them “over-processed imitations” that are “disrespectful” and “degrading,” as she shared on Instagram. Similarly, Bernice King, daughter of Martin Luther King Jr., and the daughter of comedian George Carlin have also spoken out against deepfakes of their fathers, whose images and speeches have been manipulated on platforms like Sora.

Legal challenges abound. As attorney Adam Streisand, who has represented celebrity estates like Marilyn Monroe’s, notes, while California courts have long protected celebrities from unauthorized likeness reproductions, the sheer volume and ease of AI generation create a “5th dimensional game of whack-a-mole” for legal processes.

The problem is exacerbated by AI “hallucinations”—instances where AI generates untrue or unexpected results not backed by real-world data. This can lead to horrifying outcomes, such as deepfakes of child murder victims circulating on platforms like TikTok, often with incorrect details about their age, race, or circumstances. These emotionally charged, fabricated videos, as seen with depictions of Royalty Marie Floyd, are designed to trigger strong reactions for clicks and likes, causing immense emotional distress to victims’ families and muddying the waters of factual reporting.

Ultimately, such pervasive use of deepfakes risks destabilizing the very idea of a historical “event” and eroding trust in all forms of media. As Liam Mayes, a lecturer at Rice University, suggests, we might see “trust in all sorts of media establishments and institutions erode” if people can no longer discern real from fabricated content.

The Quest for Safeguards and Ethical Frameworks

Recognizing these growing concerns, technology companies are grappling with how to regulate these powerful tools. OpenAI, for example, has stated that for recently deceased public figures, authorized representatives or estate owners “can request that their likeness not be used in Sora cameos.” OpenAI CEO Sam Altman has also indicated the company’s intent to “give rightsholders more granular control over generation of characters,” according to an OpenAI system card.

However, the rapid pace of AI development means policies often lag behind capabilities. To combat the proliferation of deepfakes, several technical safeguards are being explored:

  • Invisible Signals and Watermarks: OpenAI implements invisible signals, visible watermarks, and metadata to identify Sora-created content. However, experts like Sid Srinivasan of Harvard University note that visible watermarks are easily removable by determined actors.
  • AI Detecting AI: Companies like Reality Defender use AI to identify AI outputs, recognizing patterns that humans cannot. McAfee’s Scam Detector similarly analyzes audio for AI fingerprints.
  • Platform Collaboration: Wenting Zheng of Carnegie Mellon University suggests that OpenAI should share its detection tools with social media platforms to assist in identifying AI-generated content automatically.
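The specific signals OpenAI embeds are not public, but the general idea of an invisible watermark—and why determined actors can strip one—can be illustrated with a toy least-significant-bit scheme. This is a deliberately simplified sketch of the concept, not OpenAI’s method.

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least significant bit of each pixel value.
    `pixels` is a flat list of 0-255 ints (e.g. a grayscale image)."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # clear LSB, then set it to the bit
    return out

def extract_bits(pixels, n):
    """Read back the first n hidden bits."""
    return "".join(str(p & 1) for p in pixels[:n])

mark = "1011001"  # stands in for an encoded provenance tag
image = [200, 13, 97, 54, 180, 33, 250, 127]

stamped = embed_bits(image, mark)
recovered = extract_bits(stamped, len(mark))

# A lossy re-encode (here: coarse quantization) silently wipes the mark,
# which is why robust provenance schemes pair watermarks with signed
# metadata rather than relying on pixel tricks alone.
degraded = [(p // 4) * 4 for p in stamped]
```

The fragility shown in the last step is exactly the weakness Srinivasan points to: any transformation that re-encodes the content can destroy a mark that lives only in the pixels.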

Despite these efforts, challenges remain. The quality of deepfakes continues to improve, making detection increasingly difficult. Language barriers also persist, with AI tools being vastly more capable in widely used languages like English, Spanish, or Mandarin. The urgency for comprehensive regulations and safeguards is palpable, as Nowaczyk-Basińska emphasizes, stating that it is “risky to let commercial entities decide how our digital death and digital immortality should be shaped.”

The rise of AI deepfakes of the deceased presents a profound dichotomy: an exciting new way to connect with the past and an alarming potential to distort it, commodify grief, and disrespect individual legacies. For the technology community, this isn’t just a technical challenge but a deep ethical one that reshapes our understanding of consent, historical truth, and the very meaning of death.

As poet W.H. Auden noted, humans are “always adroiter with objects than lives,” often inventing things without fully comprehending their long-term impact. The capacity to imagine the “underside of technical achievements” is crucial now more than ever. It demands a collective effort to slow down, engage in thoughtful dialogue, and build robust ethical frameworks and legal protections that can evolve as quickly as the technology itself. Only then can we hope to harness the possibilities of AI while safeguarding the dignity of the dead and the integrity of our shared history.
