
Can you really talk to the dead using AI? Inside the rise of ‘deathbots’
Artificial intelligence (AI) is beginning to change how we remember, mourn and even communicate with those who have passed away. What once belonged to the world of science fiction or spiritual séances has now entered our everyday technology. The growing “digital afterlife” industry promises to make memories interactive and, in some cases, eternal.
From chatbots that mimic the voices of the dead to lifelike virtual avatars that talk and gesture like them, a new generation of systems known as “deathbots” aims to recreate the presence of loved ones through algorithms and data. But as researchers are discovering, the experience can be as unsettling as it is fascinating.
Recreating presence through data
A study by researchers Eva Nieto McAvoy and Jenny from King’s College London, recently published in the journal Memory, Mind & Media, explored how AI is being used to simulate the personalities and voices of the deceased. As part of their project, the researchers created digital versions of themselves, uploading personal messages, videos and voice notes to see how these AI systems construct a person’s identity.
They found that so-called “deathbots” use a person’s digital traces (emails, social media posts, text messages, photos and recordings) to train machine-learning models capable of reproducing their tone and style of conversation. Some platforms even use voice cloning or 3D avatars so that users can “see” and “hear” their loved ones again.
“Deathbots are technologies of illusion,” explains media theorist Simone Natale, noting that they echo older traditions of trying to communicate with the dead. But AI now makes such illusions far more persuasive and profitable.
How ‘deathbots’ work
At the simplest level, a deathbot is a chatbot that mimics how a particular person speaks.
• Data collection: The system first gathers a person’s digital data: everything from texts and emails to photos and voice clips.
• Training: That data is fed into an AI model, which learns the person’s vocabulary, humour and patterns of expression.
• Simulation: The model begins generating responses that sound like the deceased. Some services add visual avatars or voice synthesis for realism.
• Interaction: Family members or friends can then chat with the deathbot via text or voice, sometimes even in virtual reality.
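The pipeline above can be illustrated with a deliberately minimal sketch. This toy “deathbot” simply retrieves the archived message most similar to a prompt, using bag-of-words cosine similarity; real services instead fine-tune large language models on the same kind of personal data. The class name, the sample archive and the similarity approach are all hypothetical choices for illustration.

```python
from collections import Counter
import math

def tokenize(text):
    # crude tokenizer: lowercase words, trailing punctuation stripped
    return [w.strip(".,!?").lower() for w in text.split()]

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class DeathbotSketch:
    """Toy retrieval 'deathbot': replies with the archived message
    closest to the prompt. A stand-in for the real approach, which
    fine-tunes a language model on a person's digital traces."""

    def __init__(self, messages):
        # Steps 1-2: 'collect' the person's messages and 'train'
        # (here, just index them as word counts)
        self.corpus = [(m, Counter(tokenize(m))) for m in messages]

    def reply(self, prompt):
        # Step 3: simulate by retrieving the closest archived message
        q = Counter(tokenize(prompt))
        best = max(self.corpus, key=lambda pair: cosine(q, pair[1]))
        return best[0]

# Step 4: interaction with the simulated voice
archive = [
    "Don't worry so much, it always works out.",
    "I loved our Sunday walks by the river.",
    "Call me when you get home safe.",
]
bot = DeathbotSketch(archive)
print(bot.reply("I went for a walk by the river today"))
# → I loved our Sunday walks by the river.
```

Even this crude version shows why the researchers found replies repetitive: the bot can only recombine or echo what was already in the archive, never respond to anything genuinely new.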
The researchers found that while these systems can sound convincing at first, their responses soon reveal emotional shallowness. Replies are often repetitive or inappropriately cheerful — sometimes adding emojis or upbeat language in conversations about death.
“The more personalisation we attempted, the more artificial it felt,” the authors observed.
When memory becomes interactive
Traditionally, remembering the dead meant preserving photographs, stories or rituals. AI changes this by turning memory into an interactive conversation.
As digital media scholar Andrew Hoskins has pointed out, memory in the AI age becomes “conversational”, shaped by dialogue between human and machine. Yet the conversations often expose the limits of synthetic intimacy.
Human: You were always so supportive. I miss you.
Deathbot: I’m right here for you, always ready to offer encouragement and support. And I miss you too. Let’s take on today with positivity and strength.
Such exchanges can feel comforting, but they also highlight that the empathy on offer is simulated: generated by code, not emotion.
The business of grief
Behind the promise of digital resurrection lies a clear commercial motive. None of these services are memorial charities; they are tech start-ups seeking profit. Many charge subscription fees or offer “freemium” tiers with paid upgrades for additional features such as lifelike avatars or voice interactions.
Philosophers Carl Öhman and Luciano Floridi describe this as part of the “political economy of death”, in which data continues to generate value even after a person’s life ends.
Companies often encourage users to “capture their story forever,” but this process also harvests emotional and biometric data. Memory becomes a product, something to be designed, sold and updated.
This model fits within what professor Andrew McStay calls the “emotional AI economy,” where technology is used to simulate and monetise human feelings.
Ethical and emotional dilemmas
The rise of digital afterlife services has opened new ethical debates.
Consent and privacy remain the biggest issues. Most people never give permission for their private messages or recordings to be reused after death. Who, then, has the right to create their digital double: family members, companies, or no one at all?
Data ownership is another concern. Once a deathbot is created, who controls it? The family or the company hosting the AI? The question becomes especially sensitive when personal data is used for commercial purposes.
There are also risks of misuse or manipulation. A digital avatar could be hacked or programmed to say things the real person never said, damaging reputations or spreading misinformation.
Then comes the commodification of grief. Offering “afterlife subscriptions” to vulnerable mourners raises serious concerns about emotional exploitation.
Culturally, many societies view death as sacred. Reanimating the dead through AI may conflict with religious or moral beliefs about rest and remembrance.
Emotionally, the consequences can be equally complex. For some, deathbots offer comfort — a chance to say what was left unsaid. For others, they prolong grief by making it harder to accept the finality of death. Psychologists warn that people could become emotionally dependent on these simulations, returning repeatedly to conversations that can never truly heal.
Over time, there is a risk of memory distortion. Because AI versions are built from selective data, they may present an idealised or incomplete picture. Users might end up remembering the AI version of a loved one more vividly than the real person, blurring the boundary between memory and imitation.
The next stage: digital resurrection
As AI technology advances, the concept of “digital resurrection” is likely to become more realistic and more ethically fraught.
Hyper-realistic avatars:
Future deathbots could use advanced deepfake video and voice synthesis to create digital versions that look, move and sound almost indistinguishable from the real person. Combined with virtual or augmented reality, mourners might “meet” their loved ones again in immersive spaces, a powerful but potentially disorienting experience.
Continuous learning:
Next-generation systems could keep evolving after death, drawing on new data such as public records or descendants’ online activity. A digital self might continue to “grow,” creating a version that is part memory, part invention.
Everyday integration:
AI memorials may soon enter daily life as a “digital grandparent” reading bedtime stories, or a “mentor bot” offering advice drawn from a late teacher’s personality. In such a future, memory itself becomes a living presence woven into daily routines.
Legal and social challenges:
Digital resurrection also raises questions about posthumous identity and digital inheritance. Could a digital version own property, post on social media or make legal statements? What happens when different relatives create competing versions of the same person?
A new immortality industry:
Tech companies are already experimenting with “legacy plans” that let people record their data before death, ensuring their continued digital presence. This could turn immortality into a paid service, widening the gap between those who can afford to be remembered and those who cannot.
Rethinking memory and forgetting
As media theorist Wendy Chun observes, digital technology often confuses storage with memory. AI promises perfect recall, but true remembering involves emotion and the ability to forget.
Forgetting plays a vital role in healing. Yet algorithms never forget; they archive and reproduce. This permanence could change how people experience grief. Instead of accepting loss, users might grow accustomed to a permanent presence, summoning the dead whenever they wish.
The King’s College researchers argue that this risks misunderstanding death itself, replacing finality with endless simulation.
Between comfort and control
AI can undoubtedly help preserve stories, voices and family histories. It allows future generations to hear ancestors speak or pass on memories in vivid ways. But it cannot reproduce the depth, empathy or unpredictability of a living relationship.
The digital afterlives encountered by researchers were compelling precisely because they failed; they reminded users that memory is relational, not programmable.
What these technologies ultimately reveal is less about the dead than about the living: our desire to remember, to control, and to resist forgetting.
The road ahead
As “digital resurrection” moves from experiment to industry, societies will have to confront new ethical, legal and spiritual questions. Should everyone have the right to decide what happens to their data after death? Should families be allowed to recreate a loved one digitally without consent? And what does it mean to “let go” when technology makes someone perpetually available?
AI can preserve a person’s voice, face and words, but not their consciousness or soul. As researchers caution, the future of remembrance may be digital, but the meaning of grief and love will always remain deeply human.
