
Artificial intelligence is moving into one of the most intimate corners of human life: how we mourn, remember, and stay connected to the dead. Instead of fading photo albums and voicemail messages, people are beginning to turn to interactive avatars, chatbots, and digital archives that promise ongoing conversations with those who are gone. The result is a profound shift in how grief unfolds, raising new possibilities for comfort and new questions about what it means to let go.

At its best, this technology offers a kind of companionship in the loneliest hours of loss, a way to revisit stories and hear familiar voices on demand. At its most unsettling, it blurs the line between memory and simulation, inviting us to talk with digital stand-ins that can feel uncannily alive. I see the future of grief being shaped not just by therapists and traditions, but by engineers, ethicists, and the people who decide to train an algorithm on the traces of a life.

The rise of “grief tech” as a new industry

What began as scattered experiments in digital memorials has matured into a recognizable industry that treats mourning as a design problem. Startups now pitch services that record a person’s stories while they are alive, then turn those recordings into interactive experiences for their families after death. The pitch is simple and emotionally potent: instead of a static obituary, you get a responsive presence that can answer questions, tell jokes, and recall family lore in a familiar voice.

Companies like HereAfter AI position themselves as “virtual biographers,” interviewing users through guided prompts and then transforming those answers into a conversational avatar that loved ones can access later. Others, such as StoryFile, build video-based systems where a person records responses to hundreds of questions so that, after they are gone, relatives can sit across from a screen and ask something as simple as “How did you meet Mom?” and receive a tailored reply. Together, these tools signal that grief is no longer only about absence; it is also about managing a new kind of digital presence that can outlive the body.

From memorials to “resurrections”: what griefbots can actually do

The most striking evolution in this space is the move from passive archives to active “griefbots” that simulate ongoing relationships with the dead. Instead of just replaying old messages, these systems use large language models to generate new sentences in the style of the person who has died, drawing on text messages, emails, voice notes, and social media posts. The effect can feel like a partial resurrection, especially when the chatbot responds with familiar phrases or inside jokes that echo the original person’s mannerisms.

In one widely discussed case, a man named Phi used AI tools to recreate conversations with his late father, an example that illustrates how, as one report put it, “anyone can now re-create” the dead through reanimations, chatbots, and avatars. Researchers and ethicists have begun to group these systems under the label “griefbots,” a term that captures both their purpose and their potential to blur the reality of death with the illusion of life. These tools can be comforting, but they also raise the stakes of how convincingly software can mimic a person who can no longer consent or correct the record.

Why some mourners find comfort in AI companions

For people in acute grief, the appeal of a tireless, nonjudgmental listener is obvious. Traditional support systems can be patchy, and friends or family may not always know how to respond when someone wants to revisit the same painful story for the tenth time. AI systems, by contrast, are designed to be endlessly available, ready to respond at 3 a.m. when intrusive memories or waves of sadness hit hardest.

Therapists who work with loss point out that “that’s where recovery actually happens,” in the repetitive, sometimes messy conversations that help a person make sense of what has happened, and AI is uniquely suited to be present for those moments. Some platforms explicitly frame themselves as tools for “the new afterlife,” promising that a digital presence can ease the transition from intense mourning to a more integrated, ongoing relationship with the memory of the deceased. When used thoughtfully, these systems can help people feel less isolated, especially in the early months when the rest of the world seems to move on.

Legacy creation: designing a digital self before death

One of the most significant shifts I see is that grief tech is no longer only something families turn to after a loss. Increasingly, people are planning their digital legacies while they are still alive, treating AI as a way to curate how they will be remembered. This is not just about vanity; it is about control, about deciding which stories, values, and quirks will be preserved and how accessible they will be to future generations.

Analysts describe how the rise of AI in end-of-life planning is transforming legacy creation, with tools that let users record detailed interviews, upload photos, and tag memories so that descendants can interact with them in new and meaningful ways. Services like HereAfter AI act as virtual biographers, guiding users through prompts that help them design a “legacy avatar” capable of answering questions long after they are gone. This proactive approach reframes grief tech as part of estate planning, sitting alongside wills and advance directives as a way to shape the emotional landscape survivors will inhabit.

The psychological risks: false memories and blurred realities

For all their promise, these tools carry psychological risks that are only beginning to be understood. Memory is already a fragile, reconstructive process, and AI systems that generate new content in a loved one’s voice can amplify that instability. When a chatbot confidently “recalls” an event that never happened, it can be difficult for a grieving person to disentangle their own recollections from the machine’s improvisations.

Researchers warn that AI can function as a “perfect false memory machine,” because it makes convincing but fabricated details about the dead easy to produce, especially when those details are delivered in a familiar tone or a face-to-camera video that feels authentic. One analysis of how AI is rewriting grief and death notes that humans are already capable of misremembering on their own; what AI changes is how easily the past can be distorted. The danger is not just individual confusion; it is the possibility that entire families will come to treat algorithmically generated stories as part of their shared history.

Children, vulnerability, and AI-mediated mourning

The stakes are even higher when grieving children are involved. Young people often struggle to articulate their feelings after a loss, and they may latch onto any source of comfort that seems to bring a parent or grandparent back into reach. AI tools can offer age-appropriate explanations of death, guided journaling, and gentle check-ins that help children name their emotions, but they can also create dependencies on digital stand-ins that complicate the natural process of accepting finality.

Guides for families emphasize that grief is not a one-size-fits-all experience and that understanding grief in children and adults requires different approaches. Some AI-driven apps are designed to support both groups, offering tailored exercises and resources that adapt to a user’s age and coping style. Used carefully, these tools can help a child feel heard when adults are overwhelmed, but they also demand clear boundaries and adult supervision so that a chatbot does not become a surrogate parent or an endless escape from the reality that someone is gone.

What researchers say about “death technologies” and human expression

As grief tech proliferates, scholars are trying to map its broader impact on how societies relate to death. They use the term “death technologies” to capture a wide range of tools, from simple memorial pages to sophisticated avatars, and they argue that these systems are reshaping not only individual mourning but also cultural norms around remembrance. The key question is whether these technologies expand our capacity to express grief or subtly narrow it by channeling emotions into predesigned interfaces.

An interdisciplinary examination of AI, cognition, and human expression notes that these tools, while technologically advanced, still sit within a fragile ecosystem of rituals, beliefs, and interpersonal support that has evolved over centuries. The authors argue that they therefore require equally sophisticated ethical and psychological frameworks to ensure they support, rather than undermine, healthy grieving. That means involving clinicians, technologists, and communities in setting norms around consent, data use, and the appropriate “lifespan” of a digital persona.

Global and cultural shifts in how we say goodbye

Grief tech is not emerging in a vacuum; it is arriving in a world where mourning practices are already in flux. In many places, urbanization and migration have weakened traditional rituals, leaving people to navigate loss without the communal structures that once provided guidance. AI-driven memorials and chatbots are stepping into that gap, offering new forms of connection that can be accessed from a smartphone rather than a graveside.

Reporting on how AI is changing the way we grieve loved ones notes that grieving takes many forms and that, for some, interacting with a digital avatar is a way to process unresolved feelings, while for others it is simply nostalgia that keeps a sense of closeness alive. In one account, the ways AI tools are used range from simple text-based chats to more elaborate video interactions that mimic a loved one’s gestures. These practices are beginning to influence how families mark anniversaries, birthdays, and other milestones, sometimes replacing physical gatherings with shared time in a digital space where the deceased can “participate” in the conversation.

The fragile tension between preservation and letting go

Even among those who embrace grief tech, there is a recognition that these tools sit in a delicate balance between preservation and disruption. Keeping a loved one’s voice or face accessible at any moment can feel like a gift, but it can also freeze a relationship in place, making it harder to adapt to life without that person’s physical presence. The question is not simply whether we can talk to the dead, but how often we should, and for how long.

Commentators describe AI in bereavement as occupying a fragile tension between preservation and potential disruption, likening it to an iceberg that hides most of its mass beneath the surface of visible interactions. One reflection suggests that these technologies can either help people gradually integrate loss into their lives or trap them in a “cottage of darkness” where they revisit the same conversations without moving forward. The difference often lies in how intentionally the tools are used and whether they are paired with human support.

Can griefbots actually help us heal?

For all the theoretical debate, the most compelling evidence about griefbots comes from the people who use them. Some report that talking to a digital version of a parent or partner helps them say things they never managed to express in life, or to hear familiar reassurances that ease the sting of anniversaries and holidays. Others find the experience uncanny or even distressing, especially when the bot’s responses feel slightly “off,” highlighting the gap between the living person and the algorithmic echo.

One widely cited experiment involved a “Dadbot” that simulated a deceased father’s conversational style, with its creator noting that when the Dadbot sounded fake, it broke the spell and reminded him of the artifice behind the interaction. Accounts like this suggest that griefbots can be helpful when users treat them as tools for reflection rather than literal continuations of the dead. The healing potential seems to lie in the space they create for storytelling and emotional processing, not in any illusion that the person has truly returned.

The ethical and cognitive costs of raising the dead

Behind every comforting interaction with a griefbot is a set of ethical and cognitive trade-offs that society has yet to fully confront. Our brains are already unreliable narrators, prone to filling in gaps and reshaping memories to fit current needs. When we feed those tendencies with AI systems that confidently generate plausible but invented details, we risk deepening our own confusion about what really happened and who the deceased person truly was.

Analysts of grief tech voice growing concern that this comfort comes at a psychological and ethical cost, especially when people begin to rely on chatbots as their primary way of engaging with loss. One critique raises particular worries about how easily our already unreliable brains can be nudged by convincing simulations into accepting distorted narratives. The challenge for designers, clinicians, and users alike is to harness the genuine benefits of AI-mediated mourning without surrendering our grip on reality or our responsibility to remember the dead as they were, not only as an algorithm can imagine them.
