AI‑Generated Deadbots Reach Canadians Without Oversight
Nearly 80% of Canadian workers use AI without oversight; posthumous avatars raise grief, privacy and ethical concerns.

TL;DR
Most Canadian workers use AI tools without institutional oversight, and the rise of AI-driven "deadbots" is creating new grief-related and ethical dilemmas.
Context
Artificial intelligence can now recreate deceased individuals by stitching together social-media posts, photos, audio and video. These digital avatars, sometimes called "deadbots" or "griefbots", can converse, remember and even appear in virtual reality. The technology blurs the line between memory and simulation, prompting questions about consent, privacy and mental health.
Key Facts
- A recent survey shows that 78% of Canadian workers use AI tools without any formal oversight from employers or regulators.
- The documentary *Meeting You* follows Jang Nayeon's mother, who allowed a posthumous avatar of her daughter to be created after Nayeon's death in 2016. The live-streamed reunion unsettled viewers, highlighting the discomfort of public mourning.
- Only a small fraction of deadbot deployments occur under specialist supervision, such as therapists using them for grief counselling. Most users interact with these avatars independently, increasing the risk of unhealthy dependence.
- Research on grief outcomes distinguishes correlation from causation: exposure to unsupervised avatars correlates with higher reports of prolonged grief, but this does not prove the technology causes it.
What It Means
The unchecked proliferation of AI in the workplace means many Canadians lack guidance on responsible use, including the handling of the personal data that fuels deadbots. Without clear consent mechanisms, digital traces left on public platforms can be harvested to build avatars of the deceased, potentially violating the autonomy of both the departed and their families.

For users, the allure of "talking" to a lost loved one may provide short-term comfort but can also delay healthy grieving. Pathological grief, characterized by persistent, intense distress lasting beyond a year, has been linked to unsupervised interaction with these avatars.

Policymakers and employers should consider mandatory oversight frameworks that address data privacy, consent and mental-health safeguards. Health professionals might develop protocols for therapeutic use, ensuring that any avatar-based intervention is evidence-based and monitored.
Looking Ahead
Watch for upcoming regulations on AI-generated posthumous content and for clinical trials testing therapist-led deadbot interventions for grief support.