
The inventor of a 3D-printed “suicide pod” now wants artificial intelligence to decide who is allowed to use it, shifting one of medicine’s hardest judgments from human hands to software. Philip Nitschke, long a lightning rod in the right-to-die movement, is pitching AI screening as a way to make assisted dying more consistent and less bureaucratic, even as critics warn it could turn life-and-death decisions into an opaque algorithmic process. I see his latest proposal as a stress test for how far societies are willing to let AI arbitrate the most intimate human choices.

At the center of the controversy are Sarco, a sleek capsule that promises a peaceful death at the push of a button, and a new generation of devices that add AI “mental tests” and even synchronized deaths for couples. The technology is arriving faster than the ethical and legal frameworks around it, leaving regulators, clinicians, and ethicists scrambling to catch up.

From Sarco pod to AI gatekeeper

Philip Nitschke built his reputation by pushing the boundaries of assisted dying, and his latest move is to push AI into that space as well. The original Sarco capsule, a 3D-printed pod that fills with nitrogen to induce hypoxia, was already marketed as a way for people to end their lives without a doctor present, and Nitschke has framed it as a kind of high-tech autonomy for those who want control over their final moments. In recent interviews, he has argued that an AI system should decide who is eligible to use such a device, replacing traditional psychiatric and medical assessments with automated screening that he believes could be more accessible and less biased. That vision has put him at the center of a new debate over how far AI should reach into end-of-life care, as highlighted in coverage of Philip Nitschke.

Earlier this year, reporting on the Sarco device described how a “controversial assisted dying device” was being upgraded with AI, underscoring Nitschke’s belief that software can shoulder some of the moral and clinical weight that currently falls on doctors and legal panels. The pod has been presented as part of a broader experiment in using automation to streamline the path to assisted death, and Nitschke’s insistence that AI should decide who can end their life marks a sharp escalation: from using technology as a tool to using it as a gatekeeper.

AI “mental tests” and the Double Dutch upgrade

The most concrete expression of this shift is a new AI-powered mental assessment that would run before the pod can be activated. In Switzerland, a version of the device has been described as a Swiss suicide pod that adds an AI mental test to judge whether a user is “fit” to proceed, with the system capable of delaying access for up to 24 hours. Critics quoted in that reporting questioned why AI is needed at all, arguing that the technology risks trivializing or mechanizing what should be a careful, human-led evaluation of mental capacity and informed consent.

Nitschke has also been promoting a new model nicknamed Double Dutch, which he says will integrate the AI software directly into the pod’s workflow. In one account, he explained that “with the new Double Dutch, we’ll have the software incorporated, so you’ll have to do your little test” before the device can be activated, a description that makes clear the AI is not an optional add-on but a mandatory gate. Another report on the same upgrade described how the 64-year-old user whose case drew attention last year helped spur Nitschke to formalize these tests, suggesting that real-world controversies are directly shaping the design of the AI checks.

“Die together” pods and the couples’ dilemma

Alongside the AI screening, Nitschke is promoting a feature that allows couples to die at the same time, a development that raises its own ethical and technical questions. Reporting on the new AI-powered feature described how the controversial inventor has designed the system so that two pods can be synchronized, allowing couples who want to die together to activate their capsules simultaneously. The idea is marketed as a compassionate option for partners facing terminal illness or unbearable suffering, but it also multiplies the complexity of consent, coercion, and timing, especially if one partner’s mental state is less clear-cut than the other’s.

Nitschke has said he has received interest from couples who wish to die together, including at least one pair who contacted him through UK media, and he has framed the AI checks as a way to ensure each person independently passes a mental test before the device can be activated. Coverage of these plans noted that the announcement has reignited debate over whether such devices could still attract criminal charges, even in jurisdictions with permissive assisted dying laws, and that the “die together” concept has put Sarco back in the spotlight as a symbol of how far right-to-die technology might go, as detailed in reports that quoted Nitschke on the renewed scrutiny.

Bioethics: Three scenarios for AI at the end of life

What Nitschke is proposing does not exist in a vacuum, and bioethicists have been sketching out how AI might fit into end-of-life decisions more broadly. One influential analysis laid out three scenarios for AI in end-of-life decisions: AI as a decision-support tool that helps clinicians interpret complex data, AI as a co-decision-maker that shares responsibility, and AI as an autonomous decider that effectively replaces human judgment. Nitschke’s vision of an AI that decides who can enter a suicide pod clearly leans toward that third scenario, where software is not just advising but ruling on eligibility.

In that framework, the key questions are about accountability and error: if an AI system wrongly approves or denies a request for assisted death, who bears responsibility, and how can such mistakes be detected and corrected in time? The same analysis warned that when AI is used in matters as significant as life and death, the opacity of algorithms and the difficulty of auditing their decisions can become a central ethical problem, especially if the system is trained on data that reflects cultural or medical biases. Nitschke’s push to embed AI into devices like Sarco and Double Dutch, and to let it decide who is allowed to proceed, effectively tests the most controversial of those scenarios in real-world practice, rather than leaving it as a thought experiment about end-of-life decisions.

Law, medicine, and the next assisted-dying device

For lawmakers and clinicians, the arrival of AI-equipped suicide pods forces a collision between emerging technology and existing assisted dying frameworks. Traditional laws in places that allow euthanasia or physician-assisted suicide typically require human doctors to assess capacity, confirm diagnoses, and document consent, processes that are slow and heavily regulated. By contrast, Nitschke’s devices are pitched as consumer-facing products that can be activated after an automated mental test, a model that could sidestep established safeguards and leave regulators struggling to decide whether such pods fall under medical, consumer, or entirely new categories of oversight, a tension that has been noted in coverage of the creator of the new assisted dying device.

At the same time, the broader medical community is wrestling with how to integrate AI into end-of-life care in ways that support, rather than replace, human judgment. Some clinicians see potential in using AI to flag patients who might benefit from palliative care earlier, or to help standardize capacity assessments, but they generally stop short of endorsing fully automated decisions about who may die. Nitschke’s insistence that AI should decide who can end their life, and his move to embed that logic into Sarco, Double Dutch, and other new assisted-dying device concepts, pushes the conversation to an extreme that many ethicists regard as an “ethical disaster waiting to happen,” as reflected in critical coverage of Nitschke and his AI-powered suicide chamber.
