Image Credit: Web Summit - CC BY 2.0/Wiki Commons

Warnings about artificial intelligence used to sound like science fiction. Now they are shaping conversations in the Vatican, in Silicon Valley boardrooms, and in Washington, as researchers argue that advanced systems could destabilize economies and even threaten humanity if left unchecked. One of the loudest voices in that debate is a physicist who has spent the past decade trying to pull religious leaders, tech billionaires, and governments into the same urgent conversation about AI safety.

Rather than relying on private letters to the Pope or Elon Musk, he has exerted influence through public campaigns, open letters, and a steady push to make existential risk part of mainstream policy. I trace how that strategy emerged, what it has achieved so far, and why it still collides with political resistance even as AI systems grow more capable.

The physicist who turned AI risk into a public cause

The central figure in this story is Max Tegmark, a Swedish-American physicist who built his academic reputation in cosmology before turning his attention to the future of intelligence. I see his trajectory as a case study in how a scientist can move from abstract theory to direct political advocacy, arguing that the same analytical tools used to model the universe should be applied to the risks posed by powerful algorithms. That shift has made him a bridge between technical researchers, tech executives, and policymakers who often talk past one another when they discuss AI.

Tegmark now works as an MIT professor and president of the Future of Life Institute, a nonprofit organization that works to reduce existential risks facing humanity. In that dual role he has argued that AI should be treated alongside nuclear weapons and climate change as a technology that could reshape civilization for better or worse. His academic credentials give him access to researchers building the systems, while his advocacy work pushes him into rooms with political leaders and industry figures who control how those systems are deployed.

From cosmic questions to existential risk

What makes Tegmark unusual is not just that he worries about AI, but that he frames it as part of a broader question about the long-term survival of intelligent life. In his view, the same curiosity that drives cosmologists to ask why the universe exists should drive societies to ask whether their technologies are compatible with a flourishing future. That perspective turns AI safety from a narrow technical problem into a civilizational one, which is precisely why he has tried to enlist voices far beyond the usual tech policy circles.

In interviews he has described how a kind of “put up or shut up” moment pushed him from theoretical discussions into concrete projects, leading him to help build an institution that could work full time on these issues. As president of the Future of Life Institute, he has backed research on AI alignment, organized conferences that bring together computer scientists and philosophers, and supported campaigns on other existential risks such as nuclear war. The throughline is a belief that humanity should treat its own survival as a scientific and moral priority, not an afterthought.

How Future of Life turned AI safety into a movement

From my perspective, the Future of Life Institute has functioned as Tegmark’s main lever for influencing the global AI debate. Rather than relying on quiet lobbying, the group has specialized in highly visible interventions that force powerful actors to respond, whether they like it or not. That strategy has included public letters, policy proposals, and collaborations with high-profile scientists and entrepreneurs who can command attention in their own right.

The institute’s mission is explicitly focused on reducing existential risks, and it treats advanced AI as one of the most pressing of those threats. Through Future of Life, Tegmark and his colleagues have funded technical work on making AI systems more robust and transparent, while also pressing for governance frameworks that would slow or halt deployments they see as reckless. The organization’s campaigns are designed to reach not only engineers and regulators but also cultural and moral authorities, from religious leaders to tech magnates, who can shift public norms around what counts as responsible innovation.

The 2015 open letter that rallied Hawking and Musk

The turning point in Tegmark’s public profile came when he helped organize an open letter on artificial intelligence that framed superhuman AI as both an opportunity and a danger. I see that document as a blueprint for his broader strategy: use a concise, accessible statement to crystallize expert concern, then attach the names of people whose reputations make it impossible to ignore. The letter did not claim that catastrophe was inevitable, but it insisted that serious research on safety and control had to keep pace with rapid progress in capabilities.

By 2014, according to the background of that open letter, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the view that superhuman AI could pose a threat to humanity. Their signatures, alongside those of leading machine learning researchers, signaled that concern about AI risk was no longer confined to fringe theorists. The letter, made public on January 12, 2015, called for research to ensure AI systems remain beneficial, and it helped move the phrase “AI safety” from obscure academic papers into mainstream coverage and policy discussions.

Why Tegmark wants global rules, not just corporate promises

As AI systems have grown more capable, Tegmark has argued that voluntary commitments from companies are not enough. In my reading of his work, he sees a structural problem: firms are locked in a race to deploy increasingly powerful models, and even well-intentioned executives will struggle to slow down if their competitors do not. That logic pushes him toward international agreements that would bind both governments and corporations, much as arms control treaties limit nuclear weapons.

In a profile that listed him among influential AI figures, Tegmark warned that the U.S. government currently opposes an international treaty that would force adversaries to adopt safety standards, even as he expects AI systems to match or exceed human performance on many tasks within a few years. He framed this as a dangerous mismatch between the speed of technical progress and the pace of political response, arguing that without binding rules, the world risks sliding into an unstable competition over increasingly autonomous systems. His call for such a treaty, described in that TIME profile, underscores his belief that AI safety must be treated as a matter of national and international security, not just corporate ethics.

Courting moral authority: why the Vatican and tech titans matter

Although the available sources do not document Tegmark sending specific letters to the Pope or to Elon Musk, his broader strategy clearly aims to enlist both moral and technological authority in the AI safety cause. From my vantage point, appealing to religious leaders such as the Pope serves a distinct purpose: it reframes AI not only as a technical or economic issue, but as a question about human dignity, justice, and the value of life. When faith communities debate automation, surveillance, and algorithmic bias, they bring ethical frameworks that can challenge purely profit-driven narratives.

Engaging tech magnates like Elon Musk serves a different but complementary role. Musk’s early public statements about AI risk, highlighted in the background to the 2015 open letter, helped legitimize concerns that might otherwise have been dismissed as speculative. By aligning his campaigns with figures who build and fund cutting-edge systems, Tegmark has tried to show that calls for caution are coming from inside the innovation ecosystem, not just from outside critics. That mix of moral and technical endorsement is central to his effort to shift public expectations about what responsible AI development should look like.

Inside Tegmark’s advocacy playbook

Looking across Tegmark’s work, I see a consistent pattern in how he tries to influence the AI agenda. First, he identifies a concrete risk, such as autonomous weapons or unaligned superintelligence. Then he convenes a coalition that spans disciplines and ideologies, from machine learning researchers to philosophers and entrepreneurs. Finally, he distills their shared concerns into public statements or campaigns that are simple enough to travel widely but specific enough to guide policy and research priorities.

His role as an MIT professor and president of a risk-focused nonprofit gives him both credibility and institutional backing for this approach. Through the Future of Life Institute he has supported grants for technical work on AI alignment, while also pushing for political measures such as moratoriums on certain applications and stronger oversight of large-scale training runs. The combination of academic research, public campaigning, and coalition building is what has allowed a physicist to shape debates that now reach from the Vatican to the headquarters of major AI labs.

The limits of influence in a rapidly accelerating race

For all of Tegmark’s visibility, the impact of his campaigns remains constrained by geopolitical and commercial realities. AI development is now a strategic priority for major powers, and companies are investing billions of dollars in models that can write code, generate images, and analyze data at scale. In that environment, calls for strict limits or international treaties can sound, to some policymakers, like unilateral disarmament. Tegmark’s argument is that the opposite is true, and that failing to coordinate on safety will leave everyone more vulnerable.

The tension is visible in his criticism of the U.S. government’s reluctance to pursue binding global rules, as described in the TIME profile. While he expects AI systems to reach human-level performance on many tasks within a few years, he sees political institutions still debating basic questions about transparency, liability, and control. That gap between capability and governance is what drives his insistence on treating AI safety as an existential issue, one that demands attention from presidents, popes, and platform owners alike, even if the precise channels of communication are not always visible in the public record.
