Image Credit: European Commission – Photographer: Lukasz Kobus – CC BY 4.0/Wikimedia Commons

Bill Gates is once again trying to shock policymakers out of complacency, this time by arguing that artificial intelligence could turn into the next COVID-scale catastrophe if it supercharges bioterrorism. He is not predicting a specific attack, but he is warning that open access to powerful AI models could let small groups design and deploy biological weapons with a speed and precision that public health systems are not ready to match.

Instead of treating AI as a distant sci‑fi risk, Gates is framing it as a near-term security challenge that intersects directly with the hard lessons of the pandemic. He is urging governments to treat AI-enabled biothreats with the same seriousness that nuclear proliferation eventually received, while still harnessing the technology’s potential to transform medicine, education and the broader economy.

From pandemic prophet to AI Cassandra

When Bill Gates talks about global health threats, he is drawing on a track record that predates the current AI boom. In his widely cited 2015 TED talk, Gates argued that the world was not prepared for a major infectious disease outbreak, and he has since said that if that largely ignored warning had been heeded, COVID‑19 might have unfolded very differently. That history is central to how he now frames AI: not as an abstract philosophical puzzle, but as a concrete accelerant for the same kinds of biological risks he has been funding and studying for years.

Gates has long used his role as a billionaire philanthropist to push governments toward better pandemic preparedness and to highlight the danger of engineered pathogens, and he is now extending that advocacy to the digital tools that could make such engineering easier. In recent comments he has argued that the world faces a new class of threat in which non-state actors, rather than governments, could use widely available AI systems to design biological agents, a concern he has tied explicitly to the possibility that AI could be used as a bioterrorism tool. For Gates, the lesson of COVID‑19 is not only that pandemics are devastating, but that ignoring early warnings about systemic risk is itself a form of negligence.

How AI lowers the bar for bioterror

The core of Gates’s alarm is that artificial intelligence compresses the expertise and time needed to design a biological weapon. He has written that today an even greater risk than a naturally caused pandemic is that a non-government group will use open-source AI tools to design and deploy a bioterrorism weapon. In his view, the combination of powerful models and open-source code means that groups without state backing could generate genetic sequences, lab protocols and dissemination strategies that once required large, well-funded programs.

Gates has been explicit that he is worried about non-state actors using open-source software to develop bioterrorism weapons, warning that AI could help such groups move from intent to capability far more quickly than intelligence agencies and health systems can respond. He has described how open tools could guide users through the steps of designing or modifying pathogens, a scenario he has linked to the risk that AI could be weaponized by non-government groups. In that framing, AI is not the weapon itself, but the expert assistant that makes weaponization more accessible.

The open-source dilemma

One of the most contentious parts of Gates’s argument is his focus on open-source AI. While he has praised the technology’s transformative potential in healthcare and education, he has also issued a stark warning that open access to the most capable models could enable bioterrorism. He has pointed to the risk that freely available systems could be fine-tuned or combined with specialized biological data to generate dangerous outputs, a concern he has tied directly to the ease with which open-source AI can be repurposed for misuse.

Gates is not calling for a blanket ban on open models, but he is arguing that the current enthusiasm for openness has outpaced serious thinking about security. He has urged stronger global action to manage AI risks, including tighter controls on the most capable systems and better monitoring of how they are used, a position he has linked to his broader fear that AI could be used to design biological weapons. In his view, the debate over open versus closed AI cannot be separated from the question of how easy it should be for small groups to access tools that can meaningfully assist in building biological agents.

Balancing AI’s promise with existential risk

Even as he raises alarms, Gates continues to describe AI as the most powerful technology shift in decades, arguing that it will change society more than any recent innovation. He has written that AI will reshape labor markets and that traditional economic transitions may not be enough, suggesting that governments will need solutions beyond the free market to manage the disruption. That dual message, optimism about productivity and services alongside concern about security, is central to how he wants policymakers to think about the technology.

Gates has argued that, in a mathematical sense, the world should be able to allocate AI’s new capabilities in ways that benefit everyone, but he has also warned that we are already seeing how bad actors can exploit them. He has written that AI could be used by such actors to plan cyberattacks or to design biological threats, a risk he has linked to his broader claim that AI could help small groups pursue bioterrorism. For Gates, the challenge is to build guardrails and governance that keep AI’s upside intact while making it far harder to use the same tools to create the next pandemic-scale disaster.

What Gates wants governments to do next

Gates has been clear that he sees the risks of artificial intelligence as real but manageable if governments move quickly. He has called for the software security industry to expand the work it is already doing on AI safety and has argued that monitoring and auditing of powerful systems should become a top concern for the technology sector. He has also floated the idea that international oversight of AI might eventually resemble the role played by the International Atomic Energy Agency in monitoring nuclear materials.

In his annual note, Gates has argued that AI could be used to design bioterrorism weapons, but he has also stressed that the same technology can dramatically improve disease surveillance, vaccine development and health system planning if deployed with foresight and care. He has pointed to the need for global coordination on standards and emergency response, linking that agenda to his warning that AI could be used to design dangerous agents. In his view, the choice is not between embracing AI or rejecting it, but between building serious governance now or waiting until a catastrophic misuse forces the world to catch up.

Gates has also repeated his argument that the risks are manageable if the right institutions are built, returning in his writing to the theme that AI’s dangers can be contained with robust oversight and technical safeguards. He has urged governments to treat AI safety as a continuous process rather than a one-off regulation. For Gates, the real failure would not be that AI exists, but that leaders once again ignore a clear warning about how a new technology could turn into the next global health emergency.
