
Bill Gates is sharpening his warning about artificial intelligence, arguing that the same tools transforming medicine and software could also help bad actors build biological weapons on the scale of a global pandemic. He now frames AI-enabled bioterror as a risk that rivals, and in some ways exceeds, the devastation of Covid, unless governments and tech companies move faster to contain it.
Rather than treating this as a distant sci‑fi scenario, Gates is pressing policymakers to see it as an urgent security problem that intersects with public health, software, and law enforcement. His message is blunt: the world squandered its chance to blunt Covid, and today it has even less excuse to be unprepared for an engineered outbreak powered by AI.
From pandemic failure to AI‑driven bioterror risk
Gates has been unusually explicit that the world’s botched response to Covid is the backdrop for his concern about AI misuse. He has argued that if countries had invested earlier in surveillance, vaccines, and basic preparedness, “the amount of human suffering would have been dramatically less.” He now uses that failure as a cautionary tale when he describes how today’s AI systems could help design or optimize pathogens that spread faster or evade existing treatments, a possibility he has linked directly to the threat of a future bioterror disaster.
In that context, he has started to describe AI misuse as an “even greater risk than a pandemic” because it could allow a small group, or even a lone actor, to design a bioterrorism weapon that would previously have required the resources of a state. Gates has stressed that today’s large models can already accelerate biological research, and he has cautioned that the same capabilities that promise breakthroughs in drug discovery could be redirected by “bad actors” to engineer more lethal or more transmissible agents, a concern he has underlined when comparing the potential impact of AI‑enabled bioterrorism to that of Covid.
Why Gates sees bioterror as the “next big threat”
Gates has been talking about pandemics for years, but he now frames bioterrorism as the “next big threat facing humanity,” a shift that reflects both technological change and geopolitical anxiety. In earlier remarks he argued that deliberate biological attacks could be more destabilizing than natural outbreaks, and he has pointed out that Interpol shares this assessment, with the international police organization warning that extremist groups and criminal networks could exploit advances in synthetic biology.
He has also reminded audiences that he used a high‑profile TED Talk in 2015 to warn that the world was not ready for a major viral outbreak, a prediction that Covid grimly validated. He now says the danger from bioterror is “more immediate” because AI is lowering the barrier to entry for sophisticated biological work. In that earlier TED Talk, Gates urged governments to invest in health systems and rapid response capacity, and he has since argued that the same kind of investment is needed to protect against AI‑enabled biological threats, a point he has reiterated in interviews cited by Euronews about reorienting global health spending to confront what he calls the next big threat.
How AI could supercharge non‑state bioterror groups
The most unsettling part of Gates’s warning is his focus on non‑state actors, not just rogue governments. He has said explicitly that AI could help “non‑state groups create bioterrorism weapons using open source software,” a scenario in which freely available models and code give small organizations capabilities that once required national laboratories.
He has linked that risk to the broader spread of powerful coding and modeling tools, noting that AI is already transforming software development and that similar accelerants are emerging in computational biology, which could allow extremist groups to run simulations, optimize genetic sequences, or identify vulnerabilities in public health systems. In one detailed account of his remarks, Gates is quoted warning that these non‑state groups could exploit AI advances in areas like software development to build or deploy biological weapons.
“Optimism with footnotes”: real risks, real oversight
Despite the stark language, Gates insists he is not a pessimist about AI, instead describing his outlook as “optimism with footnotes,” a phrase he used in his yearly message to capture the idea that AI could transform healthcare and education while also creating new security nightmares. In that message he stressed that if the world had prepared properly for Covid the outcome would have been far better, and he argued that today it must pair enthusiasm for AI’s benefits with serious oversight of how models are trained, deployed, and accessed, a balance he urged in his annual letter while warning that AI’s potential role in bioterrorism is already emerging.
On his own site he has argued that the risks of AI are “real but manageable,” and he has called for the software security industry to expand the work it is already doing to monitor and harden systems, saying this “ought to be a top concern” as AI models accelerate both defensive and offensive cyber capabilities, a point he elaborates in a detailed essay on AI risks.
“No upper limit”: why timing matters for regulation
Gates’s urgency is tied to his belief that AI will not plateau before surpassing human intelligence, a view he summarized by saying “there is no upper limit” to how capable these systems could become. He has warned that as models race past human levels in specialized tasks, the window to put guardrails in place will narrow, a concern he voiced in comments reported by Windows Central, where he said he expects AI to keep improving without hitting a ceiling.
That is why he keeps returning to the idea that the “real AI risks will hit sooner than most people expect,” including job losses and social disruption, with a particular emphasis on bioterrorism, which he has described as a threat that could materialize quickly if bad actors use AI to design a biological weapon, a timeline he has outlined in interviews about near‑term AI dangers.