Geoffrey Hinton, the British-Canadian computer scientist widely known as the “Godfather of AI,” has raised his estimate of the probability that artificial intelligence could wipe out humanity within the next three decades. His warning arrives at a moment when the U.S. government has reversed course on AI safety regulation, revoking the federal framework that had required rigorous testing of advanced systems. The combination of escalating risk warnings and loosening oversight opens a gap that neither researchers nor policymakers have fully addressed.
Hinton Raises the Alarm on Superintelligence
Hinton has spent decades at the frontier of neural network research, and his views on AI risk carry unusual weight across both academia and industry. In a late December 2024 interview, he increased his estimate of the likelihood that highly advanced AI could pose an existential threat to humans within 30 years. His earlier assessment had placed the risk at roughly 10 percent. He now puts it between 10 and 20 percent, a shift he described as reflecting the accelerating pace of AI capability gains.
In that interview, reported by The Guardian, Hinton emphasized that his rising concern is grounded in recent technical progress rather than abstract speculation. Systems that only a few years ago struggled with basic language tasks can now generate complex code, pass professional exams, and exhibit early signs of strategic reasoning. For a researcher who helped pioneer deep learning, the speed of that advance is itself a warning signal.
The analogy he used was blunt. Hinton said humans will be like toddlers compared with the intelligence of highly powerful AI systems. That image is not rhetorical decoration. It captures a specific concern: that once AI exceeds human cognitive ability across a broad range of tasks, the power imbalance could become unmanageable. A toddler does not negotiate with an adult on equal terms, and Hinton’s point is that superintelligent machines would hold a similar advantage over people.
What makes Hinton’s revised estimate notable is not the precise number but the direction. A researcher who helped build the foundations of modern AI is telling the public that the danger is growing faster than he previously believed. He is not predicting a fixed date for catastrophe. Instead, he is saying the window of time in which humans can maintain control is narrowing, and the uncertainty itself is part of the problem. When the possible downside is extinction, even a 10 to 20 percent probability is extraordinarily high.
U.S. Safety Framework Built and Then Dismantled
The federal government’s response to AI risk has swung sharply in a short period. On October 30, 2023, Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” established the most detailed set of AI safety requirements the U.S. had ever issued. The order required developers of the most powerful AI models to share safety test results with the federal government before releasing those systems. It directed agencies to set standards for red-teaming, the practice of stress-testing AI for dangerous capabilities, and it created reporting obligations for companies working on frontier models.
EO 14110 also tasked multiple agencies with developing technical standards, watermarking methods to identify AI-generated content, and guidelines to reduce risks to critical infrastructure and national security. While many of its provisions were still being implemented, the order marked a clear federal stance: cutting-edge AI should not be deployed without robust evaluation for potential harms.
That framework lasted roughly 15 months. On January 20, 2025, a new executive order titled “Removing Barriers to American Leadership in Artificial Intelligence” revoked EO 14110 in its entirety. The replacement order frames the prior safety requirements as obstacles to U.S. competitiveness and directs the development of a new AI action plan, though it does not specify what safety provisions, if any, that plan will include.
The practical result is a regulatory vacuum. The testing mandates, the reporting requirements, and the agency-level safety standards that EO 14110 created are no longer in force. No replacement rules have been published. Companies developing advanced AI systems are, for now, operating without the federal oversight structure that existed just weeks ago. The shift does not prevent firms from conducting their own evaluations, but it removes the legal obligation to meet minimum safety benchmarks before releasing powerful models to the public or integrating them into sensitive applications.
Why the Policy Gap Matters for Risk
Hinton’s warnings and the regulatory rollback are not separate stories. They intersect at a specific point: the question of who, if anyone, is checking whether the most powerful AI systems are safe before they reach the public. Under EO 14110, the answer was at least partially the federal government, which could demand test results and set expectations for red-teaming. Under the current arrangement, that responsibility falls almost entirely on the companies building those systems.
This is not a theoretical concern. The AI industry is engaged in an intense race to develop more capable models, with billions of dollars in investment flowing to companies that can demonstrate the fastest progress. Safety testing takes time and resources. Without a regulatory mandate, the incentive to prioritize speed over caution grows stronger. That dynamic is precisely what Hinton has warned about: a situation where competitive pressure outpaces the ability to verify that new systems are safe.
The new executive order’s call for a fresh action plan could eventually produce a replacement framework. But the gap between revoking the old rules and finalizing new ones creates a period of reduced accountability. During that window, AI capabilities will continue to advance. If Hinton is right that the risk is growing, then the absence of mandatory safety checks during a period of rapid development represents a significant blind spot. Even if future regulations restore some form of oversight, damage from inadequately tested systems deployed in the interim could be difficult or impossible to reverse.
The Toddler Problem and What It Means
Hinton’s comparison of humans to toddlers deserves closer examination because it challenges a common assumption in AI policy debates. Much of the current discussion treats AI risk as a problem of alignment, the technical challenge of ensuring that AI systems pursue goals that humans actually want. Hinton’s framing suggests something more fundamental: that the gap in raw intelligence between humans and advanced AI could become so large that alignment techniques may not be sufficient.
Consider the analogy from the other direction. A toddler cannot meaningfully constrain the behavior of an adult, no matter how well-intentioned the adult may be. The power differential is too great. If superintelligent AI systems reach a point where they are as far beyond human intelligence as adults are beyond toddlers, then the tools humans use to control AI, including safety testing, alignment research, and regulatory oversight, may simply not be adequate to the task. A system that can outthink its designers at every turn may find ways around constraints that looked robust on paper.
This does not mean the situation is hopeless. It means that the timeline for developing effective safeguards is shorter than many policymakers appear to assume. Hinton is not saying robots will take over next year, or that catastrophe is inevitable. He is arguing that when a technology could plausibly lead to human extinction, a double-digit probability over a few decades should trigger an emergency-level response.
That response has at least three components. First, governments need sustained technical capacity to evaluate cutting-edge models independently of the companies that build them. Second, policymakers must design rules that are flexible enough to adapt to rapid advances but firm enough to prevent a race to the bottom on safety. Third, researchers and industry leaders have to treat existential risk as a central design constraint, not an afterthought to be patched in after deployment.
Hinton’s raised alarm, combined with the dismantling of the previous federal safety framework, highlights a widening gap between the pace of AI development and the mechanisms meant to keep it safe. Whether that gap narrows or continues to grow will help determine whether advanced AI becomes a transformative tool that remains under human control, or a technology that outstrips our ability to manage it.
*This article was researched with the help of AI, with human editors creating the final content.*