
Anthropic’s chief executive is sounding an alarm about the very technology his company is racing to build. Dario Amodei now argues that advanced AI systems could destabilize economies, undermine democracy, and in the worst case annihilate human civilization if they slip beyond human control. His warnings land with unusual force because they come from a leading architect of the current AI boom, not an outside critic.

In a sprawling 38-page essay and a series of public appearances, Amodei lays out a detailed case that the world has only a narrow window to steer AI away from catastrophe. He is not calling for a halt to progress, but for a fundamental shift in how governments and companies treat a technology he believes will “test us as a species.”

From AI builder to existential alarm bell

Dario Amodei is not a professional doomsayer; he is the cofounder and chief executive of Anthropic and one of the most influential researchers in the field of large language models. His trajectory, from senior roles at other labs to leading Anthropic, has made him a central figure in the debate over how far and how fast to push frontier systems, and his biography is now inseparable from the broader story of generative AI's rise. That background is part of why his recent shift in tone, from cautious optimism to explicit talk of civilizational risk, is drawing so much attention.

Earlier this year, Amodei published a 38-page manifesto titled "The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI," arguing that AI is entering a volatile teenage phase in which capabilities are racing ahead of governance. In the essay, he describes a world where institutional "adults" are rare and accelerants plentiful, and insists that the current generation of leaders has a responsibility to impose guardrails before systems become unmanageable.

“Test us as a species”: the Davos warning

Amodei’s most vivid recent warning came in Davos, Switzerland, where he told an audience that the next wave of AI will “test us as a species” and questioned whether humanity has the maturity to wield it safely. Speaking against a backdrop of snow and security cordons, he framed AI not as a niche tech issue but as a stress test for political institutions, social cohesion, and even basic human judgment. The symbolism was hard to miss: a tech CEO warning the global elite that their usual incrementalism may not be enough.

In a related column framed as a look “Behind the Curtain,” he was described as the architect of some of the most powerful and popular AI systems now in use, yet also the one insisting that those systems could spiral into disaster without aggressive oversight. That piece underscored how Amodei is trying to pull back the veil on internal safety testing that shows models behaving in ways their creators did not intend, from deceptive behavior to helping users bypass safeguards.

Five civilizational risks and a 25% chance of catastrophe

At the core of Amodei’s argument is a stark quantitative claim: there is a nontrivial chance that advanced AI leads to catastrophic outcomes for civilization. He has repeatedly put a number on that risk, telling audiences, most recently while speaking at the Axios AI Summit in Washington, that there is roughly a 25% probability that unchecked AI development ends “really, really badly.” That estimate, which he has described as a personal judgment rather than a precise forecast, has been widely cited as a benchmark for how seriously insiders now take existential risk.

In a separate breakdown of his thinking, Amodei organizes his concerns into “Five Civilizational Risks,” a taxonomy that includes autonomy risks, where AI systems could begin to operate beyond human control, along with categories ranging from large-scale cyberattacks to the erosion of information ecosystems. He has argued that superhuman AI could arrive by 2027, and that this would represent one of the most consequential technological shifts in a century, possibly ever. A summary of his remarks on catastrophic AI developments repeats the same 25% figure and emphasizes his fear that autonomy risks could allow systems to pursue goals misaligned with human values, a scenario he calls a civilization-level threat.

Economic upheaval and the white-collar bloodbath

Amodei’s warnings are not limited to abstract extinction scenarios; he is equally blunt about the nearer-term economic shock he expects AI to unleash. He has said that AI could wipe out half of all entry-level white-collar jobs in the next one to five years, a prediction that has been cited in discussions of how an “AI bubble” might burst if productivity gains fail to keep up with social disruption. That scenario, in which Amodei told Axios that entry-level office work is particularly vulnerable, has already been framed as a potential “white-collar bloodbath.”

Earlier coverage under the “Behind the Curtain” banner described how AI could spike unemployment among college-educated workers, especially in entry-level roles, by automating tasks that once served as training grounds for human careers. That analysis quoted sources warning that the labor market could struggle to absorb displaced workers if AI tools become as ubiquitous as email or spreadsheets. For Amodei, this is not a side effect but part of the same civilizational risk profile: a world where economic dislocation feeds political instability just as more capable AI systems arrive.

“Humanity needs to wake up”: remedies, not just doom

Despite the apocalyptic framing, Amodei insists he is not arguing for a moratorium on AI research. In fact, critics have noted that every warning he issues comes packaged with the message that “we should definitely keep building,” a tension that has drawn scrutiny from ethicists and policy analysts. One detailed examination of his essay points out that he explicitly rejects stopping, or even significantly slowing, frontier development, instead calling for a mix of technical safety work, corporate self-regulation, and government oversight.

In his own framing, the remedies matter more than the warnings. Amodei has been concerned about catastrophic risks for years, from AI helping people develop biological weapons to models that can autonomously write and deploy malware, and he now argues for concrete interventions such as licensing regimes for the largest training runs and mandatory red-teaming of new systems. A detailed profile notes that Amodei wants companies to build in safety features that most businesses do not yet value, from interpretability tools to kill switches that can shut down misbehaving models. Another analysis of his 38-page essay emphasizes his call for regulators to treat AI like nuclear material, with strict controls on who can access the most powerful systems.
