
Humanity has always lived with risk, but for most of history those dangers were local, temporary, and survivable. What has changed is that our technology, our numbers, and our impact on the planet now create plausible paths to a permanent end for our species. When scientists talk about how humanity could end, they are not trading in science fiction so much as mapping the boundary between our current trajectory and the point where recovery becomes impossible.

Looking across physics, biology, climate science, and ethics, a picture emerges of several distinct ways our story could stop. Some are slow burns, like environmental collapse; others are sudden shocks, like nuclear war or a large asteroid impact; and a few are entirely new, such as artificial intelligence or engineered pandemics. I want to trace what the best available research says about these scenarios, how likely they might be, and what it would take to avoid turning theoretical risks into our final chapter.

From catastrophe to extinction: what scientists actually mean

When researchers talk about human extinction, they are describing the complete end of our species, not just a sharp drop in population or the collapse of current institutions. In the technical literature, extinction, or omnicide, is the point at which no members of our species remain alive anywhere and no future generations are possible. That is a much higher bar than even the worst historical disasters, which is why the study of human extinction focuses on scenarios that could either kill everyone directly or permanently destroy the conditions needed for survival.

Academic work on these questions has accelerated, with the abstract of one major paper noting that serious research into extinction drivers now spans historically familiar threats and entirely unprecedented ones. Scholars distinguish between global catastrophes that kill a large share of people but leave recovery possible, and existential risks that either wipe us out or irreversibly curtail our long-term potential. That distinction matters, because it shifts the focus from tallying immediate casualties to asking whether a shock leaves any path back to a thriving civilization.

Mapping the menu of global catastrophe scenarios

To understand how humanity could end, I find it useful to start with a structured list of dangers. One influential taxonomy of global catastrophe scenarios divides threats into broad categories: anthropogenic risks created by human activity, and natural risks that arise from the cosmos or Earth itself. Within the human-made category, it highlights artificial intelligence, biotechnology, chemical weapons, and demographic decline driven by the choice to have fewer children, alongside pollution and climate disruption that could undermine the foundations of civilization.

Another overview of global catastrophic risk stresses that defining these dangers starts with history. Humanity has already endured pandemics, wars, and famines that killed a significant fraction of the population, and some of those events came close to reshaping the trajectory of civilization. Yet the same analysis points out that new factors, including climate change, ecosystem collapse, and non-sustainable agriculture, now interact with nuclear weapons, advanced AI, and engineered pathogens in ways that could push us beyond any previous boundary.

Nuclear fire, engineered plagues, and the age of AI

Among the anthropogenic threats, three stand out in the scientific literature as especially plausible routes to an early end: nuclear war, biotechnology, and artificial intelligence. Analysts who rank the ways the world could end often put large-scale nuclear conflict near the top, noting that a full exchange between major powers could kill billions directly and trigger a nuclear winter that devastates agriculture worldwide. One detailed rundown of existential threats lists nuclear war as a leading concern, and points out that the danger is not only deliberate launches but also false alarms and miscalculations inside complex command systems.

Biotechnology and AI are newer but, in some ways, more unsettling. The same global risk mapping that catalogs nuclear weapons also flags advanced biotechnology as a route to engineered pandemics that could be more contagious and lethal than anything seen naturally, and artificial intelligence as a technology that might eventually escape human control. In a widely discussed warning about the rise of the machines, Stephen Hawking argued that if AI systems reach or surpass human-level intelligence, they could become difficult to align with our values and might pursue goals that are indifferent or hostile to human survival. That concern is no longer confined to theorists: on December 27, 2024, AI pioneer Geoffrey Hinton publicly estimated the probability of AI-caused extinction within the next 30 years at 10–20 percent, with 15 percent as the midpoint of that range, and put the risk over the next 150 years at a similar level, figures that underline how seriously some experts now take this possibility.
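To put a headline number like that in perspective, here is a minimal sketch of the arithmetic, under my own simplifying assumption of a constant, independent per-year risk (nothing Hinton has stated): it converts a cumulative probability over a 30-year horizon into the implied annual risk.

```python
# Hypothetical illustration only: translate a cumulative extinction probability
# over a horizon into the implied constant annual risk, assuming the per-year
# risk is constant and independent (a simplifying assumption, not a claim from
# any cited source).

def implied_annual_risk(cumulative_prob: float, years: int) -> float:
    """Annual probability p such that 1 - (1 - p) ** years == cumulative_prob."""
    return 1 - (1 - cumulative_prob) ** (1 / years)

for cumulative in (0.10, 0.15, 0.20):  # the quoted 10-20 percent range and its midpoint
    annual = implied_annual_risk(cumulative, 30)
    print(f"{cumulative:.0%} over 30 years ~ {annual:.2%} per year")
```

On these assumptions, even the low end of the range corresponds to roughly a third of a percent of extinction risk every single year, which is why such estimates attract so much attention.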

Asteroids, supervolcanoes, and the hostile cosmos

Even if we managed our technology perfectly, the universe itself is not a safe backdrop. Astronomers have long known that large space rocks can reset the planet, and the fossil record suggests that an asteroid impact of the kind that helped wipe out the dinosaurs could, in principle, do the same to us. One survey of natural disasters notes that the asteroid impact has become a cliché of Hollywood disaster movies, yet the underlying physics is unforgiving: once a rock of sufficient size hits, the resulting firestorms, tsunamis, and dust clouds could kill most complex life, potentially sparing only the hardiest organisms, such as cockroaches.

Asteroids are not the only external threat. Work on how life on Earth will end highlights asteroid strikes, nearby supernova blasts, and gamma-ray bursts as potential triggers for mass extinction, along with more gradual processes, such as the loss of atmospheric oxygen, that could eventually wipe out life. A discussion among space enthusiasts on January 4, 2019, weighed the odds of an asteroid impact against other risks and noted that while we now have nuclear ordnance and emerging technology for spaceborne interceptors, our detection and deflection capabilities are still incomplete. The consensus in that debate, and in the scientific literature, is that while such cosmic events are rare on human timescales, they are inevitable on geological ones, which means that if we survive long enough, we will eventually have to deal with them.

Climate, ecosystems, and the slow unravelling of civilization

Not every path to the end of humanity looks like a single dramatic blast. Several analyses of the most likely ways the world could end emphasize that climate change, biodiversity loss, and pollution are more likely to erode the foundations of society than to kill everyone outright. Yet that erosion can still be existential if it triggers feedback loops that make large parts of the planet uninhabitable, collapses food systems, or sparks conflicts that interact with other technologies. One review of existential threats notes that while nuclear war and pandemics are more obvious candidates for sudden extinction, runaway warming and ecosystem collapse could be a slower but equally final route if they permanently reduce the planet’s carrying capacity below the level needed to sustain any surviving communities.

The same catalog of other global catastrophic risks lists climate change, environmental degradation, and non-sustainable agriculture alongside nuclear weapons and pandemics, underscoring that the line between “natural” and “human-made” is blurred when our emissions and land use reshape the entire Earth system. A separate overview of the pollution crisis warns that the accumulation of contaminants in air, water, and soil is already exceeding safe limits and poses a “danger for the human civilization.” In that framing, the end of humanity might not arrive as a single headline event but as a series of compounding stresses that eventually leave no viable refuges.

The long game: what physics says about Earth’s ultimate fate

Even if we somehow navigated every human-made and near-term natural risk, the planet itself has an expiration date set by stellar physics. Work on the long-term future of Earth describes how, as the Sun brightens over hundreds of millions of years, the oceans will evaporate, plate tectonics will slow, and the planet will likely be engulfed, or at least scorched, as the Sun expands into a red giant and pushes its outer layers beyond Earth’s current orbit. Long before that final engulfment, rising solar luminosity will make the surface uninhabitable for complex life, so any surviving humans would have to leave or retreat to artificial habitats.

Planetary scientists have tried to put numbers on this timeline. Astrophysicist Ravi Kopparapu notes that “Earth has probably 4.5 billion years before the sun becomes a large red giant and then engulfs the Earth,” a figure that sets an upper bound on how long our species could remain on this planet even in the best case. That is an unimaginably long horizon compared with the next century, but it also makes clear that if humanity wants to exist on cosmic timescales, it will eventually have to become a spacefaring civilization that can survive beyond this one world.

Could we outlive Earth, the Sun, and even the universe?

Some scientists and philosophers have started to ask what it would take not just to avoid extinction in the near term, but to extend human existence far beyond the lifespan of the Sun. One exploration of how humans might outlive Earth sketches a speculative path in which we first establish off-world settlements, perhaps starting with Mars, then spread to other star systems, and eventually confront the deep future of cosmology, including scenarios like the heat death of the universe or a final black hole apocalypse. In that vision, the end of humanity would be tied not to a single disaster but to the ultimate fate of the universe itself, unless our descendants find ways to migrate between cosmic phases or exploit exotic physics.

Even in that far-future framing, the near-term choices we make about existential risk matter. If we cannot manage nuclear arsenals, AI systems, and planetary boundaries over the next few centuries, we will never reach the point where questions about the universe’s end become practically relevant. The same long-range analysis that imagines humanity surviving the death of the Sun also emphasizes that our current technological power is a double-edged sword: it gives us the tools to leave Earth, but also the capacity to destroy ourselves long before we need to worry about the final black hole apocalypse.

Putting numbers on the precipice we stand on

Because these scenarios are so consequential, some ethicists have tried to estimate the overall probability that humanity will suffer an existential catastrophe in the relatively near future. Philosopher Peter Singer, writing in a quarterly outlook titled The Year Ahead 2026, cites Toby Ord’s claim that the chance of an existential catastrophe in the next hundred years is around 16–17 percent, or roughly one in six. That figure is not a precise forecast, but it is a structured attempt to combine the various risks from nuclear war, engineered pandemics, unaligned AI, and other threats into a single, if sobering, estimate.

In his own work, Toby Ord explicitly estimated the chance of an existential catastrophe that effectively curtails the potential of future generations at that same one-in-six level over the next century, while arguing that the risk from natural threats like asteroids and supervolcanoes is far smaller than the risk from human-made technologies. A detailed review of his book The Precipice summarizes his view that humanity is now on the precipice of extinction because our technological power has grown faster than our wisdom. According to that assessment, the chance of an existential catastrophe in the next century is 1 in 100 from natural causes alone but 1 in 6 once human-made risks are included, a gap that underscores how much of our fate now lies in our own hands and in the institutions we build to manage these dangers.
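Aggregate figures like one in six are typically built up from separate per-risk estimates. As a rough illustration of that arithmetic, here is a minimal sketch in which a handful of hypothetical per-risk probabilities, placeholders chosen for illustration rather than Ord’s published numbers, are combined under a simplifying assumption that the risks are independent.

```python
# Minimal sketch of rolling separate per-risk probabilities into one century-level
# figure. The numbers below are hypothetical placeholders, not Toby Ord's published
# estimates, and treating the risks as independent is a simplifying assumption.

from math import prod

century_risks = {
    "unaligned AI": 0.10,                               # hypothetical
    "engineered pandemic": 0.03,                        # hypothetical
    "nuclear war": 0.01,                                # hypothetical
    "natural (asteroid, supervolcano, etc.)": 0.0001,   # hypothetical
}

# Probability that at least one catastrophe occurs is 1 minus the chance all are avoided.
overall = 1 - prod(1 - p for p in century_risks.values())
print(f"combined century risk ~ {overall:.1%}")  # ~13.6% with these placeholder values
```

The point of the sketch is simply that a single headline probability hides a portfolio of very unequal contributions, which is why analysts like Ord stress that the human-made entries dominate the total.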
