A physicist has mounted one of the most detailed technical challenges yet against the idea that our universe is a computer simulation, arguing that the energy required to run such a program would violate known physical laws. The claim arrives at a moment when rapid advances in artificial intelligence have made simulated realities feel less like science fiction and more like an engineering problem. But the math, at least for now, points firmly in the other direction.
A Physics Preprint Takes Aim at the Simulation Idea
The simulation hypothesis has lingered at the boundary of philosophy and physics for more than two decades, ever since philosopher Nick Bostrom framed it as a statistical argument in 2003. If a sufficiently advanced civilization could simulate conscious beings, and if many such civilizations existed, then the odds favor us being among the simulated rather than the simulators. The argument is elegant, but it sidesteps a practical question: could any civilization actually pull it off? That is precisely the question a new preprint by F. Vazza sets out to answer, and the conclusion is blunt. The paper, titled “Astrophysical constraints on the simulation hypothesis for this Universe: why it is (nearly) impossible that we live in a simulation,” applies hard physical limits to the computing power any simulator would need.
Vazza’s approach starts from a well-established link between information processing and energy expenditure. Every computation that erases or overwrites information, no matter how efficient the hardware, carries a minimum energy cost dictated by thermodynamics. The preprint then scales that requirement up to the task of simulating reality at various levels of fidelity. Whether you try to replicate the entire visible universe, restrict the simulation to Earth alone, or even run a low-resolution version of our planet, the energy demands remain incompatible with known astrophysical constraints. In plain terms, the power bill for a simulated cosmos would exceed anything physics allows, no matter how clever the engineers.
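The floor in question is Landauer’s principle, which puts a concrete number on that minimum. Taking room temperature (300 K) purely as an illustrative operating point, erasing a single bit of information costs at least

E_min = k_B · T · ln 2 ≈ (1.38 × 10⁻²³ J/K) × (300 K) × 0.693 ≈ 2.9 × 10⁻²¹ J.

That figure is minuscule on its own, which is exactly why the argument hinges on scale: a simulation must pay it again and again, for every bit of state it updates, across the entire history it renders.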
Three Scenarios, One Verdict
What makes the preprint more than a thought experiment is its willingness to test weaker versions of the hypothesis. A common rebuttal to energy objections is that a simulator would not need to render the full universe. Perhaps it only generates what conscious observers actually look at, much like a video game that loads scenery on demand. Vazza addresses this by modeling three distinct scenarios: a simulation of the visible universe, a simulation limited to Earth, and a low-resolution Earth simulation. Each scenario is measured against the information–energy relationship and the physical resources that could plausibly exist in any parent universe governed by similar physics, tying abstract philosophy back to concrete astrophysical limits.
Even the most forgiving scenario, a coarse-grained Earth, fails the test. The computational overhead of tracking particle interactions, thermodynamic processes, and quantum states at any meaningful resolution outstrips available energy budgets by orders of magnitude. This is not a matter of needing a bigger server farm. The constraint is fundamental: the Landauer limit ties every irreversible bit operation to a minimum energy cost, and when you multiply that cost across the information content of even a simplified Earth, the numbers become absurd. For anyone who has watched AI models grow more efficient with each generation and assumed that trend could extend indefinitely, this finding is a useful corrective. Efficiency gains in software do not override the floor set by physics.
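To see the flavor of that multiplication, here is a minimal back-of-envelope sketch in Python. It uses Seth Lloyd’s widely cited estimate that the observable universe has performed on the order of 10¹²⁰ elementary operations since the Big Bang, together with a rough 10⁵³ kg for its ordinary matter; these are illustrative orders of magnitude, not numbers taken from Vazza’s preprint.

```python
# Back-of-envelope Landauer accounting for replaying the visible universe's
# computational history. Order-of-magnitude estimates only; these are NOT
# figures from Vazza's preprint.
import math

K_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # assumed hardware temperature, K
E_BIT = K_B * T * math.log(2)  # Landauer floor per irreversible bit op, J

TOTAL_OPS = 1e120              # ~ops performed by the universe (Lloyd's estimate)
C = 3e8                        # speed of light, m/s
UNIVERSE_MASS_ENERGY = 1e53 * C**2  # ~ordinary matter (kg) times c^2, in J

energy_needed = TOTAL_OPS * E_BIT
ratio = energy_needed / UNIVERSE_MASS_ENERGY

print(f"Landauer floor per operation: {E_BIT:.2e} J")
print(f"Energy to replay the history: {energy_needed:.2e} J")
print(f"Ratio to the universe's own mass-energy: {ratio:.1e}")
```

Run the numbers and the mismatch is stark: even a perfectly efficient room-temperature computer would need some 10²⁹ times the mass-energy of the visible universe just to pay the Landauer bill, which is the kind of gap the preprint formalizes scenario by scenario.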
Why AI Progress Does Not Change the Equation
It is tempting to see modern AI, particularly large generative models, as evidence that reality can be faked on the cheap. Systems that generate fluent text, photorealistic images, and increasingly coherent video all run on hardware that is microscopic compared with the cosmos they mimic. From that vantage point, the simulation hypothesis can look less like metaphysics and more like an extrapolation: if we can produce convincing virtual worlds with today’s tools, imagine what a civilization millions of years ahead of us could do. Yet this line of thought glosses over the difference between producing appearances and sustaining a universe. A language model only needs to generate locally plausible continuations; it does not have to ensure that every sentence is consistent with a detailed, evolving physical world.
Simulating a universe, by contrast, means enforcing causal coherence everywhere, at all times. If you heat a pot of water in such a simulation, the underlying rules must guarantee that every molecule behaves in a way compatible with thermodynamics, quantum mechanics, and relativity, not just that the steam looks right from a distance. Shortcuts like generative models work precisely because they ignore most of that hidden structure, learning correlations in the data rather than the full machinery that produced it. Vazza’s argument leans on this distinction: no matter how clever the algorithms, a faithful simulation of our universe’s physics must still manipulate an enormous amount of information, and each manipulation carries an irreducible energy cost. AI progress may change how efficiently we use available compute, but it does not erase the thermodynamic bookkeeping that underpins every bit operation.
What the Preprint Does Not Rule Out
No single paper settles a question this large, and intellectual honesty demands acknowledging the boundaries of Vazza’s argument. The preprint assumes that the hypothetical parent universe obeys physics broadly similar to our own, including comparable thermodynamic constraints and limits on usable energy. If the simulators operate under entirely different physical laws, with access to mechanisms that circumvent or radically alter those limits, then the energy argument loses its force. This is not a flaw in the analysis so much as a scope limitation: scientific reasoning can only evaluate claims against the framework of known or at least specifiable physics. A hypothesis that retreats into unknowable laws becomes effectively unfalsifiable, and unfalsifiable claims sit outside the reach of empirical investigation.
There is also a subtler conceptual issue that most coverage of the simulation hypothesis overlooks. The debate tends to treat “simulation” as a binary: either we live in one or we do not. In practice, the concept spans a wide spectrum, from a full particle-level replica of the cosmos to a selective illusion that only renders what observers perceive. Vazza’s preprint addresses several points on that spectrum, which strengthens its contribution by showing that both maximal and heavily pruned versions run into similar resource ceilings. Still, the most exotic variants of the idea—those proposing that only conscious experiences are simulated, with no underlying physical substrate at all—fall outside the paper’s framework. Such views are harder to test precisely because they make fewer concrete predictions about energy, matter, or observable structure.
Where the Debate Goes From Here
The simulation hypothesis occupies an unusual position in intellectual life. It is taken seriously by some physicists and technologists, dismissed as untestable by others, and treated as a cultural meme by the general public. Vazza’s work does not make the idea go away, but it does narrow the space of plausible versions. If you want to argue that we live in a simulation, you now need to explain where the energy comes from, or you need to abandon the assumption that the parent universe resembles ours in its basic physical constraints. That is a higher bar than many popular treatments acknowledge, and it pushes the discussion away from casual speculation and toward explicit, quantitative models that can be compared with astrophysical data.
In that sense, the preprint is less a final verdict than a challenge to refine the question. It suggests that any scientifically meaningful version of the simulation hypothesis must grapple with thermodynamics, information theory, and cosmology, not just probability puzzles about future civilizations. For now, the numbers favor the view that our universe is not the output of someone else’s computer, at least not one governed by anything like our physics. Whether future work will uncover loopholes in these constraints, or whether the hypothesis will gradually migrate back from physics to philosophy, remains open. But by tying a speculative idea to hard limits, the new analysis marks a step toward treating simulated worlds not just as fodder for science fiction, but as a proposition that can be argued with equations—and, at least for the moment, decisively argued against.
This article was researched with the help of AI, with human editors creating the final content.