Morning Overview

Are we living in a simulation? What science and AI say now

Researchers at the University of British Columbia Okanagan have published a mathematical argument that, they say, rules out the possibility that this universe is a computer simulation. The claim, made by co-author Dr. Lawrence M. Krauss and his colleagues, lands in the middle of a two-decade debate that began with a philosopher’s thought experiment and now intersects with rapid advances in artificial intelligence. As AI systems grow capable of building virtual worlds with less and less human guidance, the question of whether our own reality could be one of those worlds has shifted from science fiction to a live research problem.

That debate now spans philosophy, physics, computer science and even the sociology of scientific publishing. It ranges from abstract arguments about probability to concrete proposals for experimental tests, and from speculative worries about omnipotent simulators to practical questions about how AI might soon generate immersive environments that are indistinguishable from everyday life. The UBC Okanagan work does not end this discussion, but it sharpens a central fault line: whether the basic structure of our universe is compatible with the kind of algorithmic processes that underlie all known computers.

Bostrom’s Trilemma and Its Statistical Pushback

The modern simulation debate traces back to a 2003 paper in The Philosophical Quarterly by Nick Bostrom, who used probabilistic and anthropic reasoning to frame a trilemma. Either civilizations go extinct before reaching the computational power to simulate conscious beings, or advanced civilizations choose not to run such simulations, or the fraction of all conscious beings living inside simulations is so large that we are almost certainly among them. That trilemma has anchored nearly every subsequent scientific and philosophical treatment of the idea, in part because it ties together technological forecasting, ethics and metaphysics in a single compact argument.
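
Stated compactly, the trilemma falls out of a single fraction. In the notation of Bostrom's 2003 paper, let f_p be the fraction of human-level civilizations that reach a "posthuman" stage, N̄ the average number of ancestor simulations such a civilization runs, and H̄ the average number of observers who live before that stage. The fraction of all human-type observers who are simulated is then:

```latex
f_{\text{sim}} \;=\; \frac{f_p \,\bar{N}\,\bar{H}}{f_p \,\bar{N}\,\bar{H} \;+\; \bar{H}}
            \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

Unless f_p is tiny (the extinction prong) or N̄ is tiny (the disinterest prong), the fraction sits close to 1, which is the third prong.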

Yet the trilemma’s logic depends on assumptions that later researchers have challenged directly. A paper in the journal Universe applied Bayesian model averaging to the simulation argument and found that, once model uncertainty is treated carefully, the probability of living in a simulation comes out below 50 percent under broad conditions. That result cuts against the popular reading of Bostrom’s work, which media coverage often summarizes as near-certainty that we are simulated. The Bayesian analysis suggests the original argument is far more sensitive to its starting assumptions than casual readers tend to recognize, especially assumptions about how many simulations advanced civilizations would run and how similar those simulations would be to their own histories.
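
To see how model averaging can pull the number under one half, consider a deliberately stripped-down version of that style of calculation. Everything here, the 50/50 model prior, the 99 percent figure, the two-model structure, is an illustrative assumption, not the published analysis:

```python
# Toy Bayesian model average over the simulation question. Model A:
# no ancestor simulations are ever run, so nobody is simulated.
# Model B: simulations are run, and almost all observers are simulated.

def p_simulated(prior_b=0.5, frac_simulated_in_b=0.99):
    """Average P(we are simulated) over the two rival models."""
    p_sim_given_a = 0.0                     # model A: no simulations exist
    p_sim_given_b = frac_simulated_in_b     # model B: most observers simulated
    return (1 - prior_b) * p_sim_given_a + prior_b * p_sim_given_b

print(p_simulated())   # 0.495 -- just under 50% even with a generous model B
```

The point of the toy is structural: as long as a non-simulated model keeps real prior weight, the averaged probability stays below one half no matter how simulation-heavy the rival model is.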

Physics Sets Hard Limits on Simulated Realities

Even if a civilization wanted to simulate an entire universe, the raw physics of computation would constrain the effort. A widely cited analysis in Physical Review Letters estimated the total amount of information the universe could have registered, and the number of elementary operations it could have performed, over its entire history, using order-of-magnitude bounds based on fundamental constants. Those figures represent the ceiling for any computation happening inside this universe, meaning a simulation of comparable complexity would require resources at least as large as the thing being simulated. That is a practical barrier, not just a theoretical one, and it applies regardless of how clever the simulator’s algorithms might be or how advanced their hardware becomes.
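
The flavor of that order-of-magnitude bookkeeping is easy to reproduce. The sketch below combines the Margolus-Levitin limit (a system of energy E can perform at most 2E/πħ operations per second) with rough textbook values for the mass and age of the observable universe; the published estimate was of order 10^120 operations, and these cruder inputs land within an order of magnitude of it:

```python
# Rough reconstruction of a Lloyd-style capacity bound. Inputs are
# standard approximate values, assumed here for illustration.
import math

HBAR = 1.055e-34        # reduced Planck constant, J*s
C = 3.0e8               # speed of light, m/s
MASS_UNIVERSE = 1e53    # observable universe, kg (order of magnitude)
AGE_UNIVERSE = 4.4e17   # ~13.8 billion years, in seconds

energy = MASS_UNIVERSE * C**2                    # total mass-energy, J
ops_per_sec = 2 * energy / (math.pi * HBAR)      # Margolus-Levitin limit
total_ops = ops_per_sec * AGE_UNIVERSE

print(f"~10^{math.log10(total_ops):.0f} elementary operations ever possible")
```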

Physicists have also looked for observable fingerprints that a simulation might leave behind. A preprint on lattice spacetime examined what would happen if spacetime were modeled on a discrete grid, the way a computer breaks continuous space into pixels. The authors predicted that such a lattice would break rotational symmetry in the spectrum of the highest-energy cosmic rays, and they derived bounds on the effective lattice spacing. So far, cosmic ray observations have not revealed the predicted asymmetry, though the test remains limited by the energy range of current detectors. A separate line of work in astrophysics has explored how finely tuned cosmic parameters and large-scale structure formation might constrain any putative simulator, since reproducing those phenomena would demand extreme precision in both initial conditions and ongoing computational updates.
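
The arithmetic behind that kind of bound is short enough to sketch. If spacetime were a lattice, the spacing would cap the momentum a particle can carry, so the highest-energy cosmic rays observed put a ceiling on how coarse the lattice could be. The conversion below uses standard constants; it is a cartoon of the preprint's reasoning, not its full lattice calculation:

```python
# Length scale probed by the most energetic cosmic rays, via lambda = hbar*c / E.

HBAR_C = 1.97e-7          # hbar * c, in eV * meters
E_COSMIC_RAY = 5e19       # ~GZK-scale cosmic-ray energy, eV
PLANCK_LENGTH = 1.6e-35   # meters

bound = HBAR_C / E_COSMIC_RAY
print(f"lattice spacing <~ {bound:.1e} m")                  # ~4e-27 m
print(f"about {bound / PLANCK_LENGTH:.0e} Planck lengths")  # ~2e+08
```

A spacing of roughly 10^-27 meters is still some eight orders of magnitude above the Planck length, which is why the test remains limited by the reach of current detectors.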

A Mathematical Case Against Simulation

The strongest recent challenge came from the UBC Okanagan team. Their research, reported by Phys.org, drew on mathematical theorems related to algorithmic information theory. Dr. Lawrence M. Krauss stated plainly: “Hence, this universe cannot be a simulation.” He added that the research has profound implications, pointing to what the team described as non-algorithmic understanding embedded in the structure of physical laws. The argument is that certain mathematical truths the universe instantiates cannot be reproduced by any algorithmic process, and a computer simulation on any known digital machine is, by definition, such a process: a finite, rule-based procedure executed step by step.
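
The Phys.org report does not reproduce the team's theorems, but the general shape of a truth that no rule-based procedure can settle is familiar from Turing's 1936 halting argument. The diagonalization below is that classic result, included for orientation rather than as the UBC Okanagan proof:

```latex
% Suppose an algorithm H(p, x) could decide whether program p halts on input x.
% Define a new program D that uses H against itself:
%
%   D(p): if H(p, p) reports "halts", loop forever; otherwise, halt.
%
% Feeding D to itself forces a contradiction, so no such H can exist:
D(D) \text{ halts} \iff H(D, D) = \text{``does not halt''} \iff D(D) \text{ does not halt}
```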

That conclusion is not without critics. Philosopher David Chalmers has long argued, in work such as his chapter connecting Matrix-style scenarios to external-world skepticism, that even if we were inside a simulation, the objects around us would still be real in a meaningful sense, because “real” in everyday language tracks causal stability rather than metaphysical fundamentality. A peer-reviewed critique in the same journal that hosted Bostrom’s original paper challenged the indifference principle and observer reasoning on which the trilemma depends, suggesting that our sampling assumptions about observers may be misguided. The debate, in other words, is not settled by a single proof, because the participants disagree about what “simulation” even means, what counts as an algorithm, and what sort of evidence (mathematical, empirical or experiential) could ever definitively rule the hypothesis in or out.

AI Self-Evolution Raises the Stakes

While physicists argue about whether reality can be computed, AI researchers are building systems that compress decades of design work into days. A report in Science magazine described an advance in which artificial intelligence systems improve generation after generation without human input, effectively replicating years of AI research in a fraction of the time. That kind of recursive self-improvement is directly relevant to the simulation question. If machines can design better machines on their own, the computational ceiling for building convincing virtual worlds drops rapidly, because each new generation of AI can optimize both hardware use and software design in ways that human engineers might never consider.
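
How "years in a fraction of the time" can happen is clearest in a toy compounding model. Every number below is an illustrative assumption, not a figure from the Science report:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor that multiplies research throughput by a fixed factor.

throughput = 1.0     # human-years of design work completed per day (assumed)
gain = 1.3           # 30% improvement per generation (assumed)
days_per_gen = 2     # wall-clock days to produce the next generation (assumed)

total_work = 0.0
for generation in range(30):
    total_work += throughput * days_per_gen
    throughput *= gain

print(f"~{total_work:.0f} human-years of design work in {30 * days_per_gen} days")
```

With even modest per-generation gains, cumulative output quickly dwarfs what a fixed-speed research process could manage, which is the dynamic the report describes.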

As AI-generated environments grow more sophisticated, they also create new empirical angles on classic philosophical puzzles. Cognitive scientists studying perceptual inference have argued that human brains constantly build internal models of the world and update them based on incoming sensory data, a process that already resembles a kind of on-the-fly simulation. If digital agents begin to inhabit richly detailed virtual spaces designed by other AIs, the line between simulated and non-simulated experience could blur further, at least from the standpoint of what an agent can know. In that scenario, questions about whether our universe is a simulation start to look less like distant metaphysics and more like an extension of ongoing research into how minds, biological or artificial, construct reality from limited information.
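
The "internal model updated by sensory data" loop has a standard minimal form: Bayesian belief revision. The snippet below is a generic illustration of that loop, with made-up likelihoods, not a model drawn from any particular study:

```python
# One-variable Bayesian update: an agent revises its belief in a hidden
# state ("the light is on") after each noisy observation suggesting it.

def update(prior, p_obs_if_true=0.8, p_obs_if_false=0.3):
    """Posterior after one observation that favors the hidden state."""
    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / evidence

belief = 0.5                 # start uncertain
for _ in range(3):           # three noisy glimpses
    belief = update(belief)
print(f"belief after evidence: {belief:.2f}")   # ~0.95
```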

Open Infrastructure and the Future of the Debate

The simulation argument has also been shaped by how research itself is shared. Many of the technical discussions about simulated universes, discrete spacetime and algorithmic limits circulate first as preprints on large repositories before they appear in journals. The membership model that supports one of the main physics and mathematics preprint servers relies on universities and laboratories pooling resources to maintain an open platform, while individual researchers can also contribute directly to keep access free. That infrastructure has made it easier for cross-disciplinary work, combining philosophy, cosmology and computer science, to reach wide audiences quickly, accelerating both criticism and refinement of bold claims like the UBC Okanagan result.

As long as that kind of open ecosystem persists, the question of whether our universe is a simulation will likely remain a moving target rather than a closed case. New bounds from high-energy astrophysics, fresh mathematical insights into computation, and rapid progress in AI-generated worlds will continue to reshape what counts as a plausible scenario. For now, the emerging consensus is not that we are definitely simulated or definitely real in some ultimate sense, but that careful attention to physics, probability and information theory can narrow the space of possibilities. Whether or not Krauss and his collaborators have delivered the final word, their work ensures that any future defense of the simulation hypothesis will have to grapple with deep questions about what algorithms can and cannot do, and about whether reality, at its most basic level, behaves like a computation at all.


*This article was researched with the help of AI, with human editors creating the final content.