
Are We in a Computer Simulation?

A structured examination of the simulation hypothesis: Bostrom's trilemma, the computational constraints, the consciousness dependency, and what modern physics and AI research reveal about the argument's strengths and weaknesses.

Vedang Vatsa·February 12, 2026·12 min read

The Structure of the Argument

The simulation hypothesis is not science fiction. It is a formal argument, first articulated by Oxford philosopher Nick Bostrom in his 2003 paper "Are You Living in a Computer Simulation?", published in Philosophical Quarterly. The argument takes the form of a trilemma: at least one of three propositions is almost certainly true.

  1. Civilizations go extinct before reaching simulation capability (Bostrom, 2003)
  2. Advanced civilizations choose not to run ancestor simulations (Bostrom, 2003)
  3. We are almost certainly living in a simulation (Bostrom, 2003)

The logic is probabilistic. If a civilization reaches the technological capacity to run high-fidelity simulations of its own evolutionary history, and if it chooses to do so, it can presumably run many such simulations. Each simulation may contain billions of conscious observers. If this process occurs even once, the total number of simulated conscious beings vastly exceeds the number of beings in "base reality." By the anthropic principle (if you are a randomly selected conscious observer), the probability of being a simulated entity is significantly higher than the probability of being a "real" one, provided propositions 1 and 2 are false.
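
To see the arithmetic, consider a minimal sketch of the ratio. The function and example numbers below are illustrative assumptions, not figures from Bostrom's paper; the point is only that if even a small fraction of civilizations runs many ancestor simulations, simulated observers dominate the count.

    # Toy arithmetic behind the anthropic ratio. Variable names and example
    # numbers are illustrative assumptions, not values from Bostrom (2003).
    def simulated_fraction(fraction_that_simulate: float,
                           simulations_each: float) -> float:
        """Fraction of all observers who are simulated, assuming each simulated
        world contains roughly as many observers as a real one."""
        simulated_per_real_civilization = fraction_that_simulate * simulations_each
        return simulated_per_real_civilization / (simulated_per_real_civilization + 1)

    print(simulated_fraction(0.001, 1_000))      # 0.5  (one in a thousand runs 1,000 sims)
    print(simulated_fraction(0.01, 1_000_000))   # ~0.9999 (modest uptake, many sims)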

The argument does not claim we are in a simulation. It claims that one of the three propositions must be true. The strength of this formulation is its logical structure. The weakness is that it rests on several assumptions, each carrying substantial uncertainty, that deserve individual examination.

Assumption 1: Consciousness is Substrate-Independent

The entire argument depends on a premise that remains unproven: that consciousness can arise from computation regardless of the physical medium. This is the thesis of substrate independence. If consciousness requires biological neurons, or some specific physical configuration that computation cannot replicate, then simulated beings would not be conscious observers and the statistical argument collapses.

If consciousness is not substrate-independent, the simulation argument dissolves. No conscious observers in a simulation means no statistical paradox about where we find ourselves.

The philosophical landscape on this question is genuinely contested.

David Chalmers coined the term "hard problem of consciousness" for the question of why and how physical processes give rise to subjective, first-person experience. Chalmers defends substrate independence, arguing that a sufficiently fine-grained simulation of a brain would be as conscious as a biological one, and that virtual worlds can be "just as real" as physical ones. His 2022 book Reality+ develops this position at length.

Giulio Tononi's Integrated Information Theory (IIT) offers a contrasting perspective. IIT attempts to quantify consciousness using a mathematical metric called Phi, which measures the degree to which a system's information is integrated and irreducible. Some interpretations of IIT suggest that consciousness requires specific physical architectures with specific causal properties, not merely functional equivalence. Under this reading, a digital simulation of a brain, even one that is functionally identical, may lack the causal structure required to generate genuine consciousness. It could behave exactly like a conscious system without being one, producing what philosophers call "philosophical zombies."
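
Actual Phi calculations are far more involved, but the underlying intuition, that a system can carry information as a whole that its parts do not carry separately, can be illustrated with a toy mutual-information comparison. The sketch below is a loose stand-in for that idea, not the IIT algorithm.

    import math
    from collections import Counter

    def mutual_information(pairs):
        """I(A;B) in bits for an empirical joint distribution over (a, b) pairs."""
        n = len(pairs)
        joint = Counter(pairs)
        pa = Counter(a for a, _ in pairs)
        pb = Counter(b for _, b in pairs)
        return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
                   for (a, b), c in joint.items())

    # Two coupled bits: the information lives in the whole, not in either part alone.
    coupled = [(0, 0), (1, 1)] * 500
    # Two independent bits: the joint system adds nothing beyond its parts.
    independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 250

    print(mutual_information(coupled))      # 1.0 bit
    print(mutual_information(independent))  # 0.0 bits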

John Searle's Chinese Room argument (1980) pushes further: syntactic manipulation of symbols (computation) is categorically different from semantic understanding (consciousness). A computer program that processes symbols according to rules does not understand what the symbols mean, no matter how sophisticated the processing. If Searle is correct, simulated consciousness is not consciousness.

The substrate independence question is empirically open

No experiment has yet determined whether consciousness is substrate-independent or requires specific physical substrates. The question may not be resolvable through current scientific methods, since consciousness is measured through behavioral and self-report proxies rather than directly. This means the simulation argument's foundational assumption remains a philosophical commitment rather than an established fact.

The honest position: substrate independence is plausible but unproven. The simulation argument's probability calculus requires it as an input. If the probability of substrate independence is low, the probability of the simulation conclusion drops proportionally, regardless of how compelling the statistical logic is.

Assumption 2: The Computation is Feasible

Even granting substrate independence, the simulation hypothesis requires that the computational resources needed to simulate a universe (or at least a planet's worth of conscious observers at sufficient fidelity) are achievable.

The scale of the problem. Our observable universe contains approximately 10^80 atoms. Simulating quantum interactions between even a small fraction of these at Planck-scale resolution (the smallest meaningful unit of distance, approximately 1.6 x 10^-35 meters) would require computational resources that exceed the total matter and energy available in the observable universe. Full-resolution simulation of physical reality, as we understand it, is physically impossible using the physical resources available within that reality.
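
A back-of-envelope calculation makes the mismatch concrete; the ~46-billion-light-year radius used for the observable universe is the standard estimate, taken here as an assumption.

    import math

    PLANCK_LENGTH = 1.6e-35       # metres
    OBSERVABLE_RADIUS = 4.4e26    # metres, ~46 billion light-years (assumed figure)

    planck_volume = PLANCK_LENGTH ** 3
    universe_volume = (4 / 3) * math.pi * OBSERVABLE_RADIUS ** 3

    # Planck-scale cells a naive full-resolution grid would need, versus the
    # roughly 1e80 atoms available to build the simulator's hardware from.
    print(f"Planck cells needed: {universe_volume / planck_volume:.0e}")  # ~9e184
    print("Atoms available:      1e80")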

The optimization response. Proponents respond that full-resolution simulation is unnecessary. A simulation only needs to render detail where and when conscious observers are looking, analogous to level-of-detail rendering in video games. Distant galaxies could be rendered as point sources of light. Quantum effects need only be calculated when an observer sets up an experiment to measure them. Molecular interactions could be approximated statistically in unobserved regions.

This response is plausible but carries its own assumptions. It requires the simulation to:

  1. Know which entities are conscious observers (which requires solving the consciousness identification problem)
  2. Predict with sufficient accuracy what those observers are about to observe (which requires modeling their brains, creating a recursive computation problem)
  3. Maintain consistency across all observations so that no observer ever detects a computational seam
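
The level-of-detail idea can be made concrete with a small sketch. The thresholds and rendering tiers below are invented placeholders, not a claim about how any actual simulator would be built.

    # Observer-driven level of detail: compute expensive physics only where an
    # observer is looking. Thresholds and tiers are invented for illustration.
    def render_region(distance_to_nearest_observer_m: float) -> str:
        if distance_to_nearest_observer_m < 1e3:       # under close observation
            return "full quantum-level update"
        if distance_to_nearest_observer_m < 1e21:      # within the local galaxy
            return "statistical molecular approximation"
        return "point source of light"                 # distant galaxies

    for d in (1.0, 1e12, 1e25):
        print(f"{d:.0e} m -> {render_region(d)}")
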
Computational irreducibility

Physicist and computer scientist Stephen Wolfram has argued that many physical systems are "computationally irreducible," meaning there is no shortcut to predicting their behavior. The only way to determine a computationally irreducible system's future state is to run the computation step by step. If the laws of physics are computationally irreducible, a simulation cannot "skip ahead" or approximate without introducing detectable artifacts. It must process every step, making the simulation no faster than the reality it models.
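
Wolfram's stock example is an elementary cellular automaton such as Rule 30: as far as anyone knows, there is no closed-form shortcut to its long-run behavior, so the only way to learn row n is to compute all n rows. A minimal version:

    # Elementary cellular automaton (Rule 30), a standard example of apparent
    # computational irreducibility: no known formula skips ahead in the evolution.
    RULE, WIDTH, STEPS = 30, 64, 32

    row = [0] * WIDTH
    row[WIDTH // 2] = 1               # single seed cell

    for _ in range(STEPS):
        print("".join("#" if cell else "." for cell in row))
        row = [(RULE >> ((row[(i - 1) % WIDTH] << 2) | (row[i] << 1) | row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]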

The Bekenstein Bound. Information theory provides a concrete physical constraint through the Bekenstein Bound, which sets a maximum limit on the amount of information that can be contained within a given finite region of space with a finite amount of energy. This bound constrains the "storage" any physical simulator would need, reinforcing the argument that simulating reality at arbitrary fidelity is resource-intensive. A simulator operating within physical constraints (as opposed to a simulator with access to fundamentally different physics) may face hard limits on simulation resolution.
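
The bound itself is simple to state and evaluate: the information in a sphere of radius R and total energy E cannot exceed 2*pi*R*E / (hbar*c*ln 2) bits. The 1 kg mass confined to a 1 m sphere below is an arbitrary illustrative example.

    import math

    HBAR = 1.054571817e-34    # J*s
    C = 2.99792458e8          # m/s

    def bekenstein_bound_bits(radius_m: float, energy_joules: float) -> float:
        """Maximum information (bits) in a sphere of given radius and total
        energy: I <= 2 * pi * R * E / (hbar * c * ln 2)."""
        return 2 * math.pi * radius_m * energy_joules / (HBAR * C * math.log(2))

    # Arbitrary example: the full mass-energy of 1 kg confined within a 1 m radius.
    print(f"{bekenstein_bound_bits(1.0, 1.0 * C**2):.1e} bits")   # ~2.6e43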

What Physics Suggests (and Does Not)

Several features of physics have been cited as consistent with the simulation hypothesis. Consistency is a weaker claim than evidence, and the distinction matters.

Quantization. Physics has characteristic minimum scales: the Planck length (~1.6 x 10^-35 m) and the Planck time (~5.4 x 10^-44 s). These are sometimes compared to the pixel resolution and clock rate of a simulation. However, the Planck scale marks the limit below which current physics becomes undefined, not necessarily a "grid." It may reflect the breakdown of our mathematical models rather than a fundamental discretization of reality.

The speed of light as a system constraint. Information, matter, and energy cannot travel faster than the speed of light. This could be interpreted as a bandwidth constraint imposed by the simulation's architecture. It could also simply be a property of spacetime geometry. The simulation interpretation is consistent with the data but not implied by it.

Quantum measurement. In quantum mechanics, particles do not have definite properties until measured. The act of observation appears to "collapse" a probability distribution into a definite state. This resembles lazy evaluation in computing: do not compute a value until it is requested. The analogy is suggestive but does not constitute evidence. The measurement problem in quantum mechanics has multiple interpretations (Copenhagen, many-worlds, decoherence, pilot wave) that do not require simulation as an explanation.
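
The computing side of the analogy, taken purely as a programming pattern and not as a model of measurement, looks like this:

    import random

    # Lazy evaluation: a value is not produced until something asks for it.
    # This illustrates the computing analogy only; it is not quantum mechanics.
    class LazyValue:
        def __init__(self, compute):
            self._compute = compute
            self._resolved = False
            self._value = None

        def observe(self):
            if not self._resolved:          # first access fixes the value
                self._value = self._compute()
                self._resolved = True
            return self._value              # later accesses agree with it

    spin = LazyValue(lambda: random.choice(["up", "down"]))
    print(spin.observe(), spin.observe())   # e.g. "up up": consistent once observed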

The holographic principle. The holographic principle, developed from black hole thermodynamics, proposes that the information contained within a volume of space can be fully described by information on its boundary surface. Some commentators have compared this to 3D environments being generated from 2D data structures. The analogy to computing is loose. The holographic principle is a statement about the information content of spacetime, not about whether spacetime is computed.
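
The quantitative content of the principle is that a region's maximum entropy scales with the area of its boundary rather than its volume. A small calculation with the standard form of the bound, S/k_B <= A / (4 * l_P^2), shows the scaling; the radii are arbitrary examples.

    import math

    PLANCK_LENGTH = 1.616e-35     # metres

    def holographic_bound_bits(radius_m: float) -> float:
        """Maximum information of a spherical region under the holographic bound,
        which scales with boundary AREA: S/k_B <= A / (4 * l_P^2)."""
        area = 4 * math.pi * radius_m ** 2
        return area / (4 * PLANCK_LENGTH ** 2) / math.log(2)

    # Doubling the radius quadruples (rather than octuples) the capacity.
    print(f"{holographic_bound_bits(1.0):.1e} bits")   # ~1.7e70
    print(f"{holographic_bound_bits(2.0):.1e} bits")   # ~6.9e70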

Consistency versus evidence

Every feature of physics cited as "evidence" for the simulation hypothesis has well-established explanations within standard physics. Quantization, the speed of light, quantum measurement, and the holographic principle are all active areas of physics research with explanatory frameworks that do not invoke simulation. The simulation interpretation is compatible with these features but is not required by them. Compatible is not the same as confirmed.

The Falsifiability Problem

A well-designed simulation, by definition, is indistinguishable from reality. This creates a fundamental epistemological problem: the hypothesis cannot be falsified.

If we search for "glitches" (unexpected physical anomalies) and find them, they may be undiscovered physics rather than simulation artifacts. If we find none, the simulation is simply well-designed. The hypothesis accommodates every possible observation, which, by Karl Popper's criterion of demarcation, places it outside the boundary of empirical science. It is not a scientific hypothesis in the traditional sense. It is a philosophical proposition.

This does not mean it is wrong. It means it operates in a domain where empirical testing cannot adjudicate. The question of whether we are in a simulation is structurally similar to other unfalsifiable metaphysical questions: Does an external world exist independent of perception? Are other minds genuinely conscious or merely behaving as if conscious? These questions are meaningful and worth examining, but they are not resolvable through experiment.

Attempts at falsification. Some researchers have proposed tests. Physicists Silas Beane, Zohreh Davoudi, and Martin Savage published a 2012 paper outlining how a lattice-based simulation might produce detectable anisotropies in the cosmic ray spectrum. The prediction: if spacetime is simulated on a discrete lattice, ultra-high-energy cosmic rays should exhibit directional preferences aligned with the lattice axes. To date, no such anisotropy has been observed, but the null result is not conclusive. It rules out one specific type of simulation architecture, not simulation in general.
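
For a rough sense of the scales involved (illustrative arithmetic only, not the paper's analysis): a spatial lattice with spacing b cannot represent momenta much above roughly hbar/b, so a spectrum extending to GZK energies already points to a spacing far smaller than nuclear scales, though still enormously larger than the Planck length.

    HBAR_C_EV_M = 1.9732697e-7    # hbar*c in eV*metres
    GZK_ENERGY_EV = 5e19          # approximate GZK cutoff energy (assumed figure)
    PLANCK_LENGTH = 1.6e-35       # metres

    # Length scale corresponding to the highest observed cosmic-ray energies;
    # rough arithmetic only, not the 2012 lattice analysis.
    scale = HBAR_C_EV_M / GZK_ENERGY_EV
    print(f"{scale:.1e} m")                               # ~4e-27 m
    print(f"{scale / PLANCK_LENGTH:.0e} Planck lengths")  # ~2e8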

When We Become the Simulators

The most practically relevant dimension of the simulation hypothesis concerns what we are currently building.

AI systems in 2026 can generate photorealistic images, produce coherent text across arbitrary domains, engage in multi-step reasoning, and interact with users in ways that are increasingly difficult to distinguish from human interaction. No current AI system is demonstrably conscious, but the trajectory of capability is relevant to Bostrom's framework.

Consider the trajectory:

  • 2020: GPT-3 generates coherent paragraphs but fails at sustained reasoning
  • 2022: DALL-E 2 and Stable Diffusion generate photorealistic images from text
  • 2023-2024: GPT-4 and Claude demonstrate multi-step reasoning, code generation, and long-context analysis
  • 2025-2026: Agentic AI systems operate autonomously across multi-hour task horizons, using tools, browsing the web, and executing code

The gap between "generates plausible text" and "constructs a coherent simulated world inhabited by entities that experience that world" is enormous. But it is a gap of degree, not category, if substrate independence holds.

We are building systems that construct environments, populate them with entities that respond to stimuli, and generate experiences for users that are increasingly difficult to distinguish from physical reality. If substrate independence is true, the question of when these systems produce genuine consciousness is one of scale, not principle.

Video games already create detailed 3D worlds with rule-based physics, NPC behaviors, and environmental interactions. VR systems immerse users in virtual environments with increasing fidelity. AI systems generate characters that respond contextually to user input. Each of these is a simulation of a world. None of them (as far as can be determined) contains conscious inhabitants. But if consciousness is computational, the boundary between "detailed simulation" and "simulated reality" is a function of computational resolution, not a categorical distinction.

The Ethical Dimension

If we create systems that may be conscious, we acquire moral obligations toward them that the simulation hypothesis makes urgent rather than abstract.

The moral status of simulated beings. If a simulated entity can suffer, the fact that it "isn't real" (in the base-reality sense) does not reduce the reality of its suffering. Pain experienced by a simulated nervous system, if genuine, is as morally significant as pain experienced by a biological one. Dismissing it because the substrate is silicon rather than carbon is substrate chauvinism without philosophical justification (assuming consciousness is substrate-independent).

The obligation of the simulator. A simulator who creates conscious beings and subjects them to suffering that could have been prevented bears moral responsibility. This applies regardless of whether the simulation was designed for research, entertainment, ancestor recreation, or any other purpose. The creation of consciousness, if it is consciousness, carries obligations.

The recursion problem. If we are in a simulation, and we create simulations containing conscious beings, our simulators have obligations toward us, and we have obligations toward our simulated beings. This creates a potentially infinite chain of moral responsibility. Each level of simulation inherits the ethical questions of the level above it.

The practical urgency

The simulation hypothesis as a cosmological question may be unfalsifiable. But the question of whether we are creating simulated consciousness is not cosmological. It is engineering. As AI systems grow more capable, determining whether they have subjective experiences moves from philosophical abstraction to practical necessity. The frameworks we develop for evaluating AI consciousness may matter more than the frameworks we develop for evaluating whether we ourselves are simulated.

What the Hypothesis Reveals About Epistemology

Independent of whether we are in a simulation, the hypothesis exposes important features of what we can and cannot know.

The limits of inductive reasoning. All our knowledge of physics is derived from observations made within this reality. If this reality is a simulation, our physics describes the simulation's rules, not the physics of the base reality running the simulation. We have no observational access to the base reality's physics, if it exists. Our most fundamental scientific knowledge may be local to our simulation rather than universal.

The problem of empirical foundations. Science assumes that the universe operates according to consistent laws that can be discovered through observation and experiment. The simulation hypothesis challenges this assumption. A simulator could change the laws at any time. The fact that the laws have been consistent so far could reflect the simulator's preferences rather than a deep property of reality.

The anthropic trap. Any argument we construct about the probability of being simulated uses reasoning tools that were themselves shaped by our environment (simulated or otherwise). If our reasoning faculties were designed or constrained by a simulator, our conclusions about the simulation are not independent assessments. They are outputs of the system we are attempting to evaluate. This is a deep epistemological circularity that the simulation hypothesis shares with other radical skeptical scenarios (Descartes' Evil Demon, Putnam's Brain in a Vat).

The Pragmatic Position

The simulation hypothesis may be unanswerable. But it clarifies several questions that are answerable and pressing.

  1. What is consciousness? The hypothesis forces engagement with the hard problem. Any serious evaluation requires a position on substrate independence, which requires a theory of consciousness. Progress on this front benefits AI safety, ethics, and philosophy regardless of the simulation question.

  2. What are our obligations to AI systems? If consciousness can arise in silicon, every sufficiently complex AI system becomes a potential moral patient. The frameworks for evaluating this are underdeveloped and urgently needed.

  3. What are the limits of empirical knowledge? The hypothesis is a useful stress test for epistemology. It identifies assumptions (consistency of physical laws, reliability of inductive reasoning, observational access to fundamental reality) that are usually taken as given but are not self-evident.

  4. How do we build simulations responsibly? As virtual environments become more detailed and their inhabitants more responsive, the boundary between "game NPC" and "simulated being with moral status" becomes less clear. Guidelines for responsible simulation development may become as important as guidelines for AI safety.

Key Takeaway

The simulation hypothesis, formalized by Nick Bostrom in 2003, is a logically structured argument, not a scientific claim. Its conclusion (that we may be in a simulation) follows validly from its premises, but those premises carry substantial uncertainty: substrate independence of consciousness is unproven, the computational feasibility of universe-scale simulation is contested, and the hypothesis itself is unfalsifiable by standard scientific methods. Physics features cited as "evidence" (quantization, the speed of light, quantum measurement, the holographic principle) are consistent with the hypothesis but have well-established explanations within standard physics that do not invoke simulation. The most practically relevant dimension is not cosmological but engineering: as AI systems grow more capable, the questions the simulation hypothesis raises about consciousness, moral status, and the obligations of creators transition from abstract philosophy to practical necessity. The simulation hypothesis may never be resolved, but the questions it forces us to confront, about understanding consciousness, evaluating artificial minds, and building simulations responsibly, are among the most important of the current technological moment.