Cortical Labs connected roughly 800,000 lab-grown human neurons to a high-density electrode array and let them interact with a simplified version of Doom, extending a line of research that began with the classic game Pong. The demonstration builds on a peer-reviewed study showing that biological neurons in a dish can adapt their firing patterns in response to electrical feedback from a game environment. While headlines about “neurons playing video games” invite easy sensationalism, the underlying science raises a harder question: whether living brain cells on silicon chips could eventually outperform conventional AI in speed and energy efficiency for certain tasks.
How Neurons on a Chip Actually Work
The foundational experiment, known as DishBrain, placed neuronal cultures onto a high-density multielectrode array and embedded them in a closed-loop system that simulated a simplified Pong-like game. The setup, described in Neuron, used both human induced pluripotent stem cell (iPSC)-derived neurons and embryonic mouse neurons, depending on the experimental condition. Electrodes fed the neurons information about the ball’s position, and the cells’ electrical activity was translated into paddle movements. When the paddle missed, the system delivered random, unpredictable stimulation. When it hit, the feedback was structured and predictable.
That distinction matters. The neurons were not “rewarded” the way a dog gets a treat. Instead, the system exploited a principle from neuroscience: biological neural networks tend to minimize unpredictable input. Over repeated sessions, the cultures shifted their firing patterns to keep the paddle aligned with the ball, reducing the chaotic stimulation they received. The result was measurable improvement in gameplay, though far from expert-level performance.
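The closed-loop logic described above can be sketched in a few lines of code. This is a hypothetical simplification for illustration only, not the actual Cortical Labs protocol: the decoding scheme, stimulation frequencies, and function names are all assumptions.

```python
import random

def decode_paddle_move(firing_rates):
    """Map population firing in two hypothetical electrode regions
    to a paddle movement (+1 = up, -1 = down)."""
    up_region, down_region = firing_rates
    return 1 if up_region > down_region else -1

def feedback_for(outcome):
    """Structured, predictable stimulation on a hit; chaotic,
    unpredictable stimulation on a miss."""
    if outcome == "hit":
        return [75.0] * 10  # fixed-frequency pulses: predictable
    # random frequencies: the "chaotic" input the network learns to avoid
    return [random.uniform(1.0, 150.0) for _ in range(10)]

# One loop iteration: read activity, move the paddle, stimulate accordingly.
firing = (42.0, 17.0)                 # spikes/s in the two motor regions
move = decode_paddle_move(firing)     # +1: paddle moves up
stim = feedback_for("hit" if move == 1 else "miss")
```

Because the network's only lever for reducing the unpredictable input is better paddle control, minimizing surprise and improving gameplay become the same objective.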
External observers quickly picked up on the work. A summary from University College London emphasized that the neurons were guided by the frequency of signals representing the ball’s location, not by any conscious understanding of Pong. This framing highlights what is actually being tested: the capacity of disembodied neural tissue to organize its activity in response to structured, information-rich input.
From Pong to Doom: What Changed
The Doom demonstration represents a step up in complexity. Pong involves a single axis of movement and a predictable ball trajectory. Doom, even in a stripped-down form, requires navigation through a three-dimensional space with enemies and obstacles. Cortical Labs adapted the same closed-loop feedback principle from its earlier work, routing sensory data from the game environment to the neuron array and reading motor-intent signals back out.
However, no peer-reviewed paper or official preprint has yet detailed the Doom-specific protocol, performance metrics, or learning outcomes. The available evidence for the Doom extension comes from institutional press materials and secondary reporting rather than primary experimental data. That gap is significant. Without published methodology and independent replication, claims about what the neurons “learned” in Doom should be treated as preliminary. The Pong results, by contrast, have been through formal peer review and include detailed statistical analysis of learning curves in the original Neuron article.
Biological Neurons vs. Deep Reinforcement Learning
One of the strongest claims from this research program is that biological neurons learn faster than artificial intelligence agents when given the same amount of experience. A 2024 preprint on arXiv directly benchmarked DishBrain-style learning against deep reinforcement learning (RL) algorithms in a Pong-like environment. The comparison focused on sample efficiency, meaning how many interactions with the game each system needed before showing improvement.
Under time-matched constraints, the biological neurons showed competitive or superior sample efficiency compared to standard deep RL agents. That finding, if it holds up in peer review, has real implications. Modern AI systems like those powering large language models and game-playing bots require enormous computational resources and energy to train. A biological system that reaches useful performance with fewer training samples could, in theory, reduce the cost and environmental footprint of certain machine-learning tasks.
The catch is scope. Deep RL agents can be scaled, copied, and deployed across millions of servers. A dish of neurons cannot. The comparison is informative for basic science but does not yet translate into a practical engineering advantage. It instead suggests that evolution’s solution to information processing (wet, adaptive neural tissue) still has efficiency properties that our best digital architectures struggle to match.
Energy, Ethics, and the Data Center Question
The energy angle is where this research intersects with a pressing industrial problem. AI data centers consume vast amounts of electricity, and demand is growing as companies race to deploy larger models. Reporting from Bloomberg describes plans to build facilities in Singapore and Melbourne that would integrate human brain cells into computing infrastructure, signaling that Cortical Labs and its partners see a commercial future for biological hardware.
The human brain runs on about 20 watts of power, roughly equivalent to a dim light bulb, while performing tasks that require warehouse-scale computing infrastructure to approximate digitally. If even a fraction of that efficiency could be captured in a chip-based biological system, the energy savings would be substantial. But the gap between a laboratory demonstration with 800,000 neurons and a functioning data center is enormous. Scaling biological systems introduces challenges in cell viability, contamination, temperature control, nutrient delivery, and long-term stability that silicon chips solved decades ago.
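The scale of that efficiency gap is easy to make concrete with back-of-envelope arithmetic. The 20-watt figure comes from the text above; the data-center power draw is an assumed, illustrative number, not a measurement of any specific facility.

```python
BRAIN_WATTS = 20  # approximate power draw of a human brain, cited above

# Assumed illustrative figure for a mid-size AI data center (10 MW).
assumed_facility_megawatts = 10

# How many brains' worth of continuous power the facility consumes:
brain_equivalents = (assumed_facility_megawatts * 1_000_000) / BRAIN_WATTS
# 500,000 brain-equivalents of power for one such facility
```

Even if cultured neurons captured only a sliver of that efficiency, the arithmetic explains the commercial interest; the unsolved part is keeping the biology alive at scale.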
Then there is the ethical dimension. Commentary in Nature on the original Neuron paper highlighted the distance between what was demonstrated and what remains speculative, particularly around claims of “sentience” in the paper’s title. The peer-reviewed study showed that neurons adjusted their behavior in response to structured feedback. Whether that constitutes learning, intelligence, or anything resembling awareness is a separate and much harder question.
Those questions are not purely theoretical if brain-based chips move toward commercialization. Using human-derived brain tissue for computation raises issues of consent, ownership, and potential moral status. Reporting by science journalist Heidi Ledford has repeatedly explored how fast-moving biotechnologies outpace existing norms, and DishBrain-style systems fit that pattern. Regulators will have to decide whether cultured neurons used for information processing should be governed like organoids, like medical tissue samples, or like ordinary hardware.
Adding another layer, much of Nature's discussion of these experiments sits behind a publisher paywall, which limits broader public engagement with questions that could soon have real-world policy consequences.
What the Coverage Gets Wrong
Most reporting on this topic frames the story as “brain cells learned to play a video game,” which is technically accurate but misleading in emphasis. The neurons did not decide to play Pong or Doom. They were embedded in a system that translated their electrical activity into game inputs and fed game outputs back as electrical stimulation. The “learning” observed is a shift in population-level firing patterns that happens to improve game performance, not a conscious decision to get better at a task.
That distinction is not just academic. Overstating what the cultures are doing can feed public confusion about both AI and neuroscience. When headlines suggest that a petri dish is “sentient,” they blur the line between adaptive behavior and subjective experience. The DishBrain work shows that relatively small networks of neurons can exploit structured feedback to reduce unpredictability in their inputs. It does not show that those neurons have goals, feelings, or awareness.
More careful coverage also matters for setting realistic expectations about biological computing. The same experiments that reveal remarkable efficiency also expose fragility: these cultures require constant monitoring, specialized media, and precise environmental control. They drift over time, change their connectivity, and eventually die. Any attempt to turn them into commercial processors will have to confront maintenance and reliability issues that do not exist for silicon.
Where the Research Might Go Next
In the near term, DishBrain-style systems are likely to remain tools for basic neuroscience and unconventional computing research rather than replacements for GPUs. They offer a controllable platform for testing theories about learning, prediction, and network dynamics in living tissue. They also serve as a provocative benchmark for AI, forcing researchers to ask why massive digital models sometimes need so much more data to reach comparable performance on simple tasks.
For Doom and other complex games, the most important next step is transparency. Peer-reviewed protocols, quantitative performance metrics, and open data would allow independent labs to test whether these results are robust or idiosyncratic. Only then will it be possible to say whether neurons on a chip have an enduring edge in sample efficiency, or whether their apparent advantage in early experiments reflects clever task design and small-sample statistics.
In the meantime, the safest way to think about neurons playing Doom is as a vivid demonstration of a deeper idea: that the boundary between biological and artificial computation is becoming more porous. Whether that leads to greener data centers, new models of intelligence, or just a better understanding of how brains learn will depend less on splashy game demos and more on the slow, careful work of turning sensational claims into reproducible science.
*This article was researched with the help of AI, with human editors creating the final content.*