In 2022, a team of researchers in Melbourne placed a cluster of lab-grown human neurons onto a chip and watched them learn to play Pong. The cells had never been part of a brain. They had been reprogrammed from ordinary skin cells, cultured in a dish, and wired to a grid of electrodes that fed them electrical signals from a simplified video game. Within minutes, the neurons began adjusting their firing patterns to keep the ball in play. No software told them how. They just learned.
Now the company behind that experiment, Cortical Labs, wants to put those neurons to work inside a commercial data center. In partnership with hyperscale operator DayOne Data Centers, Cortical Labs is planning a facility in Singapore that would blend biological neural networks with conventional computing hardware. If it moves forward, the project would mark the first known attempt to deploy living human brain cells as a functional layer of commercial data infrastructure, a direct response to the enormous and growing electricity demands of artificial intelligence.
From Pong to production
The scientific case for this effort rests on a peer-reviewed study published in the journal Neuron in October 2022. In that paper, Cortical Labs researchers showed that in vitro neuronal cultures, both rodent and human induced pluripotent stem cell (iPSC)-derived, could learn to play a simplified version of Pong when placed on high-density multielectrode arrays inside a simulated game environment. The experiment, called DishBrain, demonstrated that biological cells could process real-time feedback and adapt their behavior through closed-loop electrical stimulation, without any traditional programming.
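The closed loop described in the DishBrain paper can be sketched in a few functions. This is a conceptual illustration, not Cortical Labs' actual code: the game state is encoded as electrical stimulation, recorded activity is decoded into a paddle move, and the outcome shapes the next stimulus. In the published experiment, a successful hit was followed by predictable stimulation and a miss by unpredictable noise; all function names and electrode counts below are assumptions for illustration.

```python
import random

def encode_ball_position(ball_y: float) -> list[float]:
    """Map the ball's vertical position to a stimulation pattern across
    8 electrodes (a stand-in for the real place-coded sensory input)."""
    return [1.0 if i == int(ball_y * 8) % 8 else 0.0 for i in range(8)]

def decode_paddle_move(spike_counts: list[int]) -> int:
    """Compare activity in two motor regions: more spikes in the first
    half moves the paddle up (+1), more in the second half moves it
    down (-1)."""
    up = sum(spike_counts[: len(spike_counts) // 2])
    down = sum(spike_counts[len(spike_counts) // 2 :])
    return 1 if up > down else -1

def feedback(hit: bool) -> list[float]:
    """Predictable stimulation after a hit, random noise after a miss,
    mirroring the reward scheme reported in the DishBrain paper."""
    if hit:
        return [0.5] * 8
    return [random.random() for _ in range(8)]
```

The key point the sketch captures is that no step "programs" the behavior: the culture only ever sees stimulation patterns, and learning emerges from the statistics of predictable versus unpredictable feedback.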
That paper attracted attention well beyond neuroscience. Its title described neurons that “learn and exhibit sentience when embodied in a simulated game-world,” a word choice that sparked debate among researchers about whether “sentience” was appropriate for cells in a dish. Regardless of the terminology, the core finding held: living neurons could be steered to perform a task and could improve at it over time.
Since then, Cortical Labs has been building the engineering layer needed to move from a lab bench to something scalable. A preprint published on arXiv describes the CL API and its execution contract, including a reference implementation for the company’s CL1 platform. The API standardizes how external systems deliver stimuli to biological neural networks and record their responses, making interactions reproducible and programmable. As a preprint, the paper has not yet undergone peer review, but its technical claims build logically on the DishBrain foundation.
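To make the idea of an "execution contract" concrete, here is a minimal sketch of what such an interface might cover: ordered, timestamped stimuli going in and timestamped responses coming out, with every delivery logged so an experiment can be replayed. Every name here is a hypothetical stand-in, not the actual CL API.

```python
from dataclasses import dataclass, field

@dataclass
class Stimulus:
    electrode: int       # target electrode index
    amplitude_uv: float  # stimulation amplitude, microvolts
    t_us: int            # scheduled delivery time, microseconds

@dataclass
class Response:
    electrode: int       # electrode where a spike was detected
    t_us: int            # spike timestamp, microseconds

@dataclass
class Session:
    """Logs stimuli in delivery order, the kind of reproducibility
    guarantee an execution contract is meant to provide."""
    log: list = field(default_factory=list)

    def deliver(self, stim: Stimulus) -> None:
        # Enforce time ordering: out-of-order delivery would make the
        # session impossible to replay deterministically.
        if self.log and stim.t_us < self.log[-1].t_us:
            raise ValueError("stimuli must be delivered in time order")
        self.log.append(stim)

    def read(self) -> list:
        # A real system would return recorded spikes here; the sketch
        # returns an empty list as a placeholder.
        return []
```

The design point is the ordering constraint: if stimuli and recordings carry a shared timebase and are logged deterministically, an interaction with a living culture becomes something software can replay and audit.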
The infrastructure partner
DayOne Data Centers, the company building the Singapore facility, announced over US$2.0 billion in Series C financing in January 2026 to accelerate its global expansion. DayOne operates hyperscale facilities and has the capital and operational footprint to support a project of this ambition.
Bloomberg has reported that the partnership involves plans for bio-computing data centers in both Singapore and Melbourne, though the full article sits behind a paywall and specific details about timelines, capacity, or technical integration cannot be independently confirmed from that coverage alone. DayOne’s own Series C announcement does not mention Cortical Labs or biological computing.
Singapore is a logical location for a project like this. The city-state has positioned itself as a hub for both AI infrastructure and advanced biotech, with regulatory frameworks that tend to move faster than those in larger jurisdictions. Whether Singapore’s existing biosafety and research-ethics rules are equipped to govern living tissue deployed as commercial compute hardware is another question entirely.
The energy argument
The central pitch for biological computing is power efficiency. Biological neurons are extraordinarily energy-efficient signal processors. A single synaptic event consumes on the order of 10 femtojoules, roughly one ten-thousandth the energy of a comparable transistor operation. Proponents argue that if biological networks can be scaled and integrated with digital systems, they could dramatically cut the electricity consumption of AI training and inference workloads, which are straining power grids worldwide.
But that comparison comes with heavy caveats. Neurons and transistors do not perform equivalent tasks. A transistor switches between binary states at gigahertz speeds with perfect reliability. A neuron fires in complex, probabilistic patterns that carry information in ways researchers are still working to decode. The energy advantage of biology is real at the cellular level, but translating it into measurable savings at data center scale requires solving problems that no published research has yet addressed.
Cortical Labs has not released benchmarks showing the energy performance of its CL1 platform in any production-like environment. The CL API preprint focuses on timing and ordering semantics for neuron interactions, not on power consumption or throughput. Until independent testing confirms that a hybrid bio-digital system can handle meaningful workloads more efficiently than conventional hardware, the energy case remains theoretical.
Practical and ethical unknowns
Scaling living neurons from a research dish to a data center floor introduces challenges that go well beyond software engineering. Neuronal cultures need precise temperature control, sterile environments, and nutrient supply. They degrade over time. Contamination is a constant risk. Maintaining cell viability over months or years in a facility designed around servers, not biology, would require entirely new operational protocols.
Then there are the interfaces. The multielectrode arrays used in the DishBrain experiment handled small clusters of neurons. Scaling to networks large enough to contribute meaningfully to computing workloads means developing interfaces that can translate between biological signals and digital data at useful speeds and with acceptable error rates. No published work has demonstrated this at the scale a data center would require.
The ethical questions are just as unresolved. The neurons Cortical Labs uses are iPSC-derived, meaning they are reprogrammed from adult cells rather than harvested from embryos or living brains. That avoids some of the most fraught sourcing concerns, but it does not settle the deeper issue. The DishBrain paper showed that neurons can adapt and learn. If biological computing systems are scaled up significantly, the question of whether those systems could develop emergent properties that warrant moral consideration is not hypothetical. It is a question that no regulatory body has formally addressed.
Cortical Labs has not published welfare protocols for neuronal cultures in commercial settings. No government agency has issued guidance specific to living tissue used as an active compute substrate. Existing frameworks for tissue research and organoid studies may offer partial guidance, but they were not designed for this use case. Data governance adds another layer of complexity: if biological systems exhibit stochastic behavior or adaptation, operators will need methods to validate outputs and audit decisions in environments where reproducibility and accountability matter.
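One generic pattern operators could borrow for validating a stochastic substrate, sketched here as an illustration rather than anything Cortical Labs has described, is redundant querying with a quorum: run the same query several times, accept the answer only if a clear majority of runs agree, and keep every run in an audit log. The function name and thresholds are assumptions.

```python
from collections import Counter

def validated_output(query_fn, query, runs: int = 5, quorum: float = 0.6):
    """Query a (possibly stochastic) system `runs` times.

    Returns (answer, audit_log) if at least `quorum` of the runs agree,
    otherwise (None, audit_log). The audit log records every run so a
    decision can be reviewed after the fact.
    """
    results = [query_fn(query) for _ in range(runs)]
    answer, count = Counter(results).most_common(1)[0]
    audit_log = list(enumerate(results))
    if count / runs >= quorum:
        return answer, audit_log
    return None, audit_log
```

The trade-off is direct: redundancy buys accountability at the cost of repeating work, which would eat into any energy advantage the substrate is supposed to provide.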
Not alone in the field
Cortical Labs is the most visible company pursuing biological computing for commercial infrastructure, but it is not working in isolation. Researchers at Johns Hopkins University have been developing "organoid intelligence" as a field, exploring whether brain organoids (three-dimensional clusters of neurons grown from stem cells) could eventually serve as biological processors. Intel's neuromorphic chip program, centered on its Loihi processor, takes a different approach by mimicking the architecture of biological neural networks in silicon rather than using living cells.
These parallel efforts underscore a broader conviction in the research community that conventional chip architectures are approaching practical limits for AI workloads, particularly on energy consumption. Whether the solution turns out to be biological, bio-inspired, or something else entirely, the problem Cortical Labs is trying to solve is real and widely recognized.
What to watch for next
The gap between the DishBrain experiment and a functioning bio-computing data center is vast. Cortical Labs has demonstrated credible science and secured a well-funded infrastructure partner. But the company has not yet released detailed construction timelines, performance data, or ethical frameworks for the Singapore project. DayOne has not publicly confirmed the biological computing component of the partnership.
The milestones that will determine whether this project is a genuine inflection point or an ambitious proof of concept are specific and measurable: independent benchmarks showing energy savings in a hybrid system, published protocols for long-term neuron viability in commercial environments, and regulatory guidance from at least one jurisdiction on the use of living tissue as compute infrastructure.
Until those milestones are met, the Cortical Labs data center plan is best understood as a high-stakes, high-uncertainty experiment at the frontier of computing, one that could reshape how the industry thinks about power and intelligence, or one that could reveal just how far biological systems still are from replacing silicon.
*This article was researched with the help of AI, with human editors creating the final content.*