Cortical Labs, the Australian biotech startup that wired living neurons to a video game, now faces a distinctly biological constraint as it scales its technology: the cells powering its computers need their nutrient fluid replaced regularly to stay alive and functional. Peer-reviewed research describing the company’s CL-1 platform makes clear that media changes are part of routine operation, though the public scientific descriptions do not pin down a strict “daily” schedule. Silicon chips run for years without feeding. Brain cells do not.
The challenge matters because Cortical Labs is not a curiosity confined to a single lab bench. The company has moved toward deploying neuron-based computing hardware in data center settings, and the biological upkeep those systems demand could determine whether “wetware” computing becomes practical or remains a promising but fragile experiment.
From Pong to the CL-1 Device
The scientific foundation for Cortical Labs’ work traces back to a 2022 experiment in which roughly 800,000 cultured neurons learned to play Pong. That study, published in the journal Neuron, described a closed-loop system called DishBrain. Researchers placed neuronal cultures on multi-electrode arrays, or MEAs, that could both stimulate the cells with electrical signals representing the game state and record the cells’ responses as paddle movements. The neurons received structured feedback: predictable signals when they hit the ball, random noise when they missed. Over time, the cultures adapted their firing patterns to improve performance.
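The closed-loop scheme described above — structured feedback for hits, unstructured noise for misses — can be sketched as a toy simulation. Everything here is invented for illustration: a single adjustable "bias" stands in for the culture's firing tendencies, and the update rule is a crude caricature of the feedback-versus-noise idea, not the actual DishBrain protocol.

```python
import random

random.seed(0)  # reproducible toy run

def hit_rate(bias, trials=500):
    """Fraction of one-step rallies where the 'culture' moves the
    paddle toward the ball, given its current response bias."""
    hits = 0
    for _ in range(trials):
        ball_side = random.choice([1, -1])           # ball above or below
        drive = ball_side * bias + random.gauss(0, 1)
        move = 1 if drive > 0 else -1
        hits += (move == ball_side)
    return hits / trials

# Closed loop: a hit earns predictable feedback that strengthens the
# response tendency; a miss delivers only unstructured noise.
bias = 0.0
for _ in range(50):
    ball_side = random.choice([1, -1])
    move = 1 if ball_side * bias + random.gauss(0, 1) > 0 else -1
    if move == ball_side:
        bias += 0.2                         # structured, predictable feedback
    else:
        bias += 0.05 * random.gauss(0, 1)   # white-noise "punishment"

print(f"untrained hit rate: {hit_rate(0.0):.2f}")
print(f"trained hit rate:   {hit_rate(bias):.2f}")
```

Even this cartoon reproduces the qualitative result: because only correct moves produce a consistent signal, the tendency that earns predictable input is the one that grows.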
Brett Kagan, Cortical Labs’ chief scientist, and University College London professor Karl Friston were among the collaborators on the DishBrain research. Kagan described the work as a step toward understanding how intelligence emerges from neural activity, according to UCL’s account of the project. Friston, known for his free energy principle in neuroscience, contributed theoretical framing that helped explain why the neurons appeared to minimize unpredictable input, a behavior consistent with basic learning.
The Pong experiment served as a proof of concept. The peer-reviewed Neuron article established the foundational methodology: living neurons could interact with a digital environment through electrical feedback, adjusting their behavior in ways that looked like goal-directed learning. The system was not a computer in any conventional sense, but it demonstrated that biological tissue could process information in a closed loop with silicon hardware.
What the CL-1 Requires to Stay Alive
Building on DishBrain, Cortical Labs developed the CL-1, a device designed to host neuronal cultures for sustained computation. A methods paper in the Cell Press journal Patterns describes the practical realities of running such a system. Images and experimental data in that paper were generated using a CL-1 v0.8 platform, and the authors detail the protocols for maintaining the living cells that serve as the system’s processing substrate.
The central operational demand is straightforward but relentless: neuronal cultures need their growth media replaced on a regular basis. Growth media is the nutrient-rich fluid that supplies glucose, amino acids, growth factors, and other molecules that neurons need to survive and maintain synaptic connections. Without fresh media, waste products accumulate, pH levels shift, and cells begin to die. The Patterns paper discusses this maintenance requirement as a core part of running a biological intelligence lab, treating it not as an afterthought but as an essential protocol.
This is where the headline claim meets a reporting gap worth flagging. The Patterns study confirms that regular media changes are necessary to maintain neuronal cultures on the CL-1, but its publicly available sections do not specify an exact frequency. The characterization of "daily" swaps is an inference from common cell-culture practice, where many neuronal protocols call for media replacement roughly every 24 to 48 hours; no Cortical Labs product manual or official statement publicly confirms a strict daily cadence. The requirement is real; the precise timing remains ambiguous in the available primary sources.
Why Biology Clashes with Data Center Logic
Traditional computing hardware is designed around uptime. A well-maintained server rack can run continuously for years, needing only electricity, cooling, and occasional component replacement. Biological computing inverts that model. The neurons on a CL-1 chip are alive, and living systems degrade without active care. This is not a bug to be engineered away with a software patch. It is a fundamental property of the substrate.
That tension becomes acute when the conversation shifts from lab experiments to commercial deployment. Bloomberg reported on Cortical Labs’ ambitions to operate neuron-based computing in data center environments in Singapore and Melbourne. Scaling from a single research device to a facility housing many such systems multiplies the maintenance burden. Each culture needs its own fluid supply, its own sterile handling, and its own monitoring for cell health. A data center with hundreds of biological chips would need something closer to a hospital’s logistics than a traditional server farm’s.
The most promising path to easing this burden is automated perfusion, a system that continuously or periodically pumps fresh media through the culture chamber and removes waste without human hands touching the device. Microfluidic perfusion systems already exist in biomedical research, and adapting them for biocomputing is a plausible engineering goal. But "plausible" and "proven at scale" are different things. The peer-reviewed papers cited here describe culture maintenance and the need for media changes, but they do not report CL-1-specific reliability, cost, or failure-rate results for automated perfusion at data-center scale.
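Why the exchange cadence matters can be seen in a toy nutrient model. The depletion, accumulation, and dilution rates below are invented placeholders, not figures from Cortical Labs or the cited papers; the sketch only shows how the interval between media exchanges bounds the conditions a culture experiences.

```python
from dataclasses import dataclass

@dataclass
class Culture:
    """Toy media model: nutrients deplete and waste accumulates each
    hour; a perfusion cycle exchanges a fraction of the volume.
    All rates are invented placeholders, not CL-1 figures."""
    glucose: float = 1.0   # normalized nutrient level
    waste: float = 0.0     # normalized waste concentration

    def metabolize(self, hours=1):
        self.glucose = max(0.0, self.glucose - 0.01 * hours)
        self.waste += 0.01 * hours

    def perfuse(self, fraction):
        # Swapping a fraction of the media tops up nutrients and
        # dilutes waste proportionally.
        self.glucose += fraction * (1.0 - self.glucose)
        self.waste *= 1.0 - fraction

def simulate(interval_h, fraction=0.5, days=7):
    """Return the worst (lowest) nutrient level and worst (highest)
    waste level seen over a week at a given exchange cadence."""
    c = Culture()
    low_glucose, high_waste = c.glucose, c.waste
    for hour in range(1, days * 24 + 1):
        c.metabolize()
        if hour % interval_h == 0:
            c.perfuse(fraction)
        low_glucose = min(low_glucose, c.glucose)
        high_waste = max(high_waste, c.waste)
    return low_glucose, high_waste

print("24 h cadence:", simulate(24))
print("72 h cadence:", simulate(72))
```

Under these made-up rates, a 24-hour cadence holds nutrients and waste in a tight band, while a 72-hour cadence lets nutrients bottom out and waste climb — the same logic, scaled across hundreds of cultures, is what an automated perfusion system would have to manage.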
The Energy Argument and Its Limits
Proponents of biological computing often point to energy efficiency as the key advantage. The human brain runs on roughly 20 watts while performing tasks that still challenge supercomputers, and neuron-based systems could, in theory, inherit some of that efficiency. Cortical Labs has emphasized this angle in public comments, suggesting that wetware accelerators might eventually handle certain machine-learning workloads at a fraction of the power cost of GPUs.
However, the energy comparison becomes murkier once biological overhead is included. Keeping neurons alive requires not just electricity for the MEA and control electronics, but also incubators, pumps, environmental monitoring, and the production and delivery of sterile media. If technicians must regularly swap fluids by hand, their labor and the infrastructure to support it become part of the true operating cost. Even with automation, perfusion systems introduce pumps, valves, and sensors that consume power and add failure modes.
None of this negates the potential of biological computation. Instead, it reframes the energy argument as a systems-level question. Any fair comparison has to account for the full stack: from the neurons and electrodes up through the incubators, fluidics, and human oversight. At present, the scientific record offers detailed descriptions of CL-1 culture protocols but does not yet provide audited, side-by-side measurements of energy per useful computation against conventional hardware.
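As a back-of-the-envelope illustration of that full-stack framing, the tally below uses entirely made-up wattage figures; the point is which line items belong in an honest comparison, not the numbers themselves.

```python
# Illustrative full-stack power tally. Every wattage figure below is a
# made-up placeholder, not a measured CL-1 or GPU value; the point is
# which line items belong in an honest comparison.
wetware_stack = {
    "MEA and control electronics": 15,
    "incubator share":             40,
    "perfusion pumps and valves":   5,
    "environmental monitoring":     3,
    "media production (amortized)": 10,
}
gpu_stack = {
    "accelerator":   300,
    "cooling share":  90,
}

def total_watts(stack):
    return sum(stack.values())

print(f"wetware stack: {total_watts(wetware_stack)} W")
print(f"GPU stack:     {total_watts(gpu_stack)} W")
# Even if the wetware column wins on raw wattage, the decisive metric
# is energy per useful computation, and that denominator is unpublished.
```

A comparison that stops at the accelerator line flatters both columns; the sum is what a data center operator actually pays for.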
Scaling Wetware Responsibly
As Cortical Labs moves from headline-grabbing Pong demos to commercial deployments, transparency around these operational constraints will matter as much as performance benchmarks. Investors and potential customers need to understand not only how well neuron-based systems learn, but also how often they must be fed, how sensitive they are to contamination, and what happens when a culture fails mid-task.
The existing peer-reviewed work provides a starting point. The Neuron and Patterns papers show that the company’s technology can support closed-loop learning and sustained culture viability under laboratory conditions. What remains largely uncharted is the transition from carefully tended dishes to industrial-scale biocomputing. That gap spans not just engineering questions about perfusion and monitoring, but also ethical and regulatory issues around using human-derived cells in commercial infrastructure.
Future research will likely expand beyond isolated culture protocols into comparative studies of reliability, lifecycle costs, and environmental impact. Bibliography-tracking tools such as personal NCBI dashboards and curated collections already make it easier for scientists and policymakers to follow emerging data in this fast-moving field. As more results accumulate, the practical picture of biological computing should sharpen.
For now, the CL-1’s need for regular media changes is more than a laboratory footnote. It is a reminder that when computing crosses the boundary into living tissue, uptime becomes a matter of cellular health, not just system redundancy. Whether Cortical Labs can translate that reality into a robust, scalable platform will determine if neuron-powered data centers evolve from speculative renderings into a durable part of the computing landscape.
*This article was researched with the help of AI, with human editors creating the final content.