A cluster of rat neurons, grown on a chip in a Japanese laboratory, just learned to generate a sine wave on command. Across the Pacific, a team of American engineers wrapped a soft electronic mesh around a tiny ball of lab-grown human brain cells and taught it to recognize patterns. Neither group set out to build a conventional computer. What they built instead may be something more consequential: the first working prototypes of machines that compute with living tissue.
The two projects, published in early 2026 in the Proceedings of the National Academy of Sciences and Nature Electronics respectively, represent the most concrete evidence yet that biological neural networks can be harnessed for controlled information processing. They also expose how far the field still has to go before “biocomputers” move from laboratory curiosities to practical tools.
Rat neurons that learn in real time
At Tohoku University in Sendai, Japan, researchers cultured rat cortical neurons on a chip etched with microfluidic channels. Those channels are the critical design choice: they guide how neurons physically connect to one another, creating modular clusters rather than a single dense tangle. Without that structure, densely packed neurons tend to fire in lockstep, producing uniform activity that is computationally useless. The microfluidic architecture breaks that synchronization, preserving the rich, varied dynamics a network needs to process information.
The team then layered a machine-learning framework on top of the living culture. The approach, called reservoir computing paired with FORCE learning, treats the neuronal network as a dynamic “reservoir” that transforms input signals into high-dimensional representations. A trained readout layer extracts useful patterns from that reservoir in real time. Inside this feedback loop, the neurons generated temporal signals on demand: sine waves, triangle waves, square waves, and even chaotic Lorenz-attractor trajectories.
“This allows biological neural networks to learn temporal patterns online under feedback control,” said Hideaki Yamamoto, a lead researcher on the project. The real-time adaptability is the point. Unlike a silicon chip running fixed logic, the living network adjusts its behavior continuously as conditions change.
The results, detailed in a peer-reviewed paper (DOI: 10.1073/pnas.2521560123), mark the first demonstration that cultured neurons can be trained through a closed-loop machine-learning system to produce specific, complex outputs.
A 3D mesh for human ‘mini brains’
The Northwestern University project tackles a different piece of the puzzle: the physical interface between electronics and living brain tissue.
Rather than growing neurons flat on a surface, the Northwestern team worked with human neural organoids, three-dimensional clusters of neurons derived from stem cells that self-organize into structures loosely resembling early brain tissue. (Many neuroscientists avoid the popular shorthand “mini brains” because it overstates the organoids’ complexity, but the term persists in public discussion.)
The challenge with organoids is access. A flat electrode array can only read from the bottom surface. To reach neurons throughout the volume, the team built a soft, stretchable 3D electronic mesh studded with hundreds of electrodes that conforms to the organoid’s curved surface. The mesh can both stimulate and record neural activity across the structure, providing near-complete coverage.
Using this interface, the researchers demonstrated programmable computation: the organoid-mesh system performed pattern-recognition tasks involving spatial and temporal pulse sequences. The work, published in Nature Electronics and described in a Northwestern engineering faculty report, shows that a three-dimensional biological structure can process information when paired with the right hardware.
Two problems, not yet one solution
It is tempting to merge these results into a single narrative about “brain computers,” but the two projects have not been combined. No joint experiments, shared data, or collaborative publications between the Tohoku and Northwestern teams appear in any available primary source.
The distinction between their contributions matters. Tohoku solved a software-and-wetware integration problem: how to train living neurons to produce specific outputs through a machine-learning loop. Northwestern solved a hardware-and-biology integration problem: how to build a physical interface that reads from and writes to a three-dimensional living structure with high resolution. Both capabilities are necessary for functional biocomputers, but pairing microfluidic-guided neuron networks with conformal 3D meshes remains an untested next step.
The two systems also use different biological material. Tohoku’s reservoir runs on rat cortical neurons; Northwestern’s organoids are derived from human stem cells. Whether one cell type outperforms the other for specific computational tasks is an open question. Differences in connectivity patterns, developmental timelines, and responsiveness to electrical stimulation could all shape performance, but no comparative data exists in the published literature.
The gaps that remain
Neither paper reports how long the living systems can operate before the neurons degrade. Biological cells need nutrients, waste removal, and stable temperatures. Lab demonstrations can run for hours or days, but no published scalability metrics or cost analyses from either group address whether a biocomputer could function for weeks or months. Without that data, any talk of replacing or supplementing silicon chips for real-world AI workloads is premature.
The energy-efficiency argument, frequently repeated in secondary coverage, also lacks hard numbers in these primary papers. The human brain runs on roughly 20 watts, a fraction of what a modern GPU cluster consumes. But maintaining living tissue in a lab requires pumps, incubators, and supporting electronics, all of which draw power. Until researchers publish end-to-end energy budgets that include those overhead costs, the claim that biological computing will dramatically cut power consumption remains informed speculation, not established fact.
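The arithmetic behind that caveat is simple enough to spell out. Every wattage below is a hypothetical placeholder chosen only to illustrate why the overhead matters; none of these figures come from either paper.

```python
# Illustrative end-to-end power budget for a hypothetical biocomputing rig.
# All values are assumptions for the sake of the arithmetic, not measurements.
neural_tissue_w = 0.001          # the culture itself draws almost nothing
incubator_w = 150.0              # hypothetical steady-state incubator draw
perfusion_pump_w = 10.0          # hypothetical nutrient/waste pump
recording_electronics_w = 50.0   # hypothetical stimulation/readout hardware

total_w = (neural_tissue_w + incubator_w
           + perfusion_pump_w + recording_electronics_w)
print(f"End-to-end draw: ~{total_w:.0f} W vs. ~20 W for a human brain")
```

Under these assumptions the life-support overhead, not the tissue, dominates the budget, which is exactly why per-neuron efficiency claims mean little until full-system numbers are published.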
There is also a transparency problem familiar from conventional AI. Both groups show they can drive and record neural activity well enough to perform specific tasks, but neither fully understands how the networks implement those computations internally. The systems function as black boxes: inputs go in, outputs come out, and learning algorithms tune the interface to get the desired result. That limits the ability to predict how biological systems will scale or behave under new conditions.
Finally, computing with human neural tissue raises ethical questions that existing regulatory frameworks were not built to answer. Human organoids are already used in disease modeling and drug testing, but repurposing them as information processors is a different proposition. Questions about consent, the potential for rudimentary sentience, and the moral status of organoids used primarily for computation are being raised in bioethics commentary, though no formal regulatory guidance specific to organoid-based computing has been published as of May 2026.
What the science actually supports
Stripped of hype, the verified results show that small networks of living neurons, both rat and human-derived, can be harnessed for controlled information processing when coupled to carefully designed interfaces and learning algorithms. The Tohoku team proved that cultured neurons can generate specific signal patterns under real-time feedback. The Northwestern team proved that a 3D mesh can interface with a human organoid to perform pattern recognition. Both are real, measured, peer-reviewed results.
The leap from those results to “biocomputers could replace data centers” or “living chips will power robots” is not supported by current evidence. That would require years of additional engineering, biological research, cost analysis, and regulatory work that has not yet begun in earnest.
What these two papers do establish is a proof of principle that was, until recently, theoretical. Living neurons can compute under human direction, and engineers can now build interfaces precise enough to make that computation programmable. The field has crossed from “Can it be done?” to “How far can it go?” Answering that second question is the work ahead.
*This article was researched with the help of AI, with human editors creating the final content.*