For more than a decade, neuroscientists had recordings from the brains of monkeys learning to sort visual patterns. They had analyzed the data, published landmark papers, and moved on. Then a computer model built at MIT told them they had missed something.
In a study published in Nature Communications in May 2025, researchers describe a biologically detailed simulation of the brain circuit connecting the prefrontal cortex and the striatum. When they ran it through a learning task originally designed for macaques, the model learned the task and matched the animals' behavior without being fitted to their data. More striking: it predicted a population of neurons that fire hardest right before an error, a pattern no one had identified in the real recordings. When the team went back and checked, the neurons were there all along.
A model wired like a brain, not trained like a chatbot
Most AI models used in neuroscience today are deep-learning systems trained on massive datasets. They can match brain activity with impressive accuracy, but their internal wiring bears little resemblance to actual neural circuits. Approaches such as recurrent neural networks trained on cognitive tasks, or large-scale spiking network simulators like the Human Brain Project’s models, have advanced the field but typically prioritize data fitting over architectural fidelity to specific circuits. The MIT team took a different path.
Their model simulates micro-assemblies of excitatory and inhibitory neurons, uses dopamine-driven learning rules, and respects realistic synaptic timing. It produces spiking activity and local field potential signals, the same electrical outputs that neuroscientists measure with electrodes implanted in living brains. The architecture was not reverse-engineered from data: it was built from the known biology of the corticostriatal circuit. The same brain regions studied in the macaque experiments informed the model's design, but the model was never numerically fitted to the animals' recorded activity.
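To make those ingredients concrete, here is a minimal sketch of the general recipe: leaky integrate-and-fire neurons in an excitatory-inhibitory assembly, with a three-factor rule in which a Hebbian eligibility trace becomes a weight change only when a dopamine signal arrives. Every name, size, and constant below is an illustrative toy value, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative micro-assembly: 80 excitatory and 20 inhibitory
# leaky integrate-and-fire neurons.
N_E, N_I = 80, 20
N = N_E + N_I
dt, tau_m = 1.0, 20.0          # time step and membrane constant, ms
v_th, v_reset = 1.0, 0.0       # spike threshold and reset, arbitrary units

# Sparse random weights; columns from inhibitory cells are negative.
W = rng.normal(0.0, 0.04, (N, N)) * (rng.random((N, N)) < 0.1)
W[:, N_E:] = -np.abs(W[:, N_E:])

v = np.zeros(N)
elig = np.zeros((N, N))        # eligibility trace for the three-factor rule
tau_e, lr = 200.0, 0.01

def step(ext_input, dopamine):
    """One 1-ms Euler step: integrate, spike, reset, dopamine-gated plasticity."""
    global v, elig
    spikes = v >= v_th
    s = spikes.astype(float)
    v[spikes] = v_reset
    v += dt / tau_m * (-v + W @ s + ext_input)
    # Hebbian coincidences decay into an eligibility trace...
    elig = (1.0 - dt / tau_e) * elig + np.outer(s, s)
    # ...and become lasting weight changes only when dopamine arrives.
    W[:, :N_E] += lr * dopamine * elig[:, :N_E]
    return s

# Crude LFP proxy: summed synaptic drive onto the excitatory population.
lfp = []
for t in range(1000):
    s = step(ext_input=rng.normal(1.0, 0.5, N), dopamine=0.0)
    lfp.append((W @ s)[:N_E].sum())
```

A real model of this kind would route dopamine from a simulated reward signal and read category choices out of the striatal population; the point here is only the shape of the loop.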
The task it learned came from a well-established experiment. In the original study, macaques watched clouds of moving dots and learned to assign them to one of two categories by shifting their gaze left or right, a paradigm described in a 2011 Neuron paper. Electrodes in the lateral prefrontal cortex and dorsal striatum captured how individual neurons changed their firing as the animals gradually figured out the rule. A follow-up study, also in Neuron, showed that beta-band synchronization between those two regions increased as learning progressed, reflecting growing coordination.
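For readers who want the trial structure explicit, a toy generator for this kind of paradigm might look like the following. The category boundary, direction range, and per-dot jitter are illustrative values, not the parameters of the macaque experiment.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_trial(boundary_deg=0.0):
    """Generate one dot-motion categorization trial (illustrative parameters)."""
    direction = rng.uniform(-90.0, 90.0)              # coherent motion direction
    category = "saccade_left" if direction < boundary_deg else "saccade_right"
    dot_dirs = rng.normal(direction, 15.0, size=200)  # per-dot motion angles
    return dot_dirs, category

stimulus, correct_response = make_trial()
```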
The biomimetic model replicated both signatures spontaneously. Its learning curves tracked the animals’ behavioral improvement. Its simulated cortical and striatal populations developed category-selective firing. And it generated the same rise in beta-band oscillatory synchrony. None of this was programmed in. It emerged from the biological design constraints and the structure of the task.
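Synchrony of this kind is commonly quantified as spectral coherence between simultaneously recorded signals. A minimal sketch with SciPy, using synthetic stand-ins for the prefrontal and striatal LFPs (the shared 20 Hz component and noise levels are arbitrary):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 1000.0, 10_000            # sampling rate (Hz) and sample count

# Two signals sharing a 20 Hz (beta-band) component plus independent noise,
# standing in for prefrontal and striatal LFP traces.
shared = np.sin(2 * np.pi * 20 * np.arange(n) / fs)
lfp_pfc = shared + rng.normal(0.0, 1.0, n)
lfp_str = shared + rng.normal(0.0, 1.0, n)

f, cxy = coherence(lfp_pfc, lfp_str, fs=fs, nperseg=1024)
beta = (f >= 13) & (f <= 30)      # conventional beta band, 13-30 Hz
print(f"mean beta-band coherence: {cxy[beta].mean():.3f}")
```

Tracking how that band-averaged coherence changes across learning is one standard way to show the rising coordination both the animals and the model exhibit.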
The neurons that encode “I’m about to get this wrong”
The most consequential finding was something the model produced that nobody expected. A distinct cluster of simulated neurons fired most strongly when the model was about to miscategorize a stimulus. The researchers called them “incongruent” neurons, cells that essentially encode a prediction of failure rather than success.
This pattern had never been reported in the original macaque studies. But when the MIT team, led by senior author Earl Miller, returned to the existing animal recordings with this specific prediction, they found the same incongruent neurons in the real data.
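The paper's analysis pipeline isn't reproduced here, but a re-analysis of this kind reduces to a simple question per neuron: is firing in a pre-outcome window reliably higher on error trials than on correct ones? A sketch, with the statistical test and threshold as illustrative choices:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def incongruent_neurons(rates, correct, alpha=0.01):
    """Flag neurons that fire more before errors than before correct trials.

    rates   : (n_trials, n_neurons) firing rates in a pre-outcome window
    correct : (n_trials,) boolean, True where the trial was categorized correctly
    Returns a boolean mask over neurons.
    """
    err, ok = rates[~correct], rates[correct]
    flagged = np.zeros(rates.shape[1], dtype=bool)
    for n in range(rates.shape[1]):
        # One-sided test: higher pre-outcome firing on error trials.
        stat, p = mannwhitneyu(err[:, n], ok[:, n], alternative="greater")
        flagged[n] = p < alpha
    return flagged

# Synthetic check: neuron 0 is "incongruent" by construction.
rng = np.random.default_rng(3)
correct = rng.random(400) > 0.3
rates = rng.poisson(5.0, (400, 50)).astype(float)
rates[~correct, 0] += 4.0          # boost pre-error firing for neuron 0
print(np.flatnonzero(incongruent_neurons(rates, correct)))  # neuron 0 flagged
```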
According to a summary from MIT’s Picower Institute for Learning and Memory, Miller noted that these error-predictive signals had gone unnoticed through more than a decade of analysis. The neurons were always present in the recordings. No one had looked for them because no theory predicted they should exist.
That sequence matters scientifically. The model did not just reproduce known results. It generated a novel, testable hypothesis, and the hypothesis held up against real neural data collected years earlier.
Grace Lindsay, a computational neuroscientist at New York University who was not involved in the study, said the work illustrates a growing recognition in the field that biologically constrained models can do more than summarize data. “The value of a model is not just in fitting what you already know but in telling you where to look next,” Lindsay told MIT News in a related discussion of biomimetic modeling approaches. Independent researchers have not yet published formal commentary on the incongruent-neuron finding specifically, so the broader community’s assessment is still taking shape.
What has not been settled
The confirmation is striking, but it comes with important caveats. The re-analysis that found incongruent neurons relied on recordings from the original experiments. No new animal studies have been conducted to replicate the finding with fresh data or different task designs. The validation, in other words, draws on the same experiments whose design and brain regions shaped the model, even though the specific error-related pattern was not used to build or tune the simulation.
The functional role of these neurons is also unclear. The model shows they exist and that they track with incorrect categorization, but whether they actively drive error correction or simply reflect computations happening elsewhere in the circuit has not been established. Causal experiments, such as selectively silencing these neurons and measuring the effect on learning, have not been performed.
There is also the question of generality. The current evidence comes from one type of task: binary categorization of visual motion stimuli. Whether similar error-predictive cells appear during probabilistic reward learning, motor habit formation, or abstract rule switching is unknown. The balance of incongruent neurons across prefrontal and striatal populations, and how their activity interacts with dopamine signaling, has not been mapped in living animals.
And the model itself has been validated against only one experimental paradigm. The corticostriatal circuit it simulates is involved in many forms of learning, from habits to reward-based decisions. Whether the same architecture can predict novel phenomena in other contexts is an open empirical question that will require testing against new recordings from other laboratories.
Why the method may matter more than the finding
Beyond the incongruent neurons themselves, the study offers a proof of concept for a different way of doing computational neuroscience. The standard approach uses flexible AI models to fit brain data after the fact. The MIT team’s approach flips that: build a model from biological principles first, let it run, and see what it predicts that you have not already found.
If this strategy works repeatedly, it could open a feedback loop between simulation and experiment. Old datasets, some sitting in lab archives for years, might be revisited with model-driven questions. Signals related to learning, attention, or disease that were previously averaged away or dismissed as noise could surface.
That is a big “if.” One successful prediction does not establish a track record. But the logic is compelling: a model constrained by real anatomy and physiology generates hypotheses that are specific enough to be surprising and testable enough to be wrong. In this case, it was right, and it pointed neuroscientists toward something hiding in plain sight in their own data.
What replication would need to look like
The strength of the incongruent-neuron claim will ultimately depend on independent verification: new recordings in different animals, different tasks, and ideally different laboratories. Equally important is whether the broader strategy of using biomimetic models as discovery engines can produce additional confirmed predictions. For now, the MIT work suggests that building AI to think less like a machine and more like a brain may be one productive path toward understanding what brains are actually doing.
*This article was researched with the help of AI, with human editors creating the final content.