When you close your eyes and picture a familiar face, your brain does not conjure the image from scratch. According to research from the California Institute of Technology circulating in spring 2026, many of the same individual neurons that fire when you actually look at an object also fire when you merely imagine it, offering some of the most direct cellular evidence yet that seeing and imagining share the same biological machinery.
The study, led by neuroscientists at Caltech and posted to PubMed Central through the NIH preprint pilot program, recorded activity from individual neurons in the ventral temporal cortex, a strip of brain tissue critical for recognizing objects and faces. About 80 percent of the visually responsive neurons the team tracked followed the same organized firing pattern, called an axis code, whether a patient was looking at a picture or imagining one with nothing on the screen.
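The "axis code" idea can be illustrated with a toy linear model. This is a sketch for intuition only, not the study's actual analysis: every number is synthetic, and each neuron's firing rate is modeled as the projection of an object's feature vector onto that neuron's preferred axis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative-only sketch of an "axis code"; all numbers are synthetic.
# Each neuron's firing rate is modeled as the projection of an object's
# feature vector onto that neuron's preferred axis in feature space.
n_features, n_neurons = 8, 40
axes = rng.normal(size=(n_neurons, n_features))  # one preferred axis per neuron
features = rng.normal(size=n_features)           # feature vector of one object

seen = axes @ features      # population response while viewing the object
imagined = 0.5 * seen       # same pattern at lower intensity during imagery

# Because the code is linear, a least-squares decoder fitted to the axes
# reads the object's features back out of either response; the imagined
# response yields the same features, merely scaled down.
decoded_seen = np.linalg.pinv(axes) @ seen
decoded_imagined = np.linalg.pinv(axes) @ imagined
print(np.allclose(decoded_seen, features),
      np.allclose(decoded_imagined, 0.5 * features))
```

In this simplified picture, weaker firing during imagery changes the magnitude of the population response but not its direction, which is why the same decoder works in both conditions.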
“We found that the same neurons that represent objects during perception also represent them during imagery, and they do so using the same coding scheme,” said Rodrigo Quian Quiroga, a neuroscientist at Caltech and one of the study’s senior authors, in the university’s summary of the work.
Recording from neurons one at a time
The recordings came from epilepsy patients who already had electrodes surgically implanted so doctors could locate the source of their seizures. That clinical setup gave the Caltech team something almost impossible to get any other way: direct access to single neurons deep inside a region of the brain that standard imaging tools like fMRI can only measure in broad strokes.
During the experiment, patients viewed images of everyday objects while researchers logged which neurons responded and how strongly. In separate blocks, the patients were cued to imagine those same objects with the screen blank. By comparing the two conditions neuron by neuron, the team could ask a precise question: does the code the brain uses to represent a coffee mug or a face survive when the object is no longer physically present?
For roughly four out of five visually responsive cells, the answer was yes. The distributed pattern of activity that distinguished one object category from another during perception reappeared during imagination, though at a lower intensity.
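The neuron-by-neuron comparison can be sketched in a few lines. Everything below is hypothetical: the neuron counts, firing rates, and noise levels are invented to mimic the qualitative result the study reports, that the imagery pattern matches the perception pattern at lower intensity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration: firing rates of 50 neurons for
# 5 object categories, once with the object on screen ("perception") and
# once with a blank screen ("imagery").
n_categories, n_neurons = 5, 50
perception = rng.normal(loc=10.0, scale=3.0, size=(n_categories, n_neurons))

# Imagery reuses the perception pattern at half strength plus noise --
# the qualitative structure the study reports.
imagery = 0.5 * perception + rng.normal(scale=0.5, size=perception.shape)

# Correlate the population response vectors across conditions for each
# category; a high correlation means the code survives even though the
# overall firing is weaker.
rs = [np.corrcoef(perception[c], imagery[c])[0, 1] for c in range(n_categories)]
for c, r in enumerate(rs):
    print(f"category {c}: pattern correlation r = {r:.2f}")
```

The key point is that the comparison is made at the level of the whole population vector, not single cells in isolation: it is the pattern across neurons that identifies the object.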
Building on decades of work
The idea that perception and imagery overlap in the brain is not new. A landmark 2000 study published in Nature first documented individual neurons in the human medial temporal lobe that responded during deliberate imagination. That earlier work established the principle; the new Caltech research extends it in important ways.
The current study focuses on a different brain region, the ventral temporal cortex, which sits at the heart of the brain’s object-recognition pipeline. It also moves beyond cataloging a handful of striking “imagery cells” to analyzing how an entire population of neurons encodes information. The shift from isolated examples to a population-level coding framework, where roughly 80 percent of cells share the same representational geometry for seen and imagined objects, marks a meaningful advance in how scientists understand mental imagery at the cellular level.
A related doctoral thesis from the same Caltech research program adds nuance. Not every neuron behaves the same way: some cells reactivate for both perception and imagery, while a smaller subset responds only during imagination, hinting that the brain may also maintain dedicated circuitry for internally generated images. Any complete account of mental imagery will need to explain both populations.
What the findings do not yet explain
Several caveats temper the conclusions. The manuscript has been revised multiple times (it is currently on version 4) but has not yet completed formal journal peer review. Until it does, the reported statistics and interpretations should be considered provisional, even though the underlying methods appear rigorous and the NIH preprint pilot listing provides a degree of institutional vetting.
The patient population also limits generalizability. People undergoing epilepsy surgery may have atypical brain organization due to years of seizure activity or medication. This is a well-known constraint across all invasive human neuroscience, not a flaw unique to this study, but it means the 80 percent figure cannot be assumed to hold in healthy adults without confirmation from other methods or clinical contexts.
Perhaps the most intriguing open question involves the intensity gap. Neurons fired less vigorously during imagery than during actual viewing. That difference raises a natural follow-up: what determines how vivid a person’s mental pictures are? Some people report almost no capacity for voluntary visual imagery, a condition known as aphantasia, while others describe images nearly as sharp as real sight. The current data do not explain that variation, and the study did not include systematic measures linking subjective vividness to specific firing rates. Whether reduced firing in shared neurons actually accounts for conditions like aphantasia remains an open and untested hypothesis rather than a conclusion supported by this research.
The experiments also used static images of objects under controlled laboratory conditions. Whether the same shared-code principle applies to more complex mental experiences, such as replaying a memory of a moving scene, imagining music, or visualizing something never seen before, remains untested.
Why it matters beyond the lab
If the brain genuinely recycles its visual processing hardware to build mental images, the implications could reach well beyond basic neuroscience. Engineers developing brain-computer interfaces, for example, might be able to decode imagined objects from neural activity using the same algorithms trained on perception data, a shortcut that would be impossible if imagination relied on entirely separate circuits.
Some researchers have speculated that understanding shared visual neurons could eventually shed light on clinical conditions where imagery is absent or intrusive, from aphantasia to the vivid flashbacks associated with post-traumatic stress. However, the Caltech study did not examine either condition, and any link between its findings and those clinical phenomena remains hypothetical. Single-neuron data of this kind does provide a more precise foundation than earlier imaging studies that could only measure aggregate activity across millions of cells, but translating that precision into clinical insight will require dedicated research in those specific populations.
The Caltech summary of the research frames the results in accessible terms: the brain does not maintain entirely separate circuits for real vision and imagined vision. Instead, the same population of neurons encodes object identity regardless of whether the object is physically present. Imagining a face is not merely “thinking about” a face; it is partially re-running the recognition system that would respond if the face were actually in view.
What the intensity gap leaves open
The gap between how strongly these shared neurons fire during perception and during imagery may be the most productive thread for future research. If the same cells encode both real and imagined objects but at different intensities, the mechanisms that modulate that intensity, such as attention, feedback from frontal brain regions, or fluctuations in neurochemical state, could hold clues to why some mental images feel crisp and controllable while others are faint or fleeting.
Answering that question will likely require pairing single-neuron recordings with richer behavioral measures and, ideally, tracking how imagery and its neural signatures change over time within individuals. For now, the Caltech work provides a detailed snapshot: when we picture an object in our mind’s eye, many of the very neurons that would respond to actually seeing it are quietly firing along the same axes, sketching an internal version of the outside world.
*This article was researched with the help of AI, with human editors creating the final content.