
Engineers are starting to build hardware that does not just run artificial intelligence but behaves like a primitive form of it. Instead of long chains of conventional transistors, researchers are experimenting with intelligent materials and brain-like transistors that can store information, adapt to patterns, and even learn from experience. The result is a new class of computing components that blur the line between processor and memory, and between machine and biology.

These devices are still at an early stage, but they hint at a future where tiny molecules, layered crystals, and synaptic switches could handle tasks that currently demand sprawling data centers. If they scale, they could make AI more efficient, more embedded in everyday objects, and more reminiscent of the way neurons work inside a human brain.

Why engineers want hardware that thinks like neurons

Modern AI runs on hardware that was never designed to think the way brains do. A typical processor shuttles data back and forth between separate memory and logic units, burning energy every time it moves a number from one place to another. By contrast, neurons in the cortex store and process information in the same structures, which is why the human brain can juggle perception, language, and movement while using roughly the power of a dim light bulb. As one overview notes, the brain, weighing only about three pounds, can process information thousands of times more efficiently than today’s chips.

That efficiency gap is why neuromorphic engineers are trying to make computer chips act more like brain cells. Instead of treating transistors as simple on-off switches, they are building devices whose electrical behavior changes with experience, much like synapses that strengthen or weaken as we learn. Researchers working on this approach argue that it could support prosthetic limbs that respond directly to neural activity, or displays that adapt to brain signals, by letting hardware interpret patterns in real time rather than shipping every signal to a distant server, an ambition highlighted in work on making computer chips act more like brain cells.

From simple switches to synaptic transistors

To understand how radical this shift is, it helps to remember what a conventional transistor does. In its classic form, a transistor is a controllable valve for current, turning a signal on or off depending on the voltage at its gate. Billions of these simple elements, arranged in logic gates and memory cells, underpin everything from smartphones to supercomputers. They are fast and reliable, but they do not remember past inputs unless extra circuitry is added, and they certainly do not adapt their behavior based on patterns they have seen.

Neuromorphic devices try to fold that missing adaptability into the transistor itself. One line of research has produced synaptic transistors whose conductance changes gradually, mimicking the way synapses adjust their strength. In work described as a synaptic transistor, the device achieves concurrent memory and information processing functionality, a key step toward hardware that behaves more like a neural network than a digital circuit. Instead of separating storage and computation, the same physical channel both holds a learned weight and applies it to new inputs.
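To make that idea concrete, here is a minimal software sketch of a device whose channel conductance doubles as a stored weight: reading applies the weight to an input, while programming pulses move it up or down. The class, method names, and numbers are illustrative assumptions, not a model of the published device.

```python
# Hypothetical sketch: a synaptic transistor as a compute-in-memory element.
class SynapticTransistor:
    """Channel conductance serves as both memory and multiplier."""

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g = g                        # conductance, in arbitrary units
        self.g_min, self.g_max = g_min, g_max
        self.step = step                  # change per programming pulse

    def read(self, v_in):
        """Processing: output current follows the stored weight, I = g * V."""
        return self.g * v_in

    def program(self, n_pulses):
        """Memory: positive pulses potentiate, negative pulses depress."""
        self.g = min(self.g_max, max(self.g_min, self.g + n_pulses * self.step))

syn = SynapticTransistor()
print(syn.read(1.0))   # 0.5  -> inference using the current weight
syn.program(+4)        # training strengthens the connection
print(syn.read(1.0))   # 0.7  -> the same channel now encodes a new weight
```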

An intelligent material that learns without software

The most provocative experiments go a step further, treating the material itself as a computing substrate. Rather than wiring up millions of identical transistors, researchers are exploring tiny molecules that can think, remember, and learn when they are arranged in the right structure. In one project, described as an intelligent material, the building blocks are not rigid logic gates but responsive molecules that change their state based on electrical stimuli and past activity.

In that work, the researchers emphasize that the material behaves less like a static circuit and more like a living network of connections. Instead of programming it line by line, they expose it to patterns and let its internal configuration settle into a state that reflects what it has experienced. The result is a physical system that can perform tasks such as pattern recognition or associative recall without a separate layer of software, because the computation is effectively baked into the evolving arrangement of its molecules.
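One way to picture that settling process in software is a small Hopfield-style network, a classic analogy rather than the molecular mechanism reported in the work: patterns are stored in connection weights, and a corrupted input relaxes back toward the stored memory.

```python
# Hopfield-style associative recall: an analogy for a system whose internal
# configuration settles into a state shaped by past experience.
import numpy as np

stored = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # one bipolar (+1/-1) pattern

# Hebbian learning: connections strengthen between co-active units.
W = np.outer(stored, stored).astype(float)
np.fill_diagonal(W, 0.0)                         # no self-connections

state = stored.copy()
state[[1, 4]] *= -1                              # corrupt two entries
for _ in range(5):                               # let the state settle
    state = np.sign(W @ state)

print(np.array_equal(state, stored))             # True: the pattern is recalled
```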

Graphene moiré patterns as artificial synapses

Another route to brain-like behavior uses carefully stacked crystals rather than organic molecules. In one study, researchers built a synaptic transistor from a moiré material, an asymmetric structure made up of two layers of graphene and a layer of hexagonal boron nitride, often abbreviated as hBN. By twisting and layering these sheets, they created a landscape where electrons move in complex, tunable ways, allowing the device to emulate the gradual, analog changes in synaptic strength that are essential for learning.
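For a rough sense of the length scales involved, the standard small-angle formula for a graphene-on-hBN moiré period shows how sensitively the superlattice, and with it the electron landscape, depends on twist angle. The lattice values below are textbook numbers, not figures taken from the study.

```python
# Moire superlattice period for graphene on hBN (standard approximation).
import math

a = 0.246       # graphene lattice constant, nanometers
delta = 0.018   # ~1.8 % lattice mismatch between graphene and hBN

def moire_period(theta_deg):
    """Superlattice period at twist angle theta (degrees)."""
    t = math.radians(theta_deg)
    return (1 + delta) * a / math.sqrt(2 * (1 + delta) * (1 - math.cos(t)) + delta ** 2)

for theta in (0.0, 0.5, 1.0):
    print(f"{theta:.1f} deg -> {moire_period(theta):.1f} nm")
# 0.0 deg -> 13.9 nm; the period shrinks as the twist angle grows,
# which is what makes the electron landscape tunable.
```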

The team reports that this moiré-based device behaves like a synapse for neuromorphic computing, adjusting its conductance in response to voltage pulses in a way that resembles how neurons encode experience. Because the structure is defined at the atomic scale, it offers a path to extremely dense networks of artificial synapses that could be integrated directly onto chips. The work, detailed in Nature, suggests that layered graphene and hBN could become a platform for hardware neural networks that are both compact and energy efficient.
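The gradual, analog character of those conductance changes can be mimicked with a simple saturating update rule. The constants below are assumptions chosen to resemble typical potentiation and depression curves, not measured data from the device.

```python
# Hypothetical analog weight updates: conductance moves in small,
# history-dependent steps rather than flipping between two levels.
g, g_max, alpha = 0.1, 1.0, 0.1

def potentiate(g):
    return g + alpha * (g_max - g)   # each pulse nudges g up, saturating

def depress(g):
    return g - alpha * g             # each pulse pulls g back toward zero

for _ in range(10):                  # a train of potentiating voltage pulses
    g = potentiate(g)
print(f"after potentiation: {g:.2f}")   # ~0.69, a gradual analog increase

for _ in range(5):                   # followed by depressing pulses
    g = depress(g)
print(f"after depression: {g:.2f}")     # ~0.41, a graded partial decrease
```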

Learning transistors with built-in memory

Some of the most concrete demonstrations of brain-like behavior come from devices that explicitly combine learning and memory in a single transistor. At Linköping University, researchers developed a learning transistor equipped with both short-term and long-term memory. They describe the work as a major step on the road toward a computer that mimics the human brain, because the same component can respond quickly to new inputs while also retaining a more durable record of past activity.

In that project, the team emphasizes that, until now, brains have been the only systems where learning and memory are so tightly intertwined, while conventional electronics have kept them separate. Their transistor changes its output depending on the history of its inputs, and under some conditions that output grows as the device is trained. This behavior, documented in the learning transistor study, shows how a single nanoscale element can embody both rapid adaptation and longer-term storage, much like a biological synapse that exhibits both short-term facilitation and long-term potentiation.
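A toy two-timescale model captures the flavor of that behavior: a fast-fading short-term trace plus a slowly accumulating long-term one. The class and its parameters are illustrative assumptions, not values from the Linköping device.

```python
# Hypothetical model: one element with both short- and long-term memory.
class LearningTransistor:
    def __init__(self):
        self.short = 0.0          # short-term trace, fades between inputs
        self.long = 0.0           # long-term trace, accumulates with training

    def pulse(self):
        self.short = 0.5 * self.short + 0.3   # refreshed but volatile
        self.long += 0.02                     # small, durable increment
        return self.short + self.long         # output depends on input history

    def rest(self, steps):
        for _ in range(steps):
            self.short *= 0.5                 # fades when input stops

device = LearningTransistor()
for _ in range(10):
    out = device.pulse()
print(f"after training: {out:.2f}")           # output has grown with repetition

device.rest(10)
print(f"after a pause: {device.short + device.long:.2f}")  # durable trace remains
```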

Associative intelligence on a single device

Other teams have pushed these ideas into more explicitly cognitive territory, building transistors that can perform associative learning tasks usually reserved for software neural networks. With one brain-like transistor, researchers showed that the device could carry out energy-efficient associative learning at room temperature, a milestone because earlier devices could only operate under more constrained conditions. They describe how the transistor performs this learning by adjusting its internal state in response to input patterns, whereas previous designs lacked that flexibility or required extreme environments.

A related experiment, described in a separate report, walks through a simple but telling test. First the scientists showed the device one pattern, 000, three zeros in a row, and trained it to recognize that sequence. Then they asked the device to identify similar patterns, and it responded in a way that indicated it had generalized from its training rather than memorizing a single example. This behavior, documented in the brain-like transistor report and expanded in a companion description of how the transistor mimics human intelligence, shows that a single device can support higher-level operations such as pattern completion and association, not just raw signal amplification.
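That test translates naturally into a small similarity-based sketch. The graded readout below is an assumed stand-in for the device physics, not the team's actual measurement.

```python
# Hypothetical readout: train on one bit pattern, then score related patterns.
import numpy as np

def encode(bits):
    """Map '0'/'1' characters to bipolar values -1/+1."""
    return np.array([1 if b == "1" else -1 for b in bits])

trained = encode("000")   # the pattern shown during training

def response(pattern):
    """Graded output: how closely a pattern matches the trained one."""
    return float(np.dot(trained, encode(pattern)) / len(trained))

for p in ("000", "001", "010", "111"):
    print(p, f"{response(p):+.2f}")
# 000 +1.00, 001 +0.33, 010 +0.33, 111 -1.00: the response degrades
# gracefully with similarity instead of flipping between match and no-match.
```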

Energy efficiency and the race beyond data centers

All of these experiments share a common motivation, which is to cut the energy cost of intelligence. Training and running large AI models in data centers consumes significant power, and as applications spread into cars, medical devices, and consumer electronics, that burden only grows. By moving learning and inference into hardware that behaves like synapses, engineers hope to shrink both the energy budget and the physical footprint of AI, making it possible to embed sophisticated capabilities in places where a full server rack would never fit.

One discussion of this trend highlights a new synaptic transistor that is explicitly described as energy efficient and brain-like, capable of higher-level functions while operating at room temperature. The work is framed as taking inspiration from the human brain, and it has drawn attention in a community focused on long-term human enhancement, where the energy-efficient brain-like transistor is seen as a step toward more integrated human-machine systems. If such devices can deliver associative learning and pattern recognition at a fraction of the power cost of current chips, they could enable always-on sensing, adaptive implants, and autonomous robots that do not depend on cloud connectivity.

What intelligent materials could mean for everyday technology

For now, these intelligent materials and synaptic transistors live mostly in laboratories, but their potential applications are easy to imagine. A smartphone that uses a network of learning transistors could adapt its power use and interface to a user’s habits without constant software updates. A prosthetic hand wired to a neuromorphic chip might interpret nerve signals more fluidly, because the hardware itself is tuned to the statistics of the wearer’s movements. Even home appliances could gain a kind of quiet intuition, adjusting to patterns of use in ways that feel less like scripted automation and more like familiarity.

At the same time, the shift from rigid logic to adaptive matter raises new questions. When a material can change its behavior based on experience, debugging and verification become more complex, because there is no single fixed circuit to inspect. Safety-critical systems, from medical implants to autonomous vehicles, will need ways to constrain and monitor learning at the hardware level. As researchers refine intelligent materials like these adaptive molecular systems and scale up moiré-based synapses, the challenge will be to harness their brain-like flexibility without inheriting the brain’s unpredictability.
