A human infant is born with roughly twice as many synapses as it will eventually need. Over the first few years of life, the brain aggressively trims the weakest connections, keeping only those that prove useful. That biological editing process, known as synaptic pruning, is now inspiring a wave of AI research aimed at solving one of the field’s most stubborn practical problems: neural networks that are too large, too power-hungry, and too rigid to deploy where they are needed most.
The latest entry is TD-MCL, short for Temporal Development Mechanism for Continual Learning. Published in April 2026 in National Science Review by Oxford University Press, the peer-reviewed study introduces a framework for spiking neural networks (SNNs) that mimics how the developing brain both grows and prunes its own wiring. The results suggest that copying biology’s playbook can yield AI models that learn new tasks in sequence without forgetting old ones, all while staying compact enough to run on resource-constrained hardware.
How TD-MCL borrows from brain development
Spiking neural networks already operate more like biological brains than conventional deep learning models do. Instead of passing continuous values between layers, SNNs communicate through discrete electrical pulses that mimic the firing patterns of real neurons. TD-MCL takes that biological resemblance a step further by building in two developmental phases that mirror what happens in a young brain.
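To make the contrast concrete, here is a minimal leaky integrate-and-fire neuron, the textbook building block of SNNs. The threshold and decay values are arbitrary, and actual SNN models (including TD-MCL's) are considerably richer; this sketch only shows how discrete spikes replace continuous activations:

```python
def lif_neuron(input_current, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks a
    little each step, accumulates incoming current, and emits a binary
    spike when it crosses the threshold, then resets.
    Parameter values are illustrative, not taken from any paper."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = decay * potential + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # a discrete pulse, not a continuous value
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still fires periodically as charge builds up.
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The key point is the output type: downstream neurons see only the timing of those 0/1 events, which is what makes SNNs cheap to run on event-driven hardware.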
In the first phase, the network dynamically grows long-range excitatory connections between neurons as it encounters each new task, expanding its capacity to represent fresh information. In the second phase, feedback-guided local inhibition selectively weakens or removes connections that contribute little to performance. Think of it as the network asking itself, after each round of learning, “Which of these new wires actually helped?” and snipping the ones that did not.
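The grow-then-prune cycle can be sketched with a simple connection mask. Everything here is a stand-in: random growth replaces TD-MCL's long-range excitatory growth rule, and a random score array replaces its feedback-guided inhibition signal; the paper's actual mechanisms are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow(mask, n_new):
    """Growth phase: activate n_new currently-unused connections to
    expand capacity for a new task. (Random choice is a placeholder
    for a learned growth rule.)"""
    mask = mask.copy()
    inactive = np.flatnonzero(mask.ravel() == 0)
    chosen = rng.choice(inactive, size=min(n_new, inactive.size), replace=False)
    mask.ravel()[chosen] = 1
    return mask

def prune(mask, scores, n_drop):
    """Pruning phase: deactivate the n_drop active connections with the
    lowest contribution scores -- a placeholder for feedback-guided
    local inhibition."""
    mask = mask.copy()
    active = np.flatnonzero(mask.ravel() == 1)
    order = active[np.argsort(scores.ravel()[active])]
    mask.ravel()[order[:n_drop]] = 0
    return mask

mask = np.zeros((4, 4), dtype=int)
mask = grow(mask, n_new=8)            # learn a new task with extra wiring
scores = rng.random(mask.shape)       # stand-in for feedback-derived usefulness
mask = prune(mask, scores, n_drop=3)  # snip the least useful new wires
print(mask.sum())                     # 5 connections survive the cycle
```

Repeating this cycle once per task is what lets capacity expand for new information while the overall network stays sparse.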
This grow-then-prune cycle directly targets a long-standing problem in AI called catastrophic forgetting, where a model trained on a new task overwrites what it learned from previous ones. In benchmark experiments reported in the National Science Review paper, TD-MCL maintained accuracy across sequential task sets more reliably than baseline methods, while keeping the network noticeably sparse.
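Catastrophic forgetting is easy to reproduce even in a one-parameter toy model: fit it to one "task," then train it on a second with no safeguards, and performance on the first collapses.

```python
def train(w, target, steps=100, lr=0.1):
    """Gradient descent on the loss (w - target)**2 for a
    one-parameter 'model' -- a toy stand-in for fitting one task."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

def loss(w, target):
    return (w - target) ** 2

w = train(0.0, target=2.0)        # learn task A
loss_a_before = loss(w, 2.0)      # near zero: task A is mastered
w = train(w, target=-1.0)         # now learn task B, starting from A's solution
loss_a_after = loss(w, 2.0)       # near 9: task A has been overwritten
print(loss_a_before < 1e-6, loss_a_after > 8)  # → True True
```

Continual-learning methods like TD-MCL are, in essence, different strategies for keeping that second training run from erasing the first.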
An earlier preprint on arXiv provides the first public technical disclosure of TD-MCL’s core ideas. Comparing the preprint with the published journal article lets researchers trace how the methodology and evaluation metrics were refined through peer review.
A broader movement, not a single lab
TD-MCL is not working in isolation. Several other research groups are pursuing brain-inspired pruning from different angles, and together they are building a case that developmental biology offers practical engineering lessons for AI.
One parallel effort, called DPAP (Developmental Plasticity-inspired Adaptive Pruning), applies a “use it or lose it” rule drawn directly from neuroscience. Detailed in a preprint on arXiv, DPAP monitors how frequently each connection fires during training and gradually eliminates the inactive ones. Its authors report substantial compression ratios and, on some classification benchmarks, accuracy that matches or exceeds the original dense models. The method works on both spiking and conventional neural networks, broadening its potential reach.
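An activity-based "use it or lose it" rule can be sketched in a few lines. The scoring function below (weight magnitude times presynaptic firing count) is a simplified illustration, not DPAP's actual plasticity model, and the keep ratio is arbitrary:

```python
import numpy as np

def activity_prune(weights, pre_spikes, keep_ratio=0.5):
    """'Use it or lose it': score each connection by how often its
    presynaptic neuron fired during training, weighted by connection
    strength, then zero out everything below the cutoff.
    A simplified sketch, not DPAP's exact rule."""
    activity = pre_spikes.sum(axis=0)             # spikes per input neuron
    scores = np.abs(weights) * activity[:, None]  # strength x usage
    k = int(scores.size * keep_ratio)
    cutoff = np.sort(scores.ravel())[::-1][k - 1]
    return np.where(scores >= cutoff, weights, 0.0)

rng = np.random.default_rng(1)
weights = rng.normal(size=(6, 4))
pre_spikes = rng.integers(0, 2, size=(100, 6))    # recorded binary spike trains
pruned = activity_prune(weights, pre_spikes)
print(np.count_nonzero(pruned))                   # half the connections remain
```

Because the criterion only needs spike counts, not gradients, it applies equally to spiking and conventional networks, which is part of DPAP's appeal.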
Other teams are tailoring pruning to specific, high-stakes applications. One group developed adaptive pruning methods for intracortical brain-computer interfaces, where an implanted chip must decode neural signals on an extremely tight power budget. By cutting neurons and synapses that contributed little to decoding accuracy, the researchers reported lower computational cost and energy consumption in simulated implant scenarios. For patients who depend on these devices, smaller and more efficient models could translate directly into longer battery life and less heat generated inside the skull.
A framework called SpikeNM takes a more hardware-conscious approach. Described in a preprint first posted in November 2025, SpikeNM uses semi-structured sparsity patterns that align with how modern chip accelerators actually execute matrix math. It also incorporates neuroscience-inspired techniques to decide which synapses to preserve, trying to balance biological plausibility with the practical realities of silicon.
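Semi-structured sparsity usually means an N:M pattern, for example 2:4, where every group of four consecutive weights keeps at most two nonzeros, a layout that GPU sparse tensor cores can execute efficiently. The magnitude-based selection below illustrates the generic pattern only; it is not SpikeNM's neuroscience-inspired synapse-selection rule:

```python
import numpy as np

def two_four_sparsify(weights):
    """Enforce 2:4 semi-structured sparsity: in every contiguous group
    of four weights, keep the two with the largest magnitude and zero
    the rest. Magnitude is a common, generic criterion -- SpikeNM's
    actual selection rule differs."""
    w = weights.reshape(-1, 4)                   # groups of four
    keep = np.argsort(np.abs(w), axis=1)[:, 2:]  # indices of top-2 per group
    mask = np.zeros_like(w)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return (w * mask).reshape(weights.shape)

w = np.array([[0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.1, 0.8]])
print(two_four_sparsify(w))
# → [[ 0.9  0.   0.4  0.  -0.7  0.   0.   0.8]]
```

The appeal of the fixed 2:4 budget is that hardware can skip the zeros without any irregular bookkeeping, unlike unstructured pruning, where nonzeros land anywhere.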
Meanwhile, a peer-reviewed paper in Machine Learning (Springer) demonstrates that sparsity does not have to be imposed after training. Using Hebbian-style learning rules (named after the psychologist Donald Hebb) combined with evolutionary selection, the researchers trained networks that were highly sparse from the start yet competitive with their dense counterparts. Rather than building a large model and cutting it down, this method encourages only the most useful connections to form in the first place.
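The Hebbian idea, neurons that fire together wire together, can be sketched as an accumulation rule: a potential connection gains strength in proportion to correlated pre- and post-synaptic activity, and only materializes if that strength crosses a threshold. Thresholded formation is an illustrative simplification here; the paper couples Hebbian updates with evolutionary selection:

```python
import numpy as np

def hebbian_sparse(pre, post, lr=0.1, form_threshold=0.5):
    """Sparse-from-the-start sketch: candidate connections accumulate
    strength via the Hebbian update delta_w = lr * pre * post, and only
    those crossing form_threshold are instantiated at all.
    The threshold rule is illustrative, not the paper's mechanism."""
    strength = np.zeros((pre.shape[1], post.shape[1]))
    for x, y in zip(pre, post):
        strength += lr * np.outer(x, y)  # fire together -> wire together
    return np.where(strength >= form_threshold, strength, 0.0)

# Two input neurons; only the first fires consistently with the output.
pre = np.array([[1, 0], [1, 1], [1, 0], [1, 0], [1, 1], [1, 0]], dtype=float)
post = np.ones((6, 1))
w = hebbian_sparse(pre, post)
print(w)  # only the consistently co-active connection ever forms
```

The second connection never accumulates enough correlated activity, so it is never created, which is the "sparse from the start" property in miniature.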
What has not been proven yet
For all the encouraging results, important gaps remain. The sources reviewed here do not confirm that TD-MCL’s code or datasets have been publicly released. Without open implementations, independent teams will have a harder time replicating the reported gains, and subtle implementation choices could affect whether the results hold up under different conditions.
Direct comparisons between frameworks are also missing. TD-MCL’s feedback-guided pruning and SpikeNM’s hardware-aligned sparsity have each been tested on their own benchmarks, with different datasets, architectures, and evaluation protocols. No published study has yet put them side by side under identical conditions, making it difficult to say which strategy offers the best trade-off between accuracy, compression, and real-world speed. Hybrid approaches that combine developmental feedback with hardware-friendly structure remain unexplored in the literature surveyed.
Commercial and clinical timelines are unclear across the board. None of the TD-MCL sources mention industry partnerships or product roadmaps. The brain-computer interface work stops at simulation; moving to real implanted devices will require robustness testing, safety validation, and co-design with neuromorphic chips, platforms specifically built to run spiking networks efficiently. None of those integration steps have been publicly documented.
There is also an open scientific question about which pruning signal works best. Activity-based criteria, like those used in a spiking-activity pruning framework, rely on how often neurons fire. TD-MCL instead emphasizes feedback tied to task performance. Both reduce model size and aim to preserve accuracy, but they may behave very differently on tasks with heavy noise, class imbalance, or complex temporal patterns. Systematic comparisons across diverse benchmarks and hardware backends have not yet appeared.
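The difference between the two pruning signals is easy to see side by side. The feedback score below uses |w · dL/dw|, a common first-order saliency proxy; TD-MCL's actual feedback signal may differ, and all the numbers are invented for illustration:

```python
import numpy as np

def activity_score(weights, firing_rates):
    """Activity criterion: usefulness ~ how often the presynaptic
    neuron fires, regardless of the task loss."""
    return np.abs(weights) * firing_rates[:, None]

def feedback_score(weights, grads):
    """Performance criterion: usefulness ~ first-order effect of the
    weight on the task loss (|w * dL/dw|, a generic saliency proxy)."""
    return np.abs(weights * grads)

weights = np.array([[2.0, 0.1], [0.1, 2.0]])
firing_rates = np.array([10.0, 0.5])           # neuron 0 fires often, neuron 1 rarely
grads = np.array([[0.01, 0.5], [0.5, 0.01]])   # loss barely depends on neuron 0's big weight

print(activity_score(weights, firing_rates))   # ranks neuron 0's connections highest
print(feedback_score(weights, grads))          # ranks the opposite connections highest
```

A busy connection can matter little to the loss, and a quiet one can matter a lot, which is why the two criteria can prune very different networks from the same starting point.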
Sorting peer review from preprints
Readers evaluating these claims should note a key distinction in the evidence base. The TD-MCL study in National Science Review and the Hebbian sparse-training paper in Machine Learning have both passed formal peer review, meaning independent experts scrutinized their methods and conclusions before publication. That does not guarantee every detail is correct, but it does add a meaningful layer of vetting.
The DPAP, SpikeNM, and intracortical decoding studies, by contrast, are available only as arXiv preprints. Preprints are valuable for tracking cutting-edge ideas and examining methods in detail, but their experimental claims warrant more caution until they have been independently replicated or reviewed by outside experts. When reading any of these papers, it is worth checking whether the reported improvements are consistent across multiple datasets and whether the authors include ablation experiments that isolate which specific components of their method drive the gains.
A peer-reviewed study in Frontiers in Neuroscience helps put the algorithmic work in a hardware context. That paper explicitly links neuron pruning strategies with energy-efficient SNN processor design, framing pruning not just as a way to tidy up models after training but as a design principle that can be woven into chip architectures themselves.
Where biology meets silicon next
Taken together, the peer-reviewed evidence shows that brain-inspired developmental mechanisms and Hebbian learning rules can produce sparse networks that retain strong performance. Multiple preprint demonstrations reinforce the case that adaptive pruning can compress both spiking and conventional networks while maintaining or even improving accuracy. The direction of travel is clear: biology’s strategy of building generously and then editing ruthlessly has real engineering value.
What remains to be seen is whether these ideas can scale to the larger, messier tasks that dominate real-world AI deployment, how they interact with the neuromorphic hardware platforms designed to run them, and whether open-source implementations will enable the broad, independent replication that separates a promising concept from a reliable tool. For now, the infant brain’s ancient trick of pruning its way to efficiency is giving AI researchers a surprisingly productive blueprint.
This article was researched with the help of AI, with human editors creating the final content.