Morning Overview

AI evolved robot designs in simulation, then scientists built them outdoors

A team at Northwestern University used artificial intelligence to evolve robot body plans inside a physics simulation, then physically assembled the top-performing designs and turned them loose on gravel, grass, and tree roots. The work, published in the Proceedings of the National Academy of Sciences in March 2026, represents a significant step beyond earlier lab-bench demonstrations, where AI-designed robots operated only on flat tabletops. By forcing evolved machines to cope with unpredictable outdoor terrain, the researchers are testing whether digital evolution can produce hardware tough enough for the real world.

Modular Machines Built Like Living Legos

The robots in this study are not single-purpose devices designed by human engineers. They are assembled from autonomous modules, each containing its own motor, battery, and onboard computer. Think of them as motorized building blocks that snap together. No individual block can walk on its own, but when several are combined in the right arrangement, coordinated locomotion emerges from their interactions. The research team calls these creations “legged metamachines,” a label that captures the core idea: the machine’s abilities reside not in any single module but in how the modules coordinate once combined.

This modular approach carries a practical advantage that most conventional robots lack. Because each unit is self-contained, a damaged leg can be swapped out or the entire body plan can be reconfigured without redesigning the electronics or control software from scratch. That flexibility is what makes evolutionary search in simulation so appealing: the algorithm can explore thousands of module arrangements quickly, discarding failures and promoting designs that move well. In principle, the same inventory of parts could be rearranged into walkers, crawlers, or even climbing machines, depending on what the algorithm discovers.

How Simulation Drove the Design Process

Rather than hand-tuning leg counts and joint angles, the researchers let an optimization algorithm search a compressed design space of possible modular configurations. The process, described in the peer-reviewed study, works by encoding body plans in a compact mathematical representation and then scoring each candidate on simulated locomotion performance. Over many generations of selection, the algorithm converges on morphologies that balance speed, stability, and energy use in virtual terrain, while also respecting constraints imposed by the physical modules.

The winning designs were not what a human engineer would sketch. The team selected the best three-legged, four-legged, and five-legged configurations for physical assembly. Odd leg counts and asymmetric layouts appeared because the algorithm optimized purely for function, unconstrained by aesthetic preferences or textbook conventions. That willingness to explore strange body plans is precisely what sets evolutionary design apart from traditional engineering, where designers tend to default to bilateral symmetry and familiar templates such as quadrupeds or hexapods.

Behind the scenes, the system relies on a physics simulator to approximate how different arrangements of modules will move. Each candidate robot is dropped into a virtual environment and commanded to walk forward; those that stumble or waste energy are discarded, while those that travel farther or more efficiently are selected as “parents” for the next generation. According to the authors, this pipeline can evaluate and refine thousands of designs far faster than any hardware-based trial-and-error process could manage.
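The select-mutate-evaluate loop described above can be sketched in a few lines. This is a minimal illustration of the general evolutionary-search pattern, not the authors' implementation: the genome encoding, the fitness function, and all the constants here are invented stand-ins, and the real system scores candidates with a full physics simulator rather than a toy formula.

```python
import random

random.seed(0)

# Hypothetical genome: one bit per candidate attachment point on the body
# (0 = empty slot, 1 = leg module attached). Purely illustrative.
NUM_SLOTS = 6
POP_SIZE = 20
GENERATIONS = 30

def random_genome():
    return [random.randint(0, 1) for _ in range(NUM_SLOTS)]

def simulate_distance(genome):
    # Stand-in for the physics simulator: reward a moderate leg count,
    # penalize lopsided layouts, and add a little evaluation noise.
    legs = sum(genome)
    balance = abs(sum(genome[:NUM_SLOTS // 2]) - sum(genome[NUM_SLOTS // 2:]))
    return legs * (NUM_SLOTS - legs) - balance + random.gauss(0, 0.1)

def mutate(genome):
    child = genome[:]
    i = random.randrange(NUM_SLOTS)
    child[i] = 1 - child[i]   # flip one attachment decision
    return child

# Evolutionary loop: score every candidate, keep the top half as
# "parents," and refill the population with their mutated offspring.
population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=simulate_distance, reverse=True)
    parents = scored[:POP_SIZE // 2]
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

best = max(population, key=simulate_distance)
print("best body plan:", best)
```

Even in this toy version, the search discards stumbling designs and converges on body plans the fitness function favors, without anyone specifying the layout in advance.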

From Flat Tables to Rough Ground

Previous work from the same lab had already shown that AI could design working robots in seconds on a consumer-grade GPU. A 2023 study, also published in PNAS, demonstrated rapid structural optimization that produced functional walking machines almost instantly. One of those earlier designs had three legs and rear fins, a shape no engineer would have proposed. At the time, Sam Kriegman described the result bluntly: “When people look at this robot, they might see a useless gadget. I see the birth of a brand-new organism.”

But those earlier machines walked on smooth tabletops. The gap between a controlled lab surface and an actual field of gravel or a patch of tree roots is enormous. Uneven ground introduces forces that simulations can only approximate, and small modeling errors compound with every step. The 2026 study directly addresses that gap. Kriegman and his team tested the assembled robots outdoors across gravel, grass, and terrain cluttered with tree roots, environments where footing is never guaranteed and contact forces vary unpredictably from step to step.

In videos released with the work, the robots move with an awkward but effective gait, clambering over small obstacles and recovering from slips that would topple a more rigidly programmed machine. The modular legs flex and reposition as the body pitches, suggesting that the evolved designs have at least some built-in tolerance for perturbations. That behavior is not the result of a human programmer anticipating every possible bump; it emerges from the evolutionary process that rewarded designs able to maintain forward progress despite noisy dynamics.

Surviving Damage Without a Reboot

One of the more striking results is that these robots survive severe damage and keep moving. Because each module operates semi-independently, losing a leg does not crash the entire control system. The remaining modules redistribute their effort and continue locomotion, albeit with altered gait patterns. This stands in sharp contrast to most commercial robots, where a single broken actuator can render the whole platform useless until a technician intervenes to repair or reset the device.
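The principle behind that resilience can be illustrated with a small sketch of decentralized control: if each module runs its own local oscillator, removing one module simply drops its contribution rather than crashing a central program. The class names and oscillator scheme here are invented for illustration and are not the authors' controller.

```python
import math

class LegModule:
    """One self-contained module with its own local gait oscillator."""
    def __init__(self, module_id, phase_offset):
        self.module_id = module_id
        self.phase_offset = phase_offset
        self.alive = True

    def command(self, t):
        # Each module computes its own joint target from local state;
        # no central controller needs to know about the others.
        return math.sin(2 * math.pi * t + self.phase_offset)

class Metamachine:
    def __init__(self, n_legs):
        # Spread phase offsets evenly so the legs step out of sync.
        self.legs = [LegModule(i, 2 * math.pi * i / n_legs)
                     for i in range(n_legs)]

    def step(self, t):
        # Collect commands only from surviving modules; a lost leg
        # drops out of the dictionary instead of raising an error.
        return {leg.module_id: leg.command(t)
                for leg in self.legs if leg.alive}

robot = Metamachine(n_legs=4)
print(len(robot.step(0.0)))   # 4 active legs

robot.legs[2].alive = False   # simulate losing a leg
print(len(robot.step(0.1)))   # the remaining 3 legs keep stepping
```

Because failure handling falls out of the architecture rather than being programmed case by case, the same gait logic keeps running whether the body has five legs or, after damage, four.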

A related preprint by Chen Yu and Sam Kriegman pushes this idea further. That work explores controllers learned in simulation that adapt when the robot’s own morphology changes through what the authors call “kinematic self-destruction.” In other words, the robot can deliberately shed a damaged limb and its control policy adjusts on the fly. The preprint suggests a future where robots do not just tolerate damage but actively reconfigure around it, treating structural failure as another variable to adapt to rather than a terminal event.

This line of research fits into a broader institutional push at Northwestern to explore resilient, adaptive technologies. Communications from university media teams emphasize applications such as search-and-rescue operations, planetary exploration, and hazardous industrial inspection, settings where human access is difficult and unpredictable terrain is the norm. In those scenarios, a robot that can limp home on three legs after losing a fourth is far more valuable than one that collapses at the first sign of trouble.

What the Sim-to-Real Gap Still Hides

Coverage of this work has largely echoed the institutional framing, presenting the outdoor tests as proof that evolved robots are ready for deployment. That framing deserves some skepticism. The published results demonstrate locomotion across a few types of natural terrain, but the sources do not include specific performance metrics such as sustained speed, energy consumption per meter, or payload capacity. Without those numbers, it is difficult to compare these evolved metamachines to conventional legged robots or to assess whether they are truly practical for field use rather than proof-of-concept demonstrations.

Another open question is how robust the designs are to environmental variation beyond the test sites. Gravel and grass represent a meaningful step up from tabletops, but they are still relatively benign compared with deep mud, loose sand, or steep rocky slopes. The physics simulator can be extended to approximate those conditions, yet each new domain introduces uncertainties that may widen the sim-to-real gap. If evolved robots must be re-optimized for every new terrain type, the promise of general-purpose adaptability becomes harder to realize.

There are also engineering trade-offs lurking beneath the modular architecture. Embedding a motor, battery, and processor in every block simplifies reconfiguration and damage tolerance, but it adds weight and complexity compared with centralized designs. The current reports do not detail how long the robots can operate on a charge, how easily modules can be mass-manufactured, or how the system scales as the number of blocks grows. In large swarms or very big bodies, coordination overhead and communication latency could become significant bottlenecks.

Still, the broader arc is clear. By evolving body plans in simulation and validating them in the field, the Northwestern team is chipping away at one of robotics’ central challenges: designing machines that can handle the messiness of the real world without exhaustive hand-engineering. Even if today’s legged metamachines are more experimental than deployable, they hint at a future in which robot morphology is not fixed on the drafting table but discovered through iterative search, much like biological evolution discovered legs, fins, and wings.

Whether that future arrives will depend on how quickly researchers can close the remaining performance and reliability gaps. Better simulators, richer fitness functions, and more sophisticated controllers will all play a role. So will careful, quantitative field trials that move beyond evocative videos toward hard numbers. For now, though, the sight of AI-evolved robots scrambling over tree roots (awkward, resilient, and undeniably alive in their own mechanical way) marks a notable moment in the ongoing effort to make machines that can survive, and even thrive, off the lab bench.


*This article was researched with the help of AI, with human editors creating the final content.*