
A US robotics company has quietly assembled what amounts to a colossal digital forest, training an artificial intelligence system on 150 million plant images to recognize crops and weeds with uncanny precision. Instead of planting trees, this forest lives in silicon, powering laser-equipped machines that roam real fields and make split-second decisions about what should live or die. It is a glimpse of how agriculture is being rebuilt around data, optics and code, one pixel at a time.

At the center of this shift is Carbon Robotics, which has turned years of field work into a foundational model for plants that now guides its LaserWeeder machines across multiple continents. The company’s bet is simple but radical: if a robot can see every plant, it can manage fields plant by plant, cutting chemicals, labor and waste while boosting yields. The scale of that digital plant library, and the money and engineering behind it, suggest this is not a niche experiment but an early template for how AI will manage living systems.

The making of a 150-million-image plant brain

The core of Carbon Robotics’ approach is an AI model that can detect and identify individual plants in real time as a robot rolls through a field. The company built that capability into its LaserWeeder platform so farmers can specify which species to protect and which to eliminate, effectively turning agronomy rules into machine-readable instructions. According to detailed technical descriptions, Carbon Robotics built this model so its machines can recognize the plant in front of them, then act on that classification with a laser pulse instead of a spray nozzle.
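The logic of turning agronomy rules into machine-readable instructions can be sketched in a few lines. This is an illustrative toy, not Carbon Robotics' actual software: the species lists, function names and confidence threshold are all hypothetical.

```python
# Hypothetical sketch of per-plant decision rules, not Carbon Robotics' real API.
# Grower intent ("protect these crops, eliminate these weeds") becomes a
# lookup the machine can apply to each classified plant in real time.

PROTECT = {"carrot", "onion"}        # species the grower wants kept
ELIMINATE = {"pigweed", "purslane"}  # weed species targeted for lasing

def decide(species: str, confidence: float, threshold: float = 0.9) -> str:
    """Return an action for one detected plant."""
    if confidence < threshold:
        return "skip"        # uncertain detections are left alone
    if species in ELIMINATE:
        return "fire_laser"
    if species in PROTECT:
        return "protect"
    return "skip"            # unknown species default to no action

print(decide("pigweed", 0.97))  # fire_laser
print(decide("carrot", 0.99))   # protect
```

The point of the sketch is the substitution the article describes: the output of the classifier feeds a rule table, and the "actuator" is a laser pulse rather than a spray nozzle.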

Earlier this year the company formalized that capability into what it calls a Large Plant Model, or LPM, a foundational AI “brain” trained on 150 million plant images that now underpins every LaserWeeder unit. Corporate materials announcing the launch of the company’s first-ever Large Plant Model emphasize that the dataset is not just big but also diverse and fast growing. That scale matters because each additional plant image, captured under different lighting, soil and weather conditions, makes the model more robust when it encounters a messy real-world field instead of a controlled test plot.

From LaserWeeder hardware to adaptive Plant Profiles

What turns that digital forest into a practical farm tool is the LaserWeeder hardware, a tractor-scale robot that uses high-powered lasers to kill weeds at the meristem while leaving crops intact. As the global fleet of these machines operates in fields across multiple regions, it continuously feeds new images and edge cases back into the training pipeline, which is why Carbon Robotics describes its dataset as the largest, most diverse and fastest-growing agricultural image collection used for plant detection and identification. The robot is not just a tool in the field, it is also a sensor platform that keeps expanding the model’s understanding of how plants look at different growth stages and in different cropping systems.

To make that sophistication usable by working growers, the company has layered a feature called Plant Profiles on top of the LPM. Plant Profiles allow farmers to quickly tailor the foundational model to their own crops, varieties and local weed pressures, adjusting behavior in real time without retraining the core network. Company statements describe Plant Profiles as a way to “use plant profiles in the field to tailor model behavior in real-time,” which is crucial when a grower might be switching between, say, carrots and onions on adjacent beds.
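One way to picture a Plant Profile is as a small configuration object that is swapped at runtime while the underlying model stays fixed. The sketch below is purely illustrative, assuming a made-up format; nothing here reflects the company's actual Plant Profiles implementation.

```python
# Illustrative sketch of profile-driven behavior, assuming a hypothetical
# format. Swapping the profile changes which detections trigger the laser
# without retraining or touching model weights.
from dataclasses import dataclass, field

@dataclass
class PlantProfile:
    protect: set = field(default_factory=set)
    eliminate: set = field(default_factory=set)
    threshold: float = 0.9   # minimum classifier confidence to act

def action(profile: PlantProfile, species: str, conf: float) -> str:
    """Map one detection to an action under the active profile."""
    if conf < profile.threshold:
        return "skip"
    if species in profile.protect:
        return "protect"
    if species in profile.eliminate:
        return "fire_laser"
    return "skip"

# Adjacent beds, different crops: the grower swaps profiles, not models.
carrot_bed = PlantProfile(protect={"carrot"}, eliminate={"pigweed"}, threshold=0.95)
onion_bed = PlantProfile(protect={"onion"}, eliminate={"pigweed", "purslane"})

print(action(carrot_bed, "pigweed", 0.99))  # fire_laser
print(action(onion_bed, "onion", 0.95))     # protect
```

The design point is separation of concerns: the expensive, slowly updated foundational model does perception, while cheap, instantly editable profiles encode each grower's local rules.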

Global reach, funding firepower and the February inflection point

Carbon Robotics has not built this system in isolation from the broader farm economy. Its LaserWeeder robots are already running in 14 countries, handling a variety of crops and weather conditions that stress-test the model and expose it to new plant morphologies. Company leaders have described how Carbon Robotics used that global deployment to gather the data needed to build the LPM, with each new geography adding weeds and crops that might never appear in a single region. That worldwide footprint also signals that this is not a pilot project but a commercial system already embedded in mainstream vegetable and specialty crop production.

Behind the technology sits a significant pool of capital. A high-profile Series D round brought in $70 million, giving the firm room to scale manufacturing, expand its data collection and refine the LPM without relying solely on near-term cash flow from machine sales. The timing is notable, coming as the company publicly framed its Large Plant Model launch in February, signaling a strategic inflection point where the AI layer becomes as central to its identity as the lasers and steel.

Inside the neural net, from Mikesell to field decisions

While the company’s marketing leans on big numbers, the underlying engineering choices are what determine whether the LPM can handle the chaos of real farms. Chief executive Paul Mikesell has explained that the firm, founded in 2018, began developing the model shortly after it started shipping its first LaserWeeders, using every pass through a field as a chance to collect more labeled data. In his description, Mikesell emphasized the importance of controlling the full pipeline of images going into the neural net, from camera hardware to labeling workflows, so the model learns from consistent, high quality examples instead of noisy, mismatched datasets.

That control lets the company tune the model for the split-second decisions a LaserWeeder must make as it rolls at field speeds, deciding whether each plant is a crop or a weed and firing a laser accordingly. The LPM is not a static artifact but a living system that is updated as new data flows in from the global fleet, with each software release effectively upgrading the “eyes” and “brain” of every deployed robot. Internal communications framed this evolution as the launch of the company’s first-ever Large Plant Model, underscoring that the February milestone was less a single product drop and more the formalization of a continuous learning loop between field and cloud.

Digital forests, real fields and the wider automation race

Carbon Robotics is not the only group trying to turn ecological complexity into machine-readable data, but its focus on weeds and crops contrasts with other efforts that target reforestation. In Canada, for example, a startup called Flash Forest has used drones and aerial mapping to plant trees at scale, with public goals of putting 1 billion trees in the ground by 2028. Where Flash Forest uses automation to rebuild physical forests, Carbon Robotics has effectively built a virtual forest of plant images that lets its machines manage existing farmland more precisely, reducing the need for herbicides and manual hoeing.

Both approaches reflect a broader shift in how land is managed, with AI models and robotics turning what used to be coarse, field-level decisions into fine-grained, plant-level actions. In Carbon’s case, the combination of the LPM, Plant Profiles and a global LaserWeeder fleet suggests that the company sees itself less as a machine builder and more as a platform for plant-level intelligence. Whether that digital forest ultimately helps farmers cut costs, reduce chemical use and adapt to climate volatility will depend on how quickly the model can keep learning from the messy, unpredictable reality of fields, and how accessible the technology becomes beyond early adopters with the capital to buy in.
