Tesla’s push to launch an unsupervised robotaxi service faces a fundamental credibility problem: there is no publicly available, miles-normalized crash data to prove its vehicles are safer than human drivers. While a rival company has published a detailed benchmarking study covering tens of millions of autonomous miles, Tesla has offered no equivalent analysis, even as federal regulators and state authorities flag serious gaps in how the company represents its technology. The absence of that data matters now more than ever, because Tesla is actively seeking approval to put driverless vehicles on public roads.
The Benchmarking Gap Tesla Has Not Closed
Comparing robotaxi crash rates to human driving rates sounds straightforward, but the methodology is anything but simple. A recent arXiv preprint authored by researchers at Waymo lays out what a rigorous comparison actually requires: matching crashes by type, aligning data by road conditions, location, and vehicle class, and drawing human baselines from federal datasets. That Waymo benchmarking study covers 56.7 million rider-only miles and draws its crash records from the National Highway Traffic Safety Administration’s Standing General Order reporting system. The result is a transparent, reproducible framework that lets outside analysts check the math. Tesla has published nothing comparable for its Full Self-Driving system or its planned robotaxi fleet.
This is not a minor omission. Without a structured benchmarking methodology, any claim that Tesla’s autonomous technology matches or beats human safety performance is unverifiable. The Waymo paper shows that even a company with tens of millions of autonomous miles can find its crash rates exceeding human averages in certain urban crash categories when the data is properly segmented. If a company that has done the work still shows mixed results, the silence from a company that has not done the work should concern regulators and the public alike. It also means that local transportation agencies, insurers, and safety researchers lack a common evidentiary foundation for evaluating whether Tesla’s robotaxi proposal meets the “as safe as a human” threshold that the company frequently invokes in public statements.
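To make the methodology concrete: the core arithmetic behind a category-matched, miles-normalized comparison is simple once the data exists. Count crashes of each type, divide by the miles driven in matched conditions, and set the result against a human baseline built the same way. The sketch below illustrates that calculation; the crash categories, counts, and baseline rates are invented placeholders, and only the 56.7 million-mile exposure figure echoes the Waymo study.

```python
# Illustrative sketch of a category-matched, miles-normalized comparison.
# All category names, crash counts, and human baseline rates are hypothetical;
# only the 56.7 million-mile exposure figure echoes the Waymo study.

PER_MILLION = 1_000_000

av_miles = 56_700_000  # rider-only exposure, as reported in the Waymo preprint
av_crashes = {"rear_end": 12, "intersection": 7, "pedestrian": 1}  # invented counts

# Hypothetical human baseline, per million miles, matched to the same
# crash categories, road types, and locations (e.g. built from CRSS data).
human_rate_per_mm = {"rear_end": 0.60, "intersection": 0.40, "pedestrian": 0.05}

for category, crashes in av_crashes.items():
    av_rate = crashes / av_miles * PER_MILLION
    ratio = av_rate / human_rate_per_mm[category]
    print(f"{category}: AV {av_rate:.3f} vs human {human_rate_per_mm[category]:.3f} "
          f"per million miles (ratio {ratio:.2f})")
```

The hard part, as the preprint makes clear, is not this division but assembling trustworthy inputs for it: exposure miles and human baselines that are genuinely matched.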
Federal Data That Cannot Answer the Safety Question
The federal government does collect crash data on automated driving systems, but the available numbers do not support the kind of apples-to-apples safety comparison that Tesla’s marketing implies. NHTSA’s Standing General Order on crash reporting requires companies operating advanced driver-assistance systems and automated driving systems to file incident reports. Those reports are available as public CSV downloads, but NHTSA itself warns that the raw incident counts are not normalized by miles driven or by the number of vehicles on the road. That caveat makes it impossible to use the public data alone to calculate a per-mile crash rate for any company, Tesla included, because no exposure data (how much and where each system is actually driven) is provided alongside the crash records.
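A short sketch makes that limitation tangible: from the public Standing General Order file alone, an analyst can count incidents per company, but a per-mile rate cannot be computed because the divisor is missing. The filename and the column name below are assumptions about the download’s layout, not a documented schema.

```python
# Sketch: counting incidents per company from the public SGO file, and why a
# per-mile rate cannot be derived from it alone. The filename and the
# "Reporting Entity" column name are assumptions about the CSV layout; check
# the header of the actual NHTSA download before relying on them.
import csv
from collections import Counter

incident_counts = Counter()
with open("sgo_incident_reports.csv", newline="") as f:  # hypothetical filename
    for row in csv.DictReader(f):
        incident_counts[row["Reporting Entity"]] += 1

for company, crashes in incident_counts.most_common():
    exposure_miles = None  # NHTSA publishes no mileage alongside the crash records
    if exposure_miles is None:
        print(f"{company}: {crashes} reported incidents; per-mile rate undefined")
    else:
        print(f"{company}: {crashes / exposure_miles * 1e6:.2f} crashes per million miles")
```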
On the human side of the comparison, NHTSA maintains the Crash Report Sampling System, a nationally representative sample of police-reported crashes. According to NHTSA’s CRSS materials, that system reflects roughly 6 to 7 million police-reported crashes annually across the United States. CRSS provides the baseline that researchers use when they talk about “average human crash rates,” but translating that baseline into a fair comparison with robotaxis requires careful alignment by crash type, geography, and driving conditions. Simply dividing total crashes by total miles driven produces a misleading number, because human crashes cluster in specific scenarios that may or may not overlap with where robotaxis operate. The Waymo preprint addresses this by building crash typologies and matching on road type and location. Tesla has not disclosed whether it performs any similar internal analysis, leaving a gap between its safety assurances and the kind of peer-reviewable evidence that regulators increasingly expect.
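A deliberately contrived example shows why that naive division misleads. In the invented numbers below, the autonomous fleet posts a worse crash rate than the human baseline in every matched segment, yet looks safer in aggregate because its miles are concentrated in easier conditions.

```python
# Invented numbers chosen to show how aggregate per-mile rates can mislead:
# the autonomous fleet is worse in every matched segment, yet looks safer
# overall because its mileage is skewed toward easier driving conditions.

segments = {
    # segment: (human_crashes, human_miles, av_crashes, av_miles)
    "highway_daytime": (200, 1_000_000_000, 20, 80_000_000),
    "urban_nighttime": (2_000, 1_000_000_000, 50, 20_000_000),
}

def per_million(crashes, miles):
    return crashes / miles * 1_000_000

totals = [0, 0, 0, 0]
for name, counts in segments.items():
    hc, hm, ac, am = counts
    print(f"{name}: human {per_million(hc, hm):.2f}, AV {per_million(ac, am):.2f} per million miles")
    totals = [t + v for t, v in zip(totals, counts)]

hc, hm, ac, am = totals
print(f"aggregate: human {per_million(hc, hm):.2f}, AV {per_million(ac, am):.2f} per million miles")
```

This is exactly the distortion that crash typologies and road-type matching are designed to prevent.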
California’s Regulatory Warning on Tesla’s Claims
Tesla’s safety messaging has already drawn formal regulatory action. The California Department of Motor Vehicles found that Tesla violated state law by marketing its driver-assistance features under the names “Autopilot” and “Full Self-Driving Capability” in ways that misled consumers about what the technology could actually do. That California DMV action adopted an administrative law judge’s proposed ruling and is tied to specific enforcement case numbers. The finding did not address crash rates directly, but it established an official state-level determination that Tesla’s language overstated the capabilities of systems that still require a human driver to remain attentive and in control. In effect, California concluded that branding and marketing had run ahead of the underlying technical performance.
That regulatory context is directly relevant to the robotaxi question. If Tesla’s marketing of a supervised driver-assistance system was found misleading, the bar for proving an unsupervised robotaxi is safe enough for public roads should be even higher. A company moving from Level 2 driver assistance, where the human is still responsible, to a fully driverless service needs to show a step change in safety evidence, not just a rebranding of the same technology under a new product name. The California finding suggests regulators are paying attention to that distinction. It also signals that agencies are willing to scrutinize not just how often crashes occur, but how companies communicate risk and limitations to the public, a factor that could influence how any future Tesla robotaxi permits are conditioned or constrained.
Why Camera-Only Systems Face Extra Scrutiny
Tesla’s approach to autonomous driving relies exclusively on cameras and neural networks, having removed radar and ultrasonic sensors from its vehicles in recent years. Most other companies pursuing robotaxi operations use a combination of cameras, radar, and lidar to create redundant layers of perception. The camera-only approach is cheaper to manufacture and scale, but it raises specific questions about performance in conditions where visual data degrades: heavy rain, direct sun glare, fog, and poorly lit streets at night. These are exactly the scenarios where human crash rates tend to spike, and where a fair benchmarking study would need to show that the autonomous system performs at least as well. Without redundancy, any systematic weakness in vision-based perception could translate directly into elevated crash risk in those edge-case environments.
No published research from Tesla or independent researchers has demonstrated that its camera-based system matches or beats human performance in those degraded-visibility conditions using the kind of controlled, miles-normalized methodology that the Waymo preprint describes. That gap is not just academic. Cities considering whether to permit Tesla robotaxis will need to evaluate whether the vehicles can handle the full range of driving conditions their residents face, not just the sunny, well-marked highway miles where camera systems perform best. Insurance underwriters, too, will look for condition-specific crash data before pricing coverage for fleets that might operate around the clock. Until Tesla produces or allows independent verification of crash-rate data segmented by condition type, time of day, and road context, the safety case for its robotaxi remains incomplete and largely a matter of corporate assertion rather than demonstrable fact.
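What that independent verification could look like, in statistical terms, is a per-segment crash-rate ratio against a matched human baseline, published with its uncertainty. The sketch below is a minimal illustration under simplifying assumptions: the counts, mileages, and baseline rates are invented, and the human baseline is treated as fixed rather than estimated.

```python
# Sketch of a condition-segmented crash-rate ratio with an approximate
# confidence interval. Counts, mileages, and baseline rates are invented, and
# the human baseline is treated as a fixed number rather than an estimate.
import math

def rate_ratio_ci(crashes, miles, human_rate_per_mm, z=1.96):
    """Crash-rate ratio vs. a fixed human baseline (per million miles),
    with an approximate Poisson confidence interval on the log scale."""
    av_rate = crashes / miles * 1_000_000
    ratio = av_rate / human_rate_per_mm
    if crashes == 0:
        return ratio, (0.0, float("inf"))
    se_log = 1.0 / math.sqrt(crashes)  # Poisson approximation for the AV count
    return ratio, (ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log))

segments = {
    # segment: (AV crashes, AV miles, human baseline per million miles)
    "clear_daytime": (30, 40_000_000, 1.0),
    "rain_at_night": (12, 4_000_000, 2.5),
}
for name, (crashes, miles, baseline) in segments.items():
    ratio, (lo, hi) = rate_ratio_ci(crashes, miles, baseline)
    print(f"{name}: ratio {ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A ratio below 1.0 with an interval that excludes 1.0 would be evidence of better-than-human performance in that segment; a wide interval straddling 1.0 would simply mean the question remains open.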
What Riders and Regulators Still Need
The core tension is simple: Tesla is asking the public and regulators to trust that its robotaxi will be safe, but it has not provided the data that would let anyone verify that claim. The tools for building that evidence already exist in the form of the federal crash datasets that underpin human baselines and the structured benchmarking framework demonstrated by Waymo’s rider-only analysis. What is missing is Tesla’s willingness to subject its own system to the same level of scrutiny, publish normalized crash rates by category, and explain where its technology still falls short of human drivers. Without that, regulators are effectively being asked to approve driverless operations on the strength of opaque internal metrics and marketing narratives.
For riders, the stakes are personal rather than abstract. A robotaxi that is marginally safer on average but substantially worse in certain scenarios (nighttime left turns across traffic, for example) may not meet the safety expectations of people who have no control over how the system is designed or updated. For regulators, the stakes are institutional. Granting permission too early could undermine public trust in automated vehicles as a whole if high-profile failures occur, while waiting for robust, miles-normalized evidence could slow the rollout of a technology that may eventually reduce road deaths. Bridging that gap will require Tesla to do what it has so far declined to do: open its safety record to independent, methodologically rigorous evaluation, and accept that credible robotaxi deployment is as much a data transparency challenge as it is a software engineering feat.
*This article was researched with the help of AI, with human editors creating the final content.