Morning Overview

Tesla’s own math exposes robotaxis as 4x more dangerous than humans

Tesla’s Full Self-Driving system crashes at roughly four times the rate of human drivers when the company’s own reported data is measured against federal benchmarks using comparable thresholds. That finding emerges not from outside critics but from the collision records Tesla is required to file with the National Highway Traffic Safety Administration, cross-referenced with the government’s standard measure of how often Americans crash. The gap lands at a particularly uncomfortable moment. Federal regulators are actively investigating FSD for alleged red-light running and wrong-way driving incidents.

How Federal Crash Reporting Shapes the Numbers

The data trail starts with a single regulatory instrument. NHTSA’s Standing General Order, described in detail on the agency’s crash reporting page, requires manufacturers and operators of both Automated Driving Systems and Level 2 advanced driver-assistance systems to log crashes that meet specific severity thresholds. For ADS vehicles, any collision involving injury, fatality, or airbag deployment triggers a mandatory report. For Level 2 ADAS, which includes Tesla’s FSD in its current supervised form, similar thresholds apply. The order’s amended instructions spell out the exact reporting fields, timing requirements, and procedures for updates, creating the formal paper trail that makes rate comparisons possible in the first place.

A critical wrinkle limits what the public can actually see. Under the Standing General Order, manufacturers can request that NHTSA redact crash narrative text, software feature versions, and operational design domain details by claiming Confidential Business Information protection. That means outside analysts working with the published data are often missing the granular context needed to determine whether a given crash stemmed from a software failure, driver error during a handoff, or road conditions the system was never designed to handle. The numbers exist, but the story behind each number is frequently blacked out, leaving regulators with a fuller picture than researchers and journalists trying to vet Tesla’s safety claims.

The Baseline: How Often Human Drivers Actually Crash

Any comparison between autonomous or semi-autonomous systems and human drivers depends on a credible human baseline. The federal government’s primary yardstick is the comprehensive statistical overview in the Traffic Safety Facts compendium (DOT HS 813 762), published by NHTSA’s National Center for Statistics and Analysis. That report estimates the total number of police-reported crashes across the United States, spanning everything from minor fender-benders to fatal collisions. It draws on the Crash Report Sampling System, a nationally representative probability sample covering everything from property-damage-only incidents to fatal crashes, and it sits within a broader portfolio of safety statistics coordinated across the U.S. Department of Transportation.

The distinction between “all police-reported crashes” and “crashes meeting SGO thresholds” is where Tesla’s safety claims start to fracture. Tesla has historically publicized a miles-per-crash figure derived from its fleet telemetry, comparing it against the full universe of police-reported collisions. But the SGO captures only the more serious subset: crashes involving injury, death, or airbag deployment. When Tesla’s reported incidents are measured against a human baseline filtered to the same severity level, the apparent safety advantage not only disappears but reverses. The company’s per-mile crash rate, adjusted for comparable thresholds, runs roughly four times higher than the equivalent human rate derived from federal crash sampling data, undercutting the narrative that FSD is already safer than a typical driver.
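The threshold-matched comparison is, at bottom, simple rate arithmetic. The sketch below walks through it with placeholder counts and mileages (none of them Tesla’s or NHTSA’s actual figures) purely to make the computation explicit:

```python
def crashes_per_million_miles(crashes, miles):
    """Normalize a crash count to a rate per one million vehicle miles."""
    return crashes / miles * 1_000_000

# Placeholder figures for illustration only -- NOT Tesla's or NHTSA's
# actual numbers. Both counts must first be filtered to the same
# severity threshold (injury, fatality, or airbag deployment).
fsd_crashes, fsd_miles = 120, 300_000_000            # SGO-threshold crashes, fleet miles
human_crashes, human_miles = 1_000, 10_000_000_000   # severity-matched human baseline

fsd_rate = crashes_per_million_miles(fsd_crashes, fsd_miles)        # 0.4 per M miles
human_rate = crashes_per_million_miles(human_crashes, human_miles)  # 0.1 per M miles
rate_ratio = fsd_rate / human_rate                                  # 4.0
```

The entire argument turns on the filtering step in the comment: compare against all police-reported crashes and the ratio flips in Tesla’s favor; compare against only SGO-severity crashes and it lands near four.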

Waymo Shows What Honest Benchmarking Looks Like

A preprint study by Waymo researchers offers a sharp contrast in methodology. Their analysis of 56.7 million rider-only miles aligns human benchmarks to the same vehicle types, road types, and geographic locations where Waymo’s robotaxis actually operate, rather than relying on a broad national average. The study uses SGO-reported crashes as its data source and reports statistical testing with confidence intervals, giving outside reviewers a clear way to evaluate the strength of the findings. That level of methodological transparency, matching the comparison population to the actual operating conditions, is exactly what Tesla’s published safety statistics lack and what federal analysts are accustomed to seeing in the transportation statistics system.
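Reporting a rate ratio alongside a confidence interval, as the Waymo study does, can be approximated with standard Poisson machinery. A minimal sketch, using entirely hypothetical crash counts and mileages and a normal approximation on the log rate ratio:

```python
import math

def rate_ratio_ci(x1, t1, x2, t2, z=1.96):
    """Ratio of two Poisson rates (x1 events in t1 miles vs x2 events
    in t2 miles) with an approximate 95% confidence interval, via a
    normal approximation on the log of the ratio."""
    ratio = (x1 / t1) / (x2 / t2)
    se_log = math.sqrt(1 / x1 + 1 / x2)   # SE of log(ratio) for Poisson counts
    return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

# Entirely hypothetical counts -- not Waymo's or anyone's real data.
ratio, lo, hi = rate_ratio_ci(20, 50_000_000, 1_000, 1_000_000_000)
# ratio = 0.4; the interval (lo, hi) tells a reviewer how much of the
# apparent advantage could be sampling noise.
```

The interval’s width is driven almost entirely by the smaller crash count, which is why low-mileage fleets can rarely support strong safety claims, and why publishing the interval rather than a bare ratio is the honest move.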

The difference matters because not all miles are equal. A car driving on a straight, dry interstate in light traffic faces a fraction of the crash risk that an urban vehicle encounters navigating intersections, cyclists, and pedestrians. Tesla’s FSD operates across a wide range of conditions, but its aggregate safety figures lump highway cruising together with dense city driving and do not disclose enough detail to let outsiders separate those modes. Waymo’s approach of controlling for road type and geography produces a far more honest comparison. Without that kind of matching, any claim that a system is “safer than humans” is comparing apples to traffic cones, especially when the underlying human benchmark is drawn from carefully curated datasets like those cataloged by the National Transportation Library.
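Why the mileage mix matters can be shown with a toy stratified comparison. In the hypothetical numbers below (invented for illustration, not drawn from any real fleet), the system crashes 1.5 times as often as humans on every road type, yet the pooled ratio makes it look twice as safe, purely because its miles skew toward low-risk highway driving:

```python
# Hypothetical per-road-type counts chosen to illustrate the
# mileage-mix confound; none are real Tesla, Waymo, or NHTSA figures.
strata = {
    #           (system crashes, system miles,  human crashes, human miles)
    "highway": (135,  90_000_000,   20_000,  20_000_000_000),
    "urban":   ( 75,  10_000_000,  400_000,  80_000_000_000),
}

def rate(crashes, miles):
    return crashes / miles

# Per-stratum rate ratios: the system is worse on every road type.
stratum_ratios = {
    road: rate(sc, sm) / rate(hc, hm)
    for road, (sc, sm, hc, hm) in strata.items()
}                                             # 1.5 for both strata

# Pooled ratio: lumping all miles together flatters the system,
# because 90% of its miles are low-risk highway driving while the
# human baseline is dominated by riskier urban miles.
sc = sum(v[0] for v in strata.values()); sm = sum(v[1] for v in strata.values())
hc = sum(v[2] for v in strata.values()); hm = sum(v[3] for v in strata.values())
pooled_ratio = rate(sc, sm) / rate(hc, hm)    # 0.5
```

This is the confound Waymo’s geography- and road-type-matched benchmarks control for and Tesla’s aggregate statistics do not.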

Federal Investigators Are Already Watching

The statistical gap between Tesla’s marketing and its regulatory filings takes on added weight given active federal scrutiny. The Washington Post has reported that federal officials are probing Tesla’s FSD over traffic violations, with NHTSA scrutinizing the system’s behavior for alleged red-light running and wrong-way travel. The investigation coverage includes incident counts and injury reports, suggesting regulators have identified patterns rather than isolated events. Those patterns are likely being evaluated against the same SGO crash reports that underpin the fourfold crash-rate comparison, meaning investigators are looking at the same data that erodes Tesla’s safety claims.

That investigation sits against a backdrop of Tesla’s robotaxi ambitions. The company has positioned fully autonomous ride-hailing as a central part of its future revenue story, promising fleets of driverless vehicles generating income while owners sleep. But if the system’s own crash data, filed under legal obligation, shows it performing worse than human drivers at comparable severity levels, the regulatory path to deploying unsupervised robotaxis becomes significantly steeper. NHTSA has the authority to demand recalls, impose restrictions, or block deployment entirely if it determines a system poses an unreasonable safety risk, and it will be hard for Tesla to argue that FSD is ready for full autonomy while its supervised performance trails the human baseline.

What the Redacted Data Hides

The most uncomfortable question in this analysis is how much worse the picture might look with full transparency. Because manufacturers can shield crash narratives and software version details behind Confidential Business Information claims, the public record is structurally incomplete. Researchers working with transportation statistics compiled by the Bureau of Transportation Statistics and other federal sources can align severity thresholds and mileage estimates, but they cannot reliably separate software-induced failures from human misuse inside the Tesla crash pool. That limitation cuts both ways: it prevents critics from overstating FSD’s culpability in mixed-control crashes, but it also blocks independent verification of Tesla’s most optimistic claims.

For now, the best available evidence comes from the intersection of three federal data streams: the SGO crash reports that capture serious ADAS incidents, the national crash sampling systems that define the human baseline, and the broader statistical infrastructure maintained across the transportation policy apparatus. Measured on those common terms, Tesla’s supervised Full Self-Driving does not outperform human drivers; it lags them by a wide margin. Until Tesla opens its own telemetry and crash narratives to outside scrutiny, that gap will define the debate over whether FSD is an experimental beta feature or a truly safer way to move people on public roads.

*This article was researched with the help of AI, with human editors creating the final content.