Image Credit: No Swan So Fine – CC BY-SA 4.0/Wiki Commons

Tesla’s latest crash statistics paint a counterintuitive picture: even as its Autopilot system faces high-profile failures and federal scrutiny, the raw numbers still suggest that human drivers are far more likely to crash than cars running advanced driver assistance. The gap is large enough that, on Tesla’s own figures, human error still overwhelms Autopilot faults by a shocking margin, even after you factor in some worrying trends in the data.

That tension, between impressive aggregate safety numbers and disturbing individual tragedies, now defines the debate over how quickly to trust automation on public roads. I see it as a story about two kinds of risk: the everyday, invisible danger of human fallibility and the rare but dramatic failures of software that is marketed as almost self-driving.

What Tesla’s own numbers say about Autopilot vs humans

On paper, Tesla’s case for Autopilot looks formidable. In its Vehicle Safety Report for Q3 2025, the company says its driver assistance logged one crash roughly every 6.36 million miles, a rate it frames as about nine times safer than human drivers covering the same distance. The report leans heavily on this comparison to argue that Autopilot is not just competitive with human performance but dramatically better. If you take those figures at face value, the implication is that for every crash that happens with Autopilot engaged, several more would have occurred if the same miles had been driven by humans alone.

Supporters of automation point to this ratio as evidence that the technology is already saving lives, even if it is imperfect. A separate analysis of Tesla’s disclosures notes that Q3 2024 showed Autopilot achieving one crash per more than 7 million miles, while Q3 2025 shows 6.36 million miles between crashes, still framed as nine times safer than human drivers despite the slight deterioration. That comparison, highlighted in a breakdown of Autopilot performance, underlines the core point: even as the system’s crash rate edges up, the baseline risk from human error remains much higher.
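
To make the ratio concrete, the short sketch below converts Tesla’s headline figure into crashes per million miles and backs out the human baseline that a “nine times safer” claim would require. The 6.36 million mile figure and the multiple of nine come from the report described above; the implied human number is an inference from that framing, not an independently measured statistic.

```python
# Back-of-the-envelope check of the "nine times safer" framing, using only
# the figures cited above. The human baseline is *implied* by Tesla's own
# ratio rather than independently measured, so treat it as illustrative.

AUTOPILOT_MILES_PER_CRASH_Q3_2025 = 6.36e6  # Tesla's reported Q3 2025 figure
CLAIMED_SAFETY_MULTIPLE = 9                 # "about nine times safer"

# Miles a typical human driver would cover between crashes, if Tesla's
# ratio is taken at face value.
implied_human_miles_per_crash = AUTOPILOT_MILES_PER_CRASH_Q3_2025 / CLAIMED_SAFETY_MULTIPLE

# Express both as crashes per million miles, which makes comparisons easier.
autopilot_rate = 1e6 / AUTOPILOT_MILES_PER_CRASH_Q3_2025
implied_human_rate = 1e6 / implied_human_miles_per_crash

print(f"Implied human miles between crashes: {implied_human_miles_per_crash:,.0f}")
print(f"Autopilot crashes per million miles: {autopilot_rate:.2f}")
print(f"Implied human crashes per million miles: {implied_human_rate:.2f}")
```

On those assumptions, the arithmetic puts the implied human baseline at roughly 700,000 miles between crashes, which is the gap Tesla’s marketing leans on.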

The troubling trend inside those “safer” statistics

Look closer, though, and the story becomes more complicated. Independent scrutiny of Tesla’s quarterly disclosures has flagged that the distance between Autopilot crashes has been shrinking over time, even as the company continues to claim that the system is getting better. One detailed review describes “a promising headline and a troubling trend,” noting that while Tesla insists Autopilot keeps improving, the underlying data show more crashes per million miles than in earlier periods, including in a comparison with Q1 2024. That critique argues that the company’s messaging glosses over a real deterioration in crash rates even as it touts Autopilot as a steadily advancing technology.

Another analysis of the same figures raises questions about how representative they are of everyday driving. It points out that the driver mix and fleet age have shifted, and that Tesla stopped reporting some of the data that would make apples-to-apples comparisons easier. That review of Autopilot safety data notes that earlier reports suggested a larger gap between Autopilot and human crash rates than the most recent numbers, which show the system’s advantage narrowing. In other words, Autopilot may still be safer than humans on average, but the margin is not as comfortable as Tesla’s marketing implies, and it appears to be moving in the wrong direction.
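
To see why a shrinking miles-between-crashes figure reads as a deteriorating trend, here is a minimal sketch that converts the two quarterly figures cited above into crashes per million miles. The Q3 2024 value is approximated as 7.0 million miles because the disclosure is only described as “more than 7 million,” so the output illustrates the direction of the change rather than exact values.

```python
# Minimal sketch of the trend the critiques describe: converting Tesla's
# "miles between Autopilot crashes" disclosures into crashes per million
# miles. The Q3 2024 entry is an approximation; the point is the direction
# of the change, not the precise numbers.

quarterly_miles_between_crashes = {
    "Q3 2024": 7.0e6,   # approximation of "more than 7 million miles"
    "Q3 2025": 6.36e6,  # figure cited in the Q3 2025 Vehicle Safety Report
}

for quarter, miles in quarterly_miles_between_crashes.items():
    crashes_per_million = 1e6 / miles
    print(f"{quarter}: {crashes_per_million:.3f} crashes per million miles")

# Fewer miles between crashes means a higher crash rate: roughly 0.143 per
# million miles in Q3 2024 versus 0.157 in Q3 2025 under these assumptions.
```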

Federal scrutiny and the limits of “driver assistance”

Regulators have taken notice of that gap between promise and performance. The National Highway Traffic Safety Administration, the federal agency that oversees vehicle safety, has repeatedly examined Tesla’s driver assistance systems and their real-world behavior. The agency’s own investigations, detailed on the NHTSA site, have focused on how Autopilot and related features interact with human drivers, especially in situations where the software expects the person behind the wheel to stay alert and ready to intervene. That structure, where the car handles routine tasks but the human is still legally responsible, is precisely where human error can creep back in, even when the technology is working as designed.

More recently, NHTSA has escalated its attention to Tesla’s more advanced driver assistance package, branded as Full Self-Driving. In a major step, the agency opened a new investigation into nearly 2.9 million vehicles equipped with that software, citing behaviors that appear to occur most frequently at intersections. The notice explains that while the concerning maneuvers are most common in those settings, the probe will encompass a broader range of driving scenarios and has been consolidated with the earlier Leon case that examined similar issues. That expanded review of the system’s behavior underscores a key point: even if Autopilot reduces average crash rates, regulators are not willing to ignore patterns of failure that could have been anticipated and designed out.

When Autopilot fails, juries and critics push back

The courtroom consequences of those failures are starting to catch up with Tesla. In one landmark case in Miami, a jury found the company liable after a fatal Autopilot crash, a verdict that translated into more than $240 million in total damages. The panel concluded that Elon Musk’s car company bore responsibility for the way its driver assistance system behaved in the moments leading up to the collision, and for how it was marketed to the public. That decision, described in detail in coverage of the $240 million award, signals that juries are increasingly willing to treat Autopilot failures not as unavoidable accidents but as the result of design and communication choices made by Tesla.

Critics outside the courtroom have been just as blunt. A museum exhibit on technology missteps singles out Tesla’s driver assistance system as a cautionary tale, citing an investigation by the National Highway Traffic Safety Administration that concluded the system’s weak driver engagement led to foreseeable driver misuse and avoidable crashes. That assessment, highlighted in the exhibit, argues that Tesla did not do enough to ensure that drivers understood Autopilot’s limits or stayed sufficiently attentive while it was active. In this view, the problem is not only that humans make mistakes, but that the system’s design and branding invite exactly the kind of overconfidence that turns those mistakes into tragedies.

The psychology of “almost self-driving” and the real risk balance

Beyond the numbers and investigations, there is a psychological dimension that helps explain why Autopilot crashes loom so large in the public imagination. A widely shared video essay, “Autopilot vs Humanpilot: A Dark Reality,” imagines the news reports and interviews that would follow if a fully autonomous system killed people at the same rate humans currently do, and concludes that the outrage would be overwhelming. The narrator argues that society holds machines to a higher standard than people, even when the human baseline is far deadlier, and that this double standard shapes how we react to every high-profile Autopilot failure. That argument, laid out in the commentary, captures the emotional gap between statistical safety and visceral fear.

At the same time, Tesla’s own messaging has sometimes blurred the line between assistance and autonomy in ways that feed that tension. Promotional material and software names like Full Self-Driving can leave drivers with the impression that the car is more capable than it really is, even as the fine print insists that they must stay fully engaged. A detailed breakdown of Tesla’s Q3 2025 Vehicle Safety Report metrics, framed under the headline “Tesla Drops Stunning Autopilot Safety Data,” notes that Tesla says Autopilot is safer “by a lot,” and uses that claim to argue that the company is winning the safety race. Yet the same analysis also acknowledges that the company’s own safety figures are not immune to scrutiny, especially when they are presented without context about changing driver behavior or the conditions under which Autopilot is typically used.
