Image Credit: JirkaBulrush - CC BY-SA 4.0/Wiki Commons

Regulators are intensifying their scrutiny of driverless technology, and the emerging picture is more complicated than the industry’s glossy promises. Instead of a straight line toward safer roads, the data now suggest a patchwork of impressive gains, troubling blind spots, and unanswered questions about how these systems behave in the real world. I see a technology that can outperform humans in some situations yet still fails in ways that are difficult to predict, explain, or regulate.

As federal investigators dig into high-profile crashes and software flaws, they are testing the core claim that automated systems will dramatically cut deaths and injuries. The early verdict is mixed: driverless cars may reduce some kinds of collisions, but they also introduce new risks that current rules, and current drivers, are not fully prepared to manage.

Regulators move from hype to hard questions

The most striking shift in the driverless story is how quickly regulators have moved from cheerleading to cross-examination. Federal auto safety officials are no longer treating automation as an unqualified public good, but as a complex technology that must prove it can handle messy, human streets. That pivot is visible in the way the National Highway Traffic Safety Administration (NHTSA) and its sister agencies are now probing not just individual crashes but the underlying design choices that shape how these systems behave.

Earlier optimism inside government that automation could eventually deliver “zero traffic fatalities” has been tempered by research that calls that outcome “unlikely,” including a RAND analysis that warned against assuming perfect performance from autonomous vehicles. Policy experts now emphasize that while NHTSA and other regulators still see potential to reduce road injuries, they are also grappling with new safety and policy challenges that come with handing more control to software. A recent review of the evolving safety and policy landscape around self-driving cars, for example, underscores how the agency’s latest reporting has shifted from pure enthusiasm to a more cautious, data-driven stance.

Tesla’s sprawling probes show the stakes

No company illustrates the new scrutiny better than Tesla Inc, whose aggressive rollout of its Full Self-Driving software has become a test case for how regulators respond when marketing outpaces safety evidence. Federal auto safety regulators have opened yet another investigation into the company’s Full Self-Driving system, pressing Tesla on whether over-the-air updates can truly fix safety defects that appear on public roads. The question regulators are asking is blunt: if the system is supposed to handle the driving task, why is it still failing in ways that put people at risk?

One new federal investigation now covers 2.9 million vehicles, essentially all Teslas equipped with the company’s advanced driver-assistance features, reflecting concern that the problems are systemic rather than isolated. Another probe is focused on Tesla’s app-based feature that lets owners move their cars remotely, with federal investigators examining crashes in which the Tesla appeared to drive itself into obstacles while the driver stood outside. In a separate line of inquiry, NHTSA is reviewing at least 58 incidents tied to Tesla’s Full Self-Driving technology, including crashes and near misses and one case in which a pedestrian was killed, underscoring how software decisions can have life-or-death consequences.

Red lights, reporting gaps, and the limits of “Full Self-Driving”

As the probes deepen, regulators are zeroing in on specific failure modes that cut against the idea that automation is inherently safer. One area of concern is how Tesla’s Full Self-Driving handles basic traffic controls like red lights and stop signs, which human drivers are expected to obey without fail. When a system marketed as “Full Self-Driving” appears to run red lights or misjudge intersections, it raises questions about whether the technology is being deployed before it can reliably handle the fundamentals of urban driving.

Reports to NHTSA from media outlets, vehicle owners, and other sources have documented at least 44 cases in which Tesla’s Full Self-Driving allegedly failed to stop for traffic signals or otherwise behaved unpredictably at intersections. Federal investigators are also examining whether Tesla is reporting crashes promptly as required, a basic obligation that becomes more important as vehicles take on more of the driving task. In parallel, a separate review is asking whether the software at the heart of these systems actually works as intended. That question has become central enough that California lawmakers have moved to hold driverless car companies accountable for traffic violations, with a new law poised to treat the automated system, not just the human, as responsible when a car breaks the rules.

Waymo’s school bus problem shows automation’s blind spots

Tesla is not the only company learning that real-world driving can expose blind spots that lab testing misses. Waymo, which has long pitched its robotaxis as a more cautious alternative to human drivers, is now facing its own federal scrutiny after its vehicles were observed passing stopped school buses. For a technology that is supposed to excel at following rules and protecting vulnerable road users, failing to recognize or properly respond to a school bus with its stop sign deployed is a glaring red flag.

NHTSA opened an investigation after a media report showed Waymo vehicles passing stopped school buses in Austin and other cities while the crossing control arm was deployed. In response, Waymo is preparing a software recall to change how its cars interpret and react to school bus signals. Reporting by Robert Hart notes that posts from local residents and video evidence suggest the problem persisted over time, raising questions about how quickly companies detect and correct dangerous behavior in their fleets. For parents watching a driverless car roll past a bus full of children, the promise of safer streets can feel very far away.

Cruise’s credibility crisis and the cost of bad data

Even when the technology itself works as designed, the safety case for driverless cars depends heavily on accurate reporting of what happens when it does not. That is why the recent criminal case involving Cruise LLC has rattled confidence in the industry’s transparency. If companies are not fully candid with regulators about crashes and near misses, it becomes nearly impossible to judge whether their systems are truly safer than human drivers.

Federal prosecutors say Cruise LLC, an autonomous vehicle company based in San Francisco, admitted to submitting a false report to influence a federal investigation into a crash involving one of its autonomous vehicles. The company has agreed to resolve a criminal charge and pay penalties, a rare step that underscores how seriously authorities now treat misrepresentations about automated driving performance. For regulators trying to build a statistical picture of how safe these systems really are, distorted or incomplete data from a major player like Cruise undermines the entire enterprise.

Are robotaxis safer, or just differently dangerous?

Despite the mounting probes, there is evidence that driverless cars can outperform humans in some conditions, which is part of what makes the safety debate so fraught. In controlled environments and well-mapped urban cores, automated systems can avoid common human errors like distraction, fatigue, and drunk driving. That has led some researchers and companies to argue that, on balance, robotaxis already look safer than the average human behind the wheel.

One analysis of crash data from Cruise’s early operations found that its vehicles were involved in fewer collisions per mile than a typical human motorist, suggesting that Cruise may already be safer in routine driving. A separate large-scale accident study concluded that driverless cars are mostly safer than humans in standard conditions but worse when performing complex maneuvers like turns; one of its key findings was that automated systems struggle more at intersections and in poor conditions. Taken together, these results point to a nuanced reality: robotaxis may reduce some categories of crashes while introducing new, concentrated risks in specific scenarios that humans handle better.

Why “one-third fewer crashes” is not the revolution we were sold

From the earliest days of the self-driving push, advocates have promised a near-eradication of road deaths once computers take over. The emerging research paints a more modest picture. Instead of eliminating most crashes, current designs appear likely to prevent only a fraction, especially if they are tuned to drive like people rather than like ultra-cautious machines. That gap between marketing and math is at the heart of the current backlash.

According to a study cited by safety advocates, self-driving cars will only prevent about one-third of all vehicle crashes if they drive like people, a finding that has been used to argue that self-driving alone will not solve road carnage. A similar conclusion appears in a review of research by the Insurance Institute for Highway Safety (IIHS), which found that driverless cars would only prevent a third of accidents, and that many crashes result from decisions about speed and risk that automation might replicate rather than avoid. As one summary put it, however the IIHS study is interpreted, a technology that continues to mimic human-style driving will leave a large share of collisions untouched.

Phantom braking, mystery jams, and other AI quirks

Beyond headline crashes, some of the most unsettling behaviors in automated systems are the ones that engineers themselves struggle to explain. Phantom braking, in which a car suddenly slows or stops for no apparent reason, is a prime example. For a human driver following behind, an unexpected hard brake from a driverless car can trigger a rear-end collision, even if the automated system technically “avoids” hitting anything in front of it.

In her latest research, George Mason University Professor Missy Cummings highlights phantom braking as a behavior that many drivers may find concerning, noting that it can lead directly to rear-end collisions. Other researchers have documented mysterious traffic jams that seem to form around self-driving cars, with experts initially attributing the slowdowns to human drivers following too closely, only to later consider that the automated vehicles’ own micro-adjustments might be to blame. As one analysis of AI risks put it, experts still cannot fully explain why some of these events occur more often around self-driving cars than around cars driven by people, which makes it harder to design rules and expectations around them.

Lawmakers race to catch up with the technology

As these technical quirks and crash patterns come into focus, lawmakers are scrambling to update traffic laws that were written for a world in which a human was always in charge. The core challenge is assigning responsibility when a vehicle is doing most of the driving but a person is still sitting in the front seat. If a driverless car runs a red light or hits a pedestrian, should the ticket go to the owner, the manufacturer, or the software itself?

California has become an early test bed for this new legal thinking, with a law set to take effect that will hold driverless car companies accountable for traffic violations committed by their vehicles, a shift that reflects growing skepticism about leaving all the liability with individual drivers. That skepticism is not limited to the United States. In the United Kingdom, reporting has revealed that Elon Musk has engaged in a secret push to persuade officials to allow driverless Teslas on British roads, even as regulators weigh evidence that, although concerns remain over the safety of driverless cars, they might be safer than human-driven alternatives. The result is a patchwork of rules and pilot programs that vary widely by jurisdiction, leaving both companies and consumers uncertain about what is allowed where.

The safety promise is still alive, but no longer unquestioned

For all the setbacks, I do not see the safety promise of driverless cars as dead. The technology has already shown it can reduce certain types of crashes, particularly those tied to human impairment or distraction, and regulators still talk about automation as a tool to reduce road injuries and fatalities. What has changed is the level of skepticism about sweeping claims and the willingness of authorities to intervene when systems fall short.

Earlier optimism from NHTSA officials that driverless cars could eventually lead to zero traffic fatalities has been tempered by the reality that, as the agency’s own policy discussions now stress, it is easy to overpromise what automation can deliver in messy real-world conditions. The probes into Tesla’s Full Self-Driving technology after dozens of incidents, the Waymo school bus investigation, and the criminal case against Cruise LLC in San Francisco all point to the same conclusion: driverless cars are not yet the flawless guardians of safety they were sold as, and regulators are no longer willing to take that promise on faith.
