Waymo has quietly shifted the way its driverless cars behave, dialing back some of the ultra-cautious habits that once defined its robotaxis in the name of smoother traffic flow. The company is betting that acting more like human drivers will make its service more usable, even if that means accepting maneuvers that feel less conservative than before.
That tradeoff goes to the heart of the self-driving promise: replacing human error with machine precision. When a company that once prided itself on extreme prudence starts loosening the reins, it raises a hard question for regulators, riders, and everyone sharing the road with these vehicles about what “safe enough” really means.
Waymo’s quiet pivot away from ultra-caution
Waymo built its reputation on robotaxis that erred on the side of hesitation, sometimes to a fault, stopping short of intersections, creeping gingerly around obstacles, and yielding even when they technically had the right of way. That behavior frustrated some riders and other drivers, but it also underlined the company's core pitch that its software would be more careful than any human behind the wheel. Now, in a December interview with the Wall Street Journal, Ludwick, a leader inside the company, acknowledged that Waymo has reprogrammed its vehicles to avoid the disruptions caused by that earlier, hyper-cautious style, a shift that signals a new willingness to trade some margin of conservatism for a more assertive presence in traffic, as described in detail in Waymo's own explanation of the changes.
That kind of recalibration might sound technical, but it is a profound policy choice about how much risk society is willing to accept from automated systems. When Ludwick talks about avoiding disruptions, he is really talking about a new balance between the safety envelope that engineers can encode and the impatience of city streets where a car that waits too long at a four-way stop can trigger honks, dangerous passing, or even rear-end collisions. By loosening some of those constraints, Waymo is implicitly conceding that its earlier settings, while safer in isolation, may have created different hazards in the real world, and that the company now believes a slightly bolder robotaxi is the lesser of two evils.
From “robotic” to “human” on San Francisco streets
Passengers in San Francisco have already noticed that Waymo's cars feel different, and not always in a reassuring way. Where the vehicles once behaved with almost exaggerated politeness, riders now describe lane changes that cut in more decisively, merges that rely on other drivers yielding, and turns that squeeze through gaps a cautious human might decline. In a widely shared clip, passengers described how Waymo's self-driving cars are suddenly acting a lot more human, a phrase that in this context covers everything from smoother acceleration to moments that feel uncomfortably close to the aggressive habits of everyday commuters, as captured in a video highlighting how Waymo's behavior has shifted on the streets of San Francisco.
As I see it, that evolution reflects a tension that has always haunted autonomous driving: people say they want machines that are safer than humans, but they also want them to blend into the flow of traffic that humans created. When a robotaxi hesitates at a yellow light or refuses to inch into a crowded intersection, it can feel “wrong” even if it is technically correct. By making its cars act more like the drivers around them, Waymo is trying to reduce that uncanny valley. Yet the more its vehicles mimic human assertiveness, the harder it becomes to argue that they are fundamentally different from the fallible behavior they were meant to replace.
The safety record Waymo is putting at risk
Before this shift, Waymo could point to a growing body of data suggesting that its automated driving system was significantly safer than human drivers. In one analysis of rider-only operations, the any-injury-reported crashed-vehicle rate was measured at 0.6 incidents per million miles, a figure that compared favorably with human benchmarks and suggested that the company's conservative driving style was paying off in fewer people getting hurt. Those results, aggregated across all locations, pointed to that 0.6 rate as a key indicator that the automated system was outperforming human drivers on the metric that matters most.
That track record is not just a bragging point; it is the social license that allows Waymo to operate robotaxis in dense cities at all. When a company can say its cars hurt fewer people per mile than the average human, regulators and the public have a concrete reason to tolerate the inevitable glitches and edge cases. By reprogramming its vehicles to be less cautious, Waymo is effectively running a live experiment on whether it can keep that 0.6 incidents per million miles figure from creeping upward. If the rate rises, even modestly, the company will have to explain why a smoother ride and fewer traffic disruptions were worth any additional injuries.
Why Waymo says it had to loosen up
From Waymo's perspective, the old behavior was becoming untenable as its fleet scaled. A robotaxi that stops short at every ambiguity might be fine when there are only a handful of vehicles on the road, but once thousands of them share the same city, their collective caution can gum up traffic and provoke risky reactions from human drivers. In his December conversation with the Wall Street Journal, Ludwick framed the reprogramming as a necessary response to those real-world disruptions, arguing that the company had to tune its software so that its cars would not become rolling obstacles every time they encountered an impatient tailgater or a confusing construction zone, a rationale that aligns with the internal logic laid out in Waymo's own account of why it changed course.
I understand that argument, and there is some truth to it. A vehicle that is too timid can create its own safety problems, especially in cities where drivers expect a certain level of assertiveness at merges, unprotected left turns, and busy crosswalks. Yet there is a difference between calibrating for realism and normalizing the kind of inching, blocking, and opportunistic moves that make human traffic so stressful in the first place. By framing the change primarily as a fix for “disruptions,” Waymo risks downplaying the fact that it is also a shift in its risk tolerance, one that deserves scrutiny from regulators and the communities where these cars operate.
Riders caught between comfort and concern
For the people actually sitting in the back seat, the new behavior is a mixed blessing. Some riders in San Francisco have welcomed the more fluid driving, saying that the cars now feel less like overcautious robots and more like a competent human who knows how to navigate city traffic without constantly second-guessing every move. That can make the service feel more natural, especially for commuters who are used to the rhythms of crowded corridors like Market Street or the approaches to the Bay Bridge, where hesitation can mean missing light cycles or getting boxed out of lanes.
Others, though, have described moments when the robotaxi's newfound assertiveness crosses into discomfort, such as squeezing into tight gaps in stop-and-go traffic or taking turns that feel rushed when pedestrians are nearby. When San Francisco passengers talk about Waymo's self-driving cars acting a lot more human, they are not always praising the change, as the video of riders reacting to these shifts makes clear. I find that tension revealing: people want the convenience of a car that keeps up with traffic, but they also expect a machine to be better than the frazzled driver they see in the next lane.
The data gap regulators cannot ignore
One of the most troubling aspects of this shift is how little independent data exists to evaluate its impact in real time. The 0.6 incidents per million miles figure comes from a structured comparison of rider-only crash data to human benchmarks, but that analysis reflects a particular period and a particular configuration of Waymo's software. Once the company reprograms its vehicles, the old numbers become a baseline, not a guarantee. Regulators and city officials need updated, disaggregated data that shows whether the new, less conservative behavior keeps the any-injury-reported crashed-vehicle rate at or below that 0.6 level, or whether it starts to drift closer to human norms.
Without that transparency, the public is effectively being asked to trust that Waymo's internal simulations and safety cases justify the change. I do not think that is enough when the stakes involve real people walking, biking, and driving alongside these cars. If Waymo is confident that its new tuning preserves or improves on the earlier safety record, it should be willing to publish updated results, aggregated across all locations, so that outside experts can verify that the automated driving system still outperforms human drivers by a meaningful margin. Anything less leaves a gap between the company's assurances and the evidence needed to support them.
What “less safe” really means on the road
Describing Waymo’s reprogramming as making its robotaxis “less safe” does not necessarily mean that the cars are suddenly reckless or that they now crash more often than human drivers. Instead, it reflects a shift in the safety philosophy encoded in the software. Previously, the system was designed to avoid a wide range of low-probability risks, even at the cost of awkward interactions and occasional traffic snarls. Now, by design, it is more willing to accept certain close calls, tighter gaps, and ambiguous situations in order to keep traffic flowing and reduce the social friction that came with its earlier, more robotic style.
On the road, that can look like a car that edges into a crosswalk to signal its intent to turn, or one that accelerates through a yellow light rather than braking hard and risking a rear-end collision. Each of those choices can be defended individually, and in some scenarios they may even reduce specific types of crashes. But taken together, they represent a narrowing of the safety buffer that once separated Waymo’s behavior from the human drivers around it. When a company that once touted its extreme caution starts to normalize these tradeoffs, it is fair to say that its vehicles are now operating closer to the boundary of what most people would consider acceptably safe.
The broader stakes for autonomous driving
Waymo's decision will reverberate far beyond its own fleet. Other companies building automated driving systems are watching closely to see whether a more assertive style leads to better rider satisfaction, fewer traffic complaints, and, crucially, whether regulators push back. If Waymo can show that its any-injury-reported crashed-vehicle rate stays at 0.6 incidents per million miles or lower even after the reprogramming, it will strengthen the case that autonomous vehicles can be both practical and safer than humans. If the numbers worsen, it could fuel calls for stricter oversight and more conservative default settings across the industry.
For cities like San Francisco that have become testbeds for robotaxis, the stakes are especially high. These places are not just customers of a new mobility service; they are partners in a live experiment about how much autonomy to grant machines in public space. As I weigh the evidence, I keep coming back to a simple principle: if autonomous vehicles are going to share the road with everyone else, they should be held to a higher standard than the average human driver, not a similar one. Waymo's reprogramming may make its cars feel more at home in messy urban traffic, but unless the company can prove that its safety metrics remain clearly superior, the shift looks less like progress and more like a quiet step back from the promise that made robotaxis compelling in the first place.