Morning Overview

Self-driving cars may fail for one simple reason: they don’t get people

Autonomous vehicles keep crashing into a problem that no software update can easily fix: the messy, unspoken social rules that govern how humans share the road. Regulators have pulled permits, federal agencies have issued consent orders, and researchers have published paper after paper warning that driving is less about physics and more about reading people. The pattern suggests that until self-driving systems can interpret a wave, a hesitation, or a jaywalker’s intent, the technology will struggle most in the places it is needed most.

When Machines Hide What Went Wrong

The clearest sign that autonomous vehicle programs misunderstand their relationship with the public may be how some companies handle the aftermath of crashes. The California Department of Motor Vehicles suspended Cruise LLC’s driverless permits, citing both public safety risk and misrepresentation of safety-related information. That decision followed an October 2, 2023, incident in which a Cruise vehicle struck a pedestrian who had been knocked into its path by another car and then dragged her as it attempted to pull over. The company did not fully disclose what happened after the initial impact, a gap that regulators treated as a serious breach of trust rather than a minor paperwork error. The episode underscored that the legitimacy of autonomous driving programs depends as much on candor after something goes wrong as on engineering prowess before it does.

Federal authorities reached the same conclusion independently. The National Highway Traffic Safety Administration announced a consent order with Cruise after determining the company submitted incomplete Standing General Order crash reports that omitted post-crash details from the same pedestrian incident. The missing information was not trivial. It involved the sequence of events after the vehicle struck a person, exactly the kind of detail that first responders, insurers, and regulators need to evaluate whether a system behaved safely. When a company building cars that are supposed to protect people fails to report how a person was harmed, the gap is not just technical. It is a failure to account for the human consequences of automation, and it reinforces public suspicion that crucial facts will only emerge under regulatory pressure.

Automation Complacency and the Uber Precedent

The Cruise episode did not emerge in a vacuum. Five years earlier, a vehicle controlled by Uber ATG’s developmental automated driving system struck and killed a pedestrian in Tempe, Arizona. The National Transportation Safety Board investigated the 2018 crash and documented a troubling set of human-factors failures: operator monitoring breakdowns, automation complacency, and organizational safety culture shortcomings. The safety operator behind the wheel had been looking at a phone. The system detected the pedestrian seconds before impact but repeatedly changed its classification of her and never correctly anticipated her path. And the company had not built adequate safeguards to prevent exactly this kind of lapse, such as robust driver monitoring or conservative fallback behaviors when the system was uncertain.

What makes the Uber case instructive beyond its tragedy is the way it exposed a feedback loop. The more a system appears to work, the less attention its human minders pay. That dynamic, often described as automation complacency, is well documented in aviation and industrial safety, where overreliance on autopilot or process controls can dull human vigilance. On public roads, however, the stakes involve bystanders who never consented to be part of a test and may have no idea that an experimental system is in control. The NTSB’s findings suggest that the problem was not a single distracted operator but a program-wide failure to treat human unpredictability, both inside and outside the vehicle, as a design constraint rather than an edge case, and to recognize that partial automation can magnify certain human weaknesses instead of compensating for them.

Driving as Social Negotiation

Academic research offers a framework for why these failures keep recurring despite better sensors and faster chips. A review on arXiv, titled “Social Interactions for Autonomous Driving,” frames ordinary traffic as social interaction involving implicit communication, shared norms, intention inference, and interaction-heavy scenes like uncontrolled intersections and school zones. Human drivers constantly read body language, make eye contact with pedestrians, and adjust speed based on subtle cues that no lidar sensor currently captures. The review argues that autonomous systems need evaluation metrics built around social compatibility (how well they fit into human flows and expectations), not just collision avoidance and lane-keeping statistics.
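
To make that distinction concrete, the sketch below contrasts a plain collision count with a hypothetical social-compatibility proxy that measures how often an autonomous vehicle forces nearby humans into hard braking or sudden swerves. The InteractionEvent fields, thresholds, and numbers are illustrative assumptions, not metrics proposed in the review.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InteractionEvent:
    """One encounter between the autonomous vehicle and a nearby human."""
    human_decel: float       # peak braking the human applied, in m/s^2
    human_path_shift: float  # lateral deviation the human made, in meters
    collision: bool

def collision_rate(events: List[InteractionEvent]) -> float:
    """The classic metric: fraction of encounters that ended in contact."""
    return sum(e.collision for e in events) / len(events)

def social_friction(events: List[InteractionEvent],
                    hard_brake: float = 3.0, big_swerve: float = 0.5) -> float:
    """Illustrative social-compatibility proxy: how often the vehicle forced
    a human into hard braking or a large path change, even without contact."""
    forced = sum(e.human_decel > hard_brake or e.human_path_shift > big_swerve
                 for e in events)
    return forced / len(events)

events = [
    InteractionEvent(human_decel=0.8, human_path_shift=0.1, collision=False),
    InteractionEvent(human_decel=4.2, human_path_shift=0.0, collision=False),  # pedestrian stops short
    InteractionEvent(human_decel=1.0, human_path_shift=0.9, collision=False),  # cyclist swerves around
]
print(collision_rate(events))   # 0.0   -- looks flawless
print(social_friction(events))  # ~0.67 -- but two of three encounters were forced
```

A fleet could score perfectly on the first number while performing poorly on the second, which is exactly the gap the review’s social-compatibility framing is meant to expose.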

A separate technical paper, “PORCA: Modeling and Planning for Autonomous Driving among Many Pedestrians,” tackles the same problem from a planning perspective. The PORCA framework models pedestrian intention and local interactions, treating each person on a sidewalk or crosswalk as an agent whose next move is only partially observable. The insight is simple but hard to engineer. Intention is not a fixed data point. It shifts in real time based on what the pedestrian sees the car doing, and the car’s behavior in turn responds to the pedestrian’s latest move. That reciprocal quality means a self-driving vehicle cannot simply predict where a person will be in two seconds; it must also account for how its own acceleration, braking, or lane position changes what that person decides to do next. Designing for this loop pushes autonomous driving away from static prediction and toward continuous negotiation, closer to how humans actually drive.
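
PORCA itself formulates this as a partially observable planning problem solved online; the toy loop below is not the paper’s algorithm, just a minimal sketch of the reciprocal dynamic under assumed numbers. The car keeps a belief over a hidden crossing intention, slows when that belief is uncertain, and the pedestrian’s next visible move depends on what the car just did.

```python
def update_belief(p_cross: float, step_toward_curb: float) -> float:
    """Bayesian-style update: movement toward the curb is more likely
    when the pedestrian actually intends to cross (illustrative numbers)."""
    likelihood_cross = 0.9 if step_toward_curb > 0.3 else 0.3
    likelihood_wait = 1.0 - likelihood_cross
    evidence_cross = likelihood_cross * p_cross
    return evidence_cross / (evidence_cross + likelihood_wait * (1.0 - p_cross))

def choose_speed(p_cross: float, current_speed: float) -> float:
    """Conservative policy: the less certain the car is, the more it slows."""
    if p_cross > 0.7:
        return 0.0                       # yield
    if p_cross > 0.3:
        return min(current_speed, 3.0)   # creep while uncertain
    return current_speed                 # proceed

def pedestrian_step(intends_to_cross: bool, car_speed: float) -> float:
    """The pedestrian's visible motion depends on what the car just did."""
    if not intends_to_cross:
        return 0.0    # stays put
    if car_speed < 4.0:
        return 0.6    # car is yielding: commit to the crossing
    return 0.35       # car is still fast: edge forward tentatively

p_cross, car_speed, intends_to_cross = 0.5, 8.0, True
for t in range(4):
    step = pedestrian_step(intends_to_cross, car_speed)
    p_cross = update_belief(p_cross, step)
    car_speed = choose_speed(p_cross, car_speed)
    print(f"t={t}  P(cross)={p_cross:.2f}  car speed={car_speed:.1f} m/s")
```

Even this toy version shows the loop the paper describes: the pedestrian’s tentative step changes the car’s belief, the car’s slowdown changes the pedestrian’s behavior, and neither can be predicted in isolation.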

Why Good Crash Numbers Can Mislead

Some companies have tried to answer safety concerns with data. A preprint by Waymo-affiliated authors compares rider-only crash records over millions of miles to human benchmarks, drawing from NHTSA Standing General Order crash reporting. The numbers in that analysis look favorable for the autonomous fleet, especially for minor collisions and property-damage-only events. But a companion paper from the same research ecosystem warns that such comparisons can create what the authors call a “credibility paradox”: the safer the system appears on average, the more shocking and trust-eroding any rare, high-severity crash becomes. That safety readiness discussion argues that aggregated crash-rate comparisons may obscure the very incidents that matter most for public confidence, and that meaningful readiness requires transparent accounting of edge cases, near misses, and system limitations.
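
A back-of-the-envelope calculation shows why. The numbers below are invented purely to illustrate the arithmetic of aggregate rate comparisons; they are not figures from the Waymo analysis or the NHTSA data.

```python
# Hypothetical fleet statistics, chosen only to illustrate the arithmetic.
av_miles = 7_000_000        # rider-only miles driven
av_crashes = 20             # all reported collisions, mostly minor
av_severe = 1               # a single high-severity event

human_crash_rate = 4.2      # assumed benchmark: crashes per million miles
human_severe_rate = 0.25    # assumed benchmark: severe crashes per million miles

million_miles = av_miles / 1_000_000
av_crash_rate = av_crashes / million_miles    # ~2.9 per million miles
av_severe_rate = av_severe / million_miles    # ~0.14 per million miles

print(f"All crashes:    AV {av_crash_rate:.2f} vs human {human_crash_rate:.2f} per million miles")
print(f"Severe crashes: AV {av_severe_rate:.2f} vs human {human_severe_rate:.2f} per million miles")
# With only one severe event in the sample, the severe-crash estimate is
# dominated by chance: one more such crash would double it overnight, which is
# why a favorable average says little about the rare incidents that shape trust.
```

On the aggregate measure this hypothetical fleet looks clearly better; on the severe measure the sample is far too small to support a confident claim either way, which is the gap the safety-readiness discussion highlights.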

This is where the “don’t get people” problem loops back to policy. NHTSA established its crash-reporting regime for vehicles equipped with advanced driver assistance systems and automated driving systems with a clear rationale: greater transparency and timely notification to regulators. The Standing General Order data, however, comes with significant caveats, including unverified initial reports, inconsistent definitions, and differences in telemetry across manufacturers. When companies cherry-pick favorable windows of data while underreporting messy incidents, the regulatory infrastructure designed to build public confidence instead erodes it. A 2024 Brookings Institution analysis of self-driving policy warns that even as crash rates improve, serious injuries and fatalities remain central public concerns, and that regulators must focus on how these systems behave in complex, mixed-traffic environments rather than relying on aggregate mileage statistics.

Designing for Human Expectations, Not Just Edge Cases

Taken together, the regulatory actions, crash investigations, and research literature point to a common lesson: autonomous vehicles are colliding not just with pedestrians and other cars, but with the social fabric of driving itself. The Cruise permit suspension shows what happens when companies obscure uncomfortable details, undermining the very transparency that NHTSA’s reporting regime is meant to foster. The Uber fatality investigation demonstrates that partial automation can lull human overseers into inattention, exposing a gap between how a system performs in routine conditions and how it behaves when something unexpected happens. And the social-interaction research highlights that much of what we call “defensive driving” is really shared improvisation, governed by norms that machines do not yet reliably read.

Closing that gap will require reframing safety away from narrow technical milestones and toward sustained alignment with human expectations. That means designing automated systems that default to conservative behavior when intent is uncertain, building organizational cultures that treat every serious incident as an opportunity for public learning rather than reputational triage, and developing evaluation methods that capture how well vehicles integrate into the tacit choreography of streets and sidewalks. It also means regulators continuing to insist on complete, timely crash reporting and being willing to halt deployments when companies fall short. Autonomous vehicles may eventually navigate the physical world better than people do, but until they can also navigate the social world of driving, and be candid when they fail, their most stubborn obstacle will not be perception or planning algorithms. It will be trust.

*This article was researched with the help of AI, with human editors creating the final content.